Jessie Daniels, a professor at the City University of New York, is the author of "Cyber Racism: White Supremacy Online and the New Attack on Civil Rights." She is a co-editor of the scholarly blog, Racism Review, and is on Twitter.

Americans have largely viewed rightwing extremism as a "fringe" problem, small enough to be ignored, dismissed, or at most, warily observed. But while rightwing extremism is not new, online sites that host a panoply of extremist rightwing views are growing in popularity.

One of these sites, Stormfront, has grown from 124,000 registered users in 2008 to over 300,000 today. And because the Internet is largely borderless, our homegrown white supremacy is now available to a global audience, with deadly consequences: the Southern Poverty Law Center has linked that site alone to some 100 hate crime murders.

The Department of Homeland Security should treat white supremacy as a terrorist threat to the government, and monitor online hubs and websites that promote racial violence and hate. After all, the Reverend Clementa Pinckney, who was killed in last week’s Charleston church shooting, was a sitting state senator in South Carolina. His death constitutes an attack on a government official, and many strains of white supremacy are specifically and directly aimed at overthrowing the government.

Unfortunately, Homeland Security gutted its program for monitoring domestic terrorism in 2010, after conservatives objected to a leaked report they called "politically charged." It is time to rebuild that program, and to identify and outlaw the kind of online speech that can incite violence and cause real harm. This will take legislative action on Congress's part, because white supremacist rhetoric online does not forfeit its First Amendment protections unless it is joined with targeted threats, harassment or incitement to illegality.

But there is a legal precedent: In 2003, the Supreme Court ruled in Virginia v. Black that burning a cross with the intent to intimidate is not protected speech, because it is meant to terrorize a group of people, not to contribute to democratic debate.

In the same way, online speech that advocates killing people because of their race, religion, gender or sexual orientation should not be protected. In evaluating online speech in Google searches, on websites and on social media, the question becomes: What constitutes a burning cross in the digital era?
