Like just about everyone else, political extremists with the potential for violence have gone online. Everything from diatribes by known terrorists to recruitment videos makes its first appearance on websites, leaving many, both in government and out, wondering how best to tackle the dangers that this material (and the people behind it) poses. A new study by the International Centre For The Study Of Radicalisation And Political Violence (ICSR) has analyzed the issues and released a report suggesting that the best way to deal with the material may be to tackle the offline activities of those who produce it.

The ICSR is a joint project formed by an international coalition of academic institutions. The current report, however, is focused on the British experience, suggesting it was primarily the product of people based at King's College London. Although some of the material in the report is specific to the UK, a fair bit of it probably applies to the experience in other nations as well.

One of the first things that probably applies to most countries is that a lack of a thorough understanding of the Internet heightens fear of what's happening online; without knowing the features and limits of online communications, people have exaggerated concerns. "Awful things are said to happen on extremist websites and in Internet chat rooms, but few are able to identify what exactly it is that causes so much concern," the authors wrote. "As a result, many of the policy proposals that are currently in circulation are either irrelevant or unworkable."

One example the authors provide is that many citizens think the same approaches that target sexual exploitation of children online should work with material associated with violent extremist groups. Not so, the authors conclude: "The comparison with efforts to counter child sexual abuse on the Internet is flawed, because much of the material involved in child sexual abuse is clearly illegal and there are no political constituencies which might be offended if repressive action is taken against it." The typical content of extremist groups contains a mixture of threats and speech that is protected in most liberal democracies due to its political nature. Accordingly, a blanket attack on extremist content is likely to produce a political backlash, since it will inevitably encompass content from mainstream political groups.

Because extremist content encompasses such a broad range of material, the approach commonly taken with exploitative imagery—some sort of content filtering—won't work, according to the authors. (Given the Australian experience, it's somewhat surprising they think this sort of filtering is working, but the authors highlight BT's Cleanfeed project as a success.) Lots of extremist content is dynamic—it takes place on message boards and in chatrooms—and the rest can be easily moved or mirrored.

Blacklists would backfire

More generally, the authors argue that the sort of blacklists required for these systems will inevitably be discovered. When they are, because the topic is so high-profile, the list of banned sites will inevitably receive significant publicity, which is exactly the sort of thing that extremists need. The same thing goes, the report argues, for things like legal efforts to take down extremist content; the material is typically moved overseas, and gets a lot of publicity in the process.

The authors build a persuasive case that, given the mobile nature of the content itself, it makes far more sense to go after the people behind it, since it's harder to move people than content. This, they argue, has a significant legal advantage, too: if the people are truly dangerous and have made threats, then the law typically contains statutes that can be applied without being subject to the legal vagaries that arise with online content.

More generally, they contend that most extremist material is simply a way of preaching to the converted. The actual recruitment and indoctrination activities of most extremist groups take place offline, with online material largely serving a maintenance role. In many cases, the authors argue, individuals wouldn't even know where to find online material if it weren't for their real-world contacts telling them where to look.

While this part of their prescription seems compelling, the remainder of their recommendations vary in quality. The other two recommendations made by the ICSR involve government programs. The first would be to improve media literacy instruction in schools so that students are better equipped to evaluate the credibility of information and its sources online. That would clearly solve a lot of problems beyond extremist material, but rolling out a sufficiently rigorous educational program could take years.

Another solution offered by the report's authors would be for the government to provide funding for an independent group that would dole out money for counter-initiatives. Almost by definition, the authors argue, the government has little or no credibility with the target audience for extremists, and its association with a program can easily kill the intended message. Basically, the independent organization is intended to provide a layer of abstraction that would help the funded groups maintain the trust of the audiences they are trying to reach. The authors also emphasize that the group has to fund a broad range of efforts; funding efforts to counter radical Islamists without dealing with the UK's homegrown racist groups would damage the organization's credibility.

Finally, they promote the formation of an "Internet Users Panel" that would ensure that ISPs take complaints about extremist content seriously. Although this might be preferable to the sort of scattershot, Andrew Cuomo-style censorship that has taken place in the US, the authors' use of the phrase "collective power of Internet users" makes it sound more like a pitch for a Web 2.0 company than a helpful method of limiting the flow of this material.

The ICSR report is interesting in that it's one of the few that has gone beyond describing the problem and has listed potential ways of tackling the spread of extremist content online. In essence, however, it concludes that the same features that make putting other material online appealing—convenience and cost—will always make the Internet an attractive medium for extremists, which explains why its recommendations focus on offline solutions.