Four largest ISPs have agreed to implement a system of blocks similar to that used to keep child abuse material off the web

British internet service providers have been accused of rushing into an ill-thought-out attempt to block political material online, after agreeing with the government on a system of filters for websites espousing extremist views.

The four largest ISPs have independently agreed with the government to implement a system of blocks, similar to that used to keep child abuse material off the net. But civil liberties campaigners expressed fears that the move opened up a risk of political censorship.

Jim Killock, executive director of the Open Rights Group, said: “We need transparency whenever political content is blocked, even when we are talking about websites that espouse extremist views. The government must be clear about what sites they think should be blocked, why they are blocking them and whether there will be redress for site owners who believe that their website has been blocked incorrectly.”

Downing Street’s proposal to the four ISPs – BT, Virgin, Sky and TalkTalk – is for a public reporting process along the lines of that already implemented to protect children from being exploited online. The Child Exploitation and Online Protection Centre (Ceop) lets internet users report child abuse material directly to the police, who can then decide to act immediately if a child is in danger or to pass the information on to the Internet Watch Foundation (IWF), an independent group.

The Metropolitan police’s counter-terrorism internet referral unit (CTIRU) would play the equivalent role of Ceop under the proposal.

A BT spokesperson said: “We have had productive dialogue with government about addressing the issue of extremist content online and we are working through the technical details.”

An ISP industry insider admitted to the Guardian that Ceop buttons supposed to provide a hotline to the service for web users had faded from view on most of the major ISPs’ websites. They said social networks such as Facebook and Twitter were the new focus of attention, given the proportion of both child abuse and extremist material that is sourced there.

The Open Rights Group said that however extremism was reported – whether via a button on ISPs’ websites or a form on a social network – the important issue was what happened afterwards. “The issue is not about reporting extremism but whether reported content is added to secret lists of blacklisted sites,” Killock said.

Another tech company insider said they would be “very wary of any government attempts to blur the line between child protection and extremist content”. Much automatic filtering of child abuse images happens by matching pictures and video to known databases, a concept that does not work when dealing with “extremist” messages.
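The database matching the source describes can be sketched roughly as a fingerprint lookup: known material is reduced to a digest, and new files are checked against the list. The Python below is a toy illustration only; real systems use perceptual hashes (such as Microsoft’s PhotoDNA) that survive resizing and re-encoding, whereas the cryptographic hash here, a stand-in chosen for simplicity, matches exact bytes.

```python
# Toy sketch of blocklist matching by fingerprint. SHA-256 stands in
# for the perceptual hashing real systems use (an assumption for
# illustration); it only matches byte-identical files.
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a hex digest acting as the file's fingerprint."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical blocklist built from previously identified files.
known_bad = {fingerprint(b"previously-identified image bytes")}

def is_blocked(upload: bytes) -> bool:
    """Check an upload against the blocklist of known fingerprints."""
    return fingerprint(upload) in known_bad

print(is_blocked(b"previously-identified image bytes"))  # True: exact match
print(is_blocked(b"a sarcastic political post"))         # False: hashing sees bytes, not meaning
```

The second lookup illustrates the insider’s point: a hash says nothing about what a piece of text means, so the approach cannot be transferred from known images to “extremist” messages.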

“Anyone who’s ever used a search engine will understand: computers do not understand sarcasm, they don’t understand irony,” the source said. “You can’t automate understanding human context around words.”

Downing Street said it would continue to press internet companies to be more proactive in fighting the use of web technologies by extremist groups.