BRUSSELS, BELGIUM—Over the last two years, CleanIT, the European Commission-funded project group that aims to “reduce terrorist use of the Internet,” has met on a regular basis to come up with a set of voluntary general principles to achieve that goal. Earlier this month, the group published its “final report,” in which it called for a “flag this as terrorism” content button in Web browsers.

On Wednesday, I moderated the final symposium—my travel and lodging were paid for out of the project’s €326,000 ($442,000) budget. An assembled crowd (including a few Ars readers) of around 50 people came together at a European Parliament-adjacent hotel in downtown Brussels to hear various speakers from government, academia, and civil society across Europe.

After hours of talks, it remained unclear whether or how CleanIT would move forward now that its two-year funding had ended. Most speakers seemed primarily interested in thanking the Dutch officials leading the effort and in presenting more questions than answers. Whether the project’s voluntary “best practices” would ever actually take effect was anyone’s guess. The best practices include a (possibly browser-based) “user-friendly flagging system” and an “effective and efficient notice and take action procedure.”

Despite all the talk of “Internet companies [that] state clearly in their terms and conditions that they will not tolerate terrorist use of the Internet on their platforms, and how they define terrorism,” as stated in the final conference document (PDF), there were few representatives from the private sector. Based on a show of hands, only six people belonged to that category, with no participants from a search engine or a social network. There was one person who worked for a private, academic ISP.

At the beginning of the talk I reminded the crowd of my previous Ars pieces from 2012, in which I referred to the project as “quixotic” and “ridiculous.” One person even heckled me: “Why are you wasting our time?” I responded that I had been invited by CleanIT officials, and I was aiming to present a provocative and skeptical view. That view, however, did not seem to be shared by many in the room.

Respecting the rule of law?

Klaasen, the Dutch counter-terrorism official in charge of the project, lauded its participatory scope, noting the project had “110 different participants from 15 countries in six meetings” over the course of two years, including a 37 percent participation rate from the commercial sector. However, he had a striking explanation as to why they didn’t show up at the group’s only public, on-the-record meeting.

“We invited them of course,” he explained at the end of the conference. “They didn't show up today in the same amount as the last meetings,” noting that I, as one of the few reporters in the room, was “disappointed” in their absence.

He said that as profit-seeking organizations, most small companies didn’t have the funds to send representatives to Brussels, while larger companies had a different issue. (Apparently CleanIT has no qualms about spending more than $1,000 to bring an American journalist from California to the meeting, but it can’t support the attendance of Europe-based tech companies.)

“A larger company is more afraid of its public image,” he said. “It's not likely that a company wants to associate [its] name with counter-terrorism. That's the main reason why we created an environment of trust. That's why we couldn't be fully transparent. We operated with the Chatham House Rules. Within these conditions they were much more cooperative than they were now. This is a public event. You are free to write anything you like about this, but I can understand that companies have more problems and are hesitating in this kind of public setting.”

In addition to Klaasen and the other Dutch officials behind the project, seven panelists attended from government and civil society from the Netherlands, Switzerland, the United Kingdom, Germany, and Ireland. The Dutch officials included Theo Bot, the Dutch deputy national coordinator for Counter-Terrorism and Security, and a pre-recorded video from Gilles de Kerchove, the EU Counter-Terrorism Coordinator.

Nearly all of the panelists repeated trite slogans and accepted truths they seemed to be unable to articulate in practice. When I asked the panel how, in their understanding of the voluntary guidelines articulated in the final document, European ISPs would deal with an actual terrorist website presently hosted in Hungary (an EU member state), no one had a real answer.

Again and again, participants seemed to present an untenable position: they would respect existing European human rights and freedom of expression laws but at the same time would ensure that “illegal” terrorist websites would disappear from the European Internet.

“Our position is that states have the obligation to protect their citizens from crime, and terrorism is a crime,” said panelist Anna Tsitsina, the adviser to the terrorism division at the Council of Europe.

“States must [remove such content] while fully respecting human rights and the rule of law. That means that no limitations on legitimate political speech—the same standards that apply offline, apply online.”

Panelist Michael Whine, the director of Government and International Affairs at a UK-based anti-Semitism prevention group, went so far as to say that law enforcement agencies would prefer to leave terrorist websites active “for intelligence reasons.”

"It won’t lead to an acceptable result"

The project has certainly had criticism from within the European Parliament—most notably from a prominent member of the informal group of members who work on digital policy issues, Jan Philipp Albrecht, a 30-year-old Member of the European Parliament (MEP) from Germany.

One of his main criticisms of the project was that it seems to circumvent the entire legislative process while outsourcing the enforcement of “illegal content” removal to the private sector, rather than providing the explicit backing of new laws.

“People are trying to introduce [policies] not legislatively, but voluntarily, so the providers can introduce ill-fitting measures without any legislative framework, and that's highly disputed,” he told Ars. “It's in direct conflict with the human rights in Europe and many rule of law principles. If you want to interfere with fundamental rights with those measures, you need to have a legal procedure for that.”

He added that even as a voluntary measure, he was 100 percent certain the premise of CleanIT violated the Charter of Fundamental Rights of the European Union, Article 11, which states, “Everyone has the right to freedom of expression. This right shall include freedom to hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers.”

Beyond that, Article 52, which does allow free speech limits to be legally imposed, notes those restrictions must be “provided for by law and respect the essence of those rights and freedoms.” Again, CleanIT doesn't want to create new laws, but rather voluntary guidelines, which would be trivial for an unscrupulous ISP or Web host to circumvent.

“[CleanIT] cannot just ask private people to do their work,” he added. “It's a move to be seen through the last decade of privatization of law enforcement.”

Albrecht also noted the European Commission—the EU’s executive body—was “spending money without something helpful, and it won’t lead to an acceptable result.”

Now that the project's funding has run out, will it continue? Will Europeans start seeing terrorism-warning buttons built directly into their browsers? It still seems highly unlikely.