Facebook, Microsoft, Twitter, and YouTube have announced that they will be working together to curb the dissemination of terrorist material online. The Web giants will create a shared industry database of hashes—digital fingerprints that can identify a specific file—for violent terrorist imagery and terrorist recruitment materials that have previously been removed from their platforms.

According to a statement the four companies have jointly released, the hope is that "this collaboration will lead to greater efficiency as we continue to enforce our policies to help curb the pressing global issue of terrorist content online."

Once a hash has been added to the database, "other participating companies can then use those hashes to identify such content on their services, review against their respective policies and definitions, and remove matching content as appropriate." Matching content will not be removed automatically, the statement says, and other online services will be encouraged to join the scheme.

Each participating company will "independently determine what image and video hashes to contribute to the shared database," but no details of how the scheme will work in practice have been provided. A likely model is Microsoft's PhotoDNA, which is used to combat online images of child sex abuse. Microsoft's system "compiles a digital signature of images, which can be matched against a database of known child pornography images."
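The workflow the companies describe can be sketched in a few lines. This is a hypothetical illustration, not the actual system: it uses exact SHA-256 digests, whereas PhotoDNA-style systems compute perceptual signatures that survive resizing and re-encoding. The function names and sample data are invented for the example.

```python
import hashlib

# Hypothetical shared database of hashes of previously removed content.
shared_database = set()

def fingerprint(data: bytes) -> str:
    """Compute a digital fingerprint of a file's contents.
    Real systems like PhotoDNA use perceptual hashes; an exact
    cryptographic digest stands in for one here."""
    return hashlib.sha256(data).hexdigest()

def contribute(data: bytes) -> None:
    """A participating company adds the hash of content it has removed."""
    shared_database.add(fingerprint(data))

def flag_for_review(data: bytes) -> bool:
    """Check an upload against the shared database.
    A match only flags the content for review against the service's
    own policies and definitions; removal is not automatic."""
    return fingerprint(data) in shared_database

contribute(b"previously removed recruitment video")
print(flag_for_review(b"previously removed recruitment video"))  # True
print(flag_for_review(b"an unrelated upload"))                   # False
```

Note that a match here only answers "have we seen this exact file before?"; deciding what to do with the match remains a per-company policy decision, as the statement emphasises.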

However, there is an important difference between the two situations. Whereas child sex abuse is unambiguously illegal, and relatively clear-cut in its definition, it is much harder to define what exactly constitutes "violent terrorist imagery or terrorist recruitment videos or images." As a result, there is a risk that the new database will lead to censorship, with controversial but legal material removed as a result of an overcautious approach.

The four companies claim to be aware that this is an issue, and say in their statement that "throughout this collaboration, we are committed to protecting our users’ privacy and their ability to express themselves freely and safely on our platforms."

Ars has asked the Open Rights Group for its comments on this point, but has not yet received a reply. This post will be updated when a response is received.

This latest move reflects growing pressure on Internet companies from politicians around the world to remove material that is deemed illegal or harmful. Back in May, Facebook, Microsoft, Twitter, and YouTube announced that they had agreed a code of conduct on illegal online hate speech with the European Commission.

A few days ago, the EU's justice commissioner Vera Jourova said that the four were not doing enough to comply with the code, and she threatened to bring in new Europe-wide laws to address the problem unless they and other online services tried harder, according to Reuters. This newly announced database might well be part of an effort to head off that possibility.

Updated @ 6.44pm GMT, December 6: Jim Killock, Executive Director of the Open Rights Group, told Ars in an e-mail: "Microsoft, Facebook, Twitter and YouTube are used by billions of people every day so it's vital that they are transparent about any limitations and restrictions they place on content—even extremist content—in order to preserve free speech. Simple matches can't judge context so they will make mistakes."