BRUSSELS (Reuters) - Removing extremist content from the internet within a few hours of it appearing poses “an enormous technological and scientific challenge”, Google’s general counsel will say later on Wednesday to European leaders who want it taken down quicker.

An illustration picture shows a projection of binary code on a man holding a laptop computer, in an office in Warsaw June 24, 2013. REUTERS/Kacper Pempel

Kent Walker, general counsel for Alphabet Inc’s Google, will speak on behalf of technology companies Facebook, Microsoft, Twitter and YouTube at an event on the sidelines of the annual gathering of world leaders at the United Nations.

The leaders of France, Britain and Italy want to push social media companies to remove “terrorist content” from the internet within one to two hours of it appearing because they say that is the period when most material is spread.

“We are making significant progress, but removing all of this content within a few hours - or indeed stopping it from appearing on the internet in the first place - poses an enormous technological and scientific challenge,” Walker will say in a speech on behalf of the Global Internet Forum to Counter Terrorism, a working group formed by the four companies to combine their efforts to remove extremist content.

Tech firms have come under increasing pressure from governments in the United States and Europe to do more to keep extremist content off their platforms after a spate of militant attacks, and the European Union is mulling legislation on the issue.

“There is no silver bullet when it comes to finding and removing this content, but we’re getting much better,” Walker will say.

“Of course finding problematic material in the first place often requires not just thousands of human hours but, more fundamentally, continuing advances in engineering and computer science research. The haystacks are unimaginably large and the needles are both very small and constantly changing.”

Walker will say the companies need human reviewers to help distinguish legitimate material, such as news coverage, from problematic material and to train machine-learning tools against “ever-changing examples.”

The companies last year decided to set up a joint database to share unique digital fingerprints they automatically assign to videos or photos of extremist content, known as “hashes”, to help each other detect and remove similar content.

Facebook used a hash of content containing a link to bomb-making instructions to find and remove almost 100 copies of that content.
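The shared-database scheme described above can be sketched in a few lines. This is a hypothetical illustration only: it uses an exact SHA-256 digest as the “fingerprint”, whereas the industry systems the article refers to use perceptual hashes (such as PhotoDNA) that also match re-encoded or slightly altered copies. All names here (`fingerprint`, `flag_content`, `is_known`) are invented for the sketch.

```python
import hashlib

# Hypothetical shared "hash database" modeled as a set of hex digests.
# Real systems use perceptual hashing robust to re-encoding and cropping;
# SHA-256 here only matches byte-identical copies of the content.
shared_hashes = set()

def fingerprint(content: bytes) -> str:
    """Return a hex digest serving as the content's digital fingerprint."""
    return hashlib.sha256(content).hexdigest()

def flag_content(content: bytes) -> None:
    """One platform adds a known extremist item's hash to the shared database."""
    shared_hashes.add(fingerprint(content))

def is_known(content: bytes) -> bool:
    """Another platform checks an uploaded item against the shared hashes."""
    return fingerprint(content) in shared_hashes

# Usage: one platform flags an item; another detects an identical copy.
original = b"example flagged media bytes"
flag_content(original)
print(is_known(original))          # exact copy matches -> True
print(is_known(original + b"x"))   # any byte change defeats exact hashing -> False
```

The key design point the article implies is that companies exchange only the fingerprints, not the underlying media, so each platform can detect copies without redistributing the content itself.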

Twitter said on Tuesday that it had removed 299,649 accounts in the first half of this year for the “promotion of terrorism”, while Facebook has ramped up its use of artificial intelligence to map out pages and posts with terrorist material.