Facebook is considering using on-device AI algorithms to scan and moderate content in its WhatsApp messaging service to enforce its acceptable speech policy.

If implemented, the app itself would automatically scan messages prior to their being encrypted and sent.

Experts warn, however, that such a setup would require WhatsApp to transmit prohibited messages to developers in order to improve the AI's training.

Furthermore, concerns have been raised that the development could pave the way for governments to force social media firms to spy on user messages for them.


Facebook revealed its plans to transfer content moderation from human-staffed data centres to on-device, AI-powered systems in a presentation at the firm's F8 annual developer conference on May 1, 2019.

Under the proposed scheme, content moderation would be carried out on user messages directly within WhatsApp, before encryption — with the filtering algorithms themselves being regularly updated from a central source.

In this way, Facebook would be able to prevent users from sharing content that violates the firm's acceptable speech guidelines, ostensibly without compromising the application of end-to-end encryption within the messaging service.
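The architecture described above — a filter running on the sender's device against the plaintext, before end-to-end encryption is applied — can be sketched roughly as follows. All names and the filter logic here are illustrative assumptions for explanation only, not WhatsApp's actual code:

```python
# Hypothetical sketch of on-device filtering prior to encryption.
# Everything here is an illustrative assumption, not WhatsApp's implementation.

BANNED_TERMS = {"example-banned-phrase"}  # stand-in for an updatable filter/model


def violates_policy(plaintext: str) -> bool:
    """Stand-in for an on-device moderation check run against the plaintext."""
    return any(term in plaintext.lower() for term in BANNED_TERMS)


def send_message(plaintext: str, encrypt, transmit) -> str:
    # The check runs BEFORE end-to-end encryption, entirely on the sender's
    # device — so the service never needs to decrypt anything in transit.
    if violates_policy(plaintext):
        return "blocked"            # message never leaves the device
    transmit(encrypt(plaintext))    # only encrypted bytes are transmitted
    return "sent"
```

The point of the sketch is that the encryption itself is untouched: moderation happens while the message is still plaintext on the handset, which is precisely why critics describe the arrangement as a backdoor.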

However, security experts have warned that the move to introduce on-app content moderation is tantamount to creating a backdoor within the device.

According to Forbes, the ongoing development and training of such content moderating algorithms would necessitate the app transmitting samples of prohibited, unencrypted messages back to Facebook for analysis.

Some commentators caution, however, that this step could be just the beginning.

'Once this is in place, it is easy for the government to demand that Facebook add another filter — one that searches for communications that they care about — and alert them when it gets triggered,' security expert Bruce Schneier said.

These extra filters, if implemented, would lie outside of the app's encryption processes — bypassing a barrier that has infuriated many law enforcement agencies.

'Of course, alternatives like Signal will exist for those who don't want to be subject to Facebook's content moderation, but what happens when this filtering technology is built into operating systems?' Schneier added.

Such a development would make these systems impossible to escape and would allow for the scanning of all apps — including those like Signal — rendering such encryption services futile.


Although mobile phone and operating system manufacturers could refuse to deliver such features, there is the possibility of governments making them mandatory.

This, Forbes contributor Kalev Leetaru argues, would 'effectively [end] the era of encrypted communications.'

According to Forbes, WhatsApp parent company Facebook did not dispute that it was intending to use algorithms to moderate end-to-end encrypted messages on platforms such as WhatsApp.

Contradicting this, however, WhatsApp vice president Will Cathcart told Forbes that they 'have not done this, have zero plans to do so, and if we ever did it would be quite obvious and detectable that we had done it.'

'We understand the serious concerns this type of approach would raise, which is why we are opposed to it.'

It is unclear exactly if and when an updated version of WhatsApp containing the content moderation system might be deployed to users' devices.

MailOnline have reached out to Facebook for comment on the plans.