Facebook has started using artificial intelligence to identify users who are potentially at risk of taking their own lives.

The social network has developed algorithms capable of scanning posts and comments for warning signs.

These could be phrases such as “Are you okay?” or “I’m worried about you”, or more general talk of sadness and pain.

The AI tool would send such posts to a human review team, which would contact the user thought to be at risk and offer help, in the form of contact details for support services or a chat with a member of Facebook staff through Facebook Messenger.
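The pipeline described above, scanning text for warning signs and escalating matches to human reviewers, can be sketched as a naive keyword filter. This is purely illustrative: Facebook has not published its model or its phrase list, and its real system is almost certainly a trained classifier rather than simple string matching. The names `WARNING_PHRASES` and `flag_for_review` are hypothetical.

```python
# Illustrative sketch only. The phrase list and matching logic are
# assumptions, not Facebook's actual (unpublished) implementation.
WARNING_PHRASES = [
    "are you okay",
    "i'm worried about you",
]

def flag_for_review(post_text: str) -> bool:
    """Return True if a post or comment contains a warning sign
    and should be escalated to the human review team."""
    text = post_text.lower()
    return any(phrase in text for phrase in WARNING_PHRASES)

# Flagged posts would then go to human reviewers, who decide
# whether to offer the user support resources.
print(flag_for_review("Are you okay? I'm worried about you"))  # True
print(flag_for_review("Great match last night!"))              # False
```

In practice, keyword matching alone would miss the "more general talk of sadness and pain" the article mentions, which is why a statistical classifier trained on reported posts is the more plausible design.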


The site had previously relied on other users reporting worrying updates.

“The AI is actually more accurate than the reports that we get from people that are flagged as suicide and self injury,” Facebook product manager Vanessa Callison-Burch told BuzzFeed. “The people who have posted that content [that AI reports] are more likely to be sent resources of support versus people reporting to us.”

The system is currently being tested in the US.

The site has also announced new safety features for Facebook Live, which has been used to live stream several suicides.

Users can now report concerning behaviour in a Facebook Live stream to the site, which will display advice and escalate the video to staff for immediate review.

The goal is to provide help as quickly as possible, mid-broadcast rather than post-broadcast.

“Some might say we should cut off the stream of the video the moment there is a hint of somebody talking about suicide,” said Jennifer Guadagno, the project’s lead researcher.

“But what the experts emphasised was that cutting off the stream too early would remove the opportunity for people to reach out and offer support. So, this opens up the ability for friends and family to reach out to a person in distress at the time they may really need it the most.”

Facebook CEO Mark Zuckerberg described plans to use AI to identify worrying content in a recently published manifesto.

“Looking ahead, one of our greatest opportunities to keep people safe is building artificial intelligence to understand more quickly and accurately what is happening across our community,” it read.