Facebook is using pattern-recognition technology to identify content that could be indicative of suicidal tendencies.

It will look for comments such as “Are you OK?” and “Can I help?”

The software is being rolled out globally, except in the European Union.

Facebook is rolling out artificial-intelligence technology to help it detect posts, videos, and Facebook Live streams that contain suicidal thoughts, it announced on Monday.

The company is deploying the “proactive detection” technology globally after a trial on text-based posts in the US, which it announced in March. However, there is one rather large exception: the European Union, where data-privacy laws make it tricky.

“We are starting to roll out artificial intelligence outside the US to help identify when someone might be expressing thoughts of suicide, including on Facebook Live,” Guy Rosen, Facebook’s vice president of product management, said in a blog post. “This will eventually be available worldwide, except the EU.”

Rosen continued: “This approach uses pattern recognition technology to help identify posts and live streams as likely to be expressing thoughts of suicide. We continue to work on this technology to increase accuracy and avoid false positives before our team reviews.

“We use signals like the text used in the post and comments (for example, comments like ‘Are you OK?’ and ‘Can I help?’ can be strong indicators). In some instances, we have found that the technology has identified videos that may have gone unreported.”
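The mechanism Rosen describes, using phrases in comments as signals that feed a human-review queue, can be sketched as a toy classifier. This is purely an illustration: the phrase list, threshold, and function names below are invented, and Facebook's actual system uses machine-learned pattern recognition, not a hand-written keyword check.

```python
# Toy sketch only, NOT Facebook's real model: score a post by how many of
# its comments contain a concerned phrase, and flag it for human review
# when that fraction crosses an (invented) threshold.

CONCERN_PHRASES = ["are you ok", "are you okay", "can i help"]

def concern_score(comments):
    """Return the fraction of comments containing a concern phrase."""
    if not comments:
        return 0.0
    hits = sum(
        1 for c in comments
        if any(phrase in c.lower() for phrase in CONCERN_PHRASES)
    )
    return hits / len(comments)

def should_flag_for_review(comments, threshold=0.3):
    """Flag a post for human review when enough comments show concern."""
    return concern_score(comments) >= threshold
```

In the real system such signals would be one feature among many; the point here is only the shape of the pipeline: automated signal, then human review.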



The social-media giant already allows users to report friends who they think might be at risk, but using AI could help the company to spot suicidal tendencies earlier.

Facebook says it’s also improving how it identifies and contacts the appropriate first responders – such as police, fire departments, or medical services – when it identifies someone at risk.

Within the so-called community operations team – made up of people who review reports about content on Facebook – is a dedicated group focused on suicide and self-harm. Facebook says it’s using AI to prioritise the order in which posts, videos, and livestreams are reviewed, so that first responders reach the people who need them most.
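Prioritising a review queue by a model's risk score is a standard pattern, and can be sketched with a priority queue. The scores, post IDs, and function below are hypothetical; how Facebook actually orders its queue is not public.

```python
import heapq

def build_review_queue(scored_posts):
    """Order posts highest-risk first for human reviewers.

    scored_posts: iterable of (risk_score, post_id) pairs, where
    risk_score is a model output in [0, 1] (invented for illustration).
    """
    # heapq is a min-heap, so negate scores to pop the highest risk first
    heap = [(-score, post_id) for score, post_id in scored_posts]
    heapq.heapify(heap)
    order = []
    while heap:
        _, post_id = heapq.heappop(heap)
        order.append(post_id)
    return order
```

The design choice is simple: rather than reviewing reports in arrival order, the riskiest items jump the queue, which is what lets responders reach people "within minutes", as Zuckerberg puts it below.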

Facebook may also contact users it believes are at risk (and their friends) via Facebook Messenger with links to relevant pages, such as the National Suicide Prevention Lifeline and the Crisis Text Line.

Mark Zuckerberg, the cofounder and CEO of Facebook, said on his Facebook page on Monday that the technology was designed to help Facebook to save lives:

“Here’s a good use of AI: helping prevent suicide.

“Starting today we’re upgrading our AI tools to identify when someone is expressing thoughts about suicide on Facebook so we can help get them the support they need quickly. In the last month alone, these AI tools have helped us connect with first responders quickly more than 100 times.

“With all the fear about how AI may be harmful in the future, it’s good to remind ourselves how AI is actually helping save people’s lives today.

“There’s a lot more we can do to improve this further. Today, these AI tools mostly use pattern recognition to identify signals – like comments asking if someone is okay – and then quickly report them to our teams working 24/7 around the world to get people help within minutes. In the future, AI will be able to understand more of the subtle nuances of language, and will be able to identify different issues beyond suicide as well, including quickly spotting more kinds of bullying and hate.”