When all you have is a hammer, and you’re Facebook, what could possibly go wrong?

This is software to save lives. Facebook’s new “proactive detection” artificial intelligence technology will scan all posts for patterns of suicidal thoughts, and when necessary send mental health resources to the user at risk or their friends, or contact local first-responders. By using AI to flag worrisome posts to human moderators instead of waiting for user reports, Facebook can decrease how long it takes to send help.

By “send help,” Facebook means call the cops. Facebook’s hammer is artificial intelligence. The cops’ hammer is deadly weapons. The option of sending “mental health resources” is easier said than done, as there aren’t any for the most part, and “local first-responders” tend not to be the local suicide hotline roadshow. They tend to be the cops.

But all of this raises the question: how will Facebook’s AI know you, and know you well enough, to detect “patterns of suicidal thoughts”? If your friends, your family, don’t see issues, is Facebook up to the task?

They’ve already called the cops more than 100 times on their users, with the best of intentions.

Over the past month of testing, Facebook has initiated more than 100 “wellness checks” with first-responders visiting affected users. “There have been cases where the first-responder has arrived and the person is still broadcasting.”

That Facebook is concerned for the welfare of their users is thoughtful, but couching a visit from the local police as a wellness check doesn’t change the fact that they’re cops. Assuming the AI was remotely accurate*, this could lead to suicide by cop or, far worse, the police reacting defensively toward someone suffering from mental illness. Stories are legion of cops killing the mentally ill when they claim to feel threatened. I would tell you to ask Eleanor Bumpurs, but you can’t, because the cops killed her.

But what if someone is venting on Facebook, using it as catharsis to get their feelings out? Is that not what the place is for? Does that mean they risk the cops knocking? Their neighbors will see, and rumors will swirl about the crazy person in the house across the street. They will be embarrassed. Parents will ask questions. Therapists will send them 50% off coupons.

And then there’s the fact that Facebook is scanning people’s posts in the first place.

The idea of Facebook proactively scanning the content of people’s posts could trigger some dystopian fears about how else the technology could be applied. Facebook didn’t have answers about how it would avoid scanning for political dissent or petty crime, with Rosen merely saying “we have an opportunity to help here so we’re going to invest in that.” There are certainly massive beneficial aspects about the technology, but it’s another space where we have little choice but to hope Facebook doesn’t go too far.

What are the chances dystopia will come with a trigger warning, as opposed to being couched in warm and fuzzy words reflecting the best of intentions? If they’re scanning for thoughtful and positive reasons, they’re going to find other things as well. Since it’s AI, one can never be quite certain how the algorithm was written, how it will interpret content, or how it will handle the countless ways millions of people express themselves. Right, lawyer dog?

The potential for disaster on the one hand, and embarrassment on the other, is huge. The imposition of Facebook’s good intentions on their users’ privacy, however, may prove to be the most pervasive issue here. People (not me, mind you, but other people) use Facebook to communicate with their “friends.” It’s fun. It’s cool. And most users don’t think of some boiler room in Bangalore reviewing their “problematic” posts for potential issues.

“Our Community Operations team includes thousands of people around the world who review reports about content on Facebook. The team includes a dedicated group of specialists who have specific training in suicide and self harm.”

Feel better now?

“We are also using artificial intelligence to prioritize the order in which our team reviews reported posts, videos and live streams. This ensures we can get the right resources to people in distress and, where appropriate, we can more quickly alert first responders.”

What about now?
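Facebook doesn’t say how its AI actually ranks reported posts, so any specifics are guesswork. As a purely illustrative sketch, assuming the model emits a per-post risk score, “prioritizing the order” could be as simple as a max-heap over those scores. The keyword scorer, field names, and numbers below are all invented:

```python
# Purely illustrative: Facebook has not published how its AI prioritizes
# reports. This toy version orders reported posts so the highest
# estimated risk is reviewed first. The scorer is a crude stand-in for
# a trained classifier; every name and threshold here is invented.
import heapq

def risk_score(post):
    """Stand-in classifier: fraction of words matching a worry list."""
    flagged = {"goodbye", "hopeless", "worthless"}
    words = post["text"].lower().split()
    hits = sum(1 for w in words if w in flagged)
    return hits / max(len(words), 1)

def prioritize(reports):
    """Return reports ordered from highest to lowest estimated risk."""
    # Negate the score because heapq is a min-heap; the index breaks
    # ties so equally scored posts keep their reported order.
    heap = [(-risk_score(p), i, p) for i, p in enumerate(reports)]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]
```

Whether the real system looks anything like this is exactly the problem: the ranking logic is opaque to the people being ranked.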

“Context is critical for our review teams, so we have developed ways to enhance our tools to get people help as quickly as possible. For example, our reviewers can quickly identify which points within a video receive increased levels of comments, reactions and reports from people on Facebook. Tools like these help reviewers understand whether someone may be in distress and get them help.”

“In addition to those tools, we’re using automation so the team can more quickly access the appropriate first responders’ contact information.”
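The “points within a video” detail is the one mechanism Facebook does describe: moments that draw spikes in comments, reactions and reports. A minimal sketch of that idea, assuming only a list of event timestamps — the bucket size and threshold are invented, and Facebook’s actual method is not public:

```python
# Minimal sketch of surfacing "points within a video" that draw unusual
# attention: bucket comment/reaction timestamps and flag buckets whose
# counts are well above the average. Bucket size and threshold factor
# are invented for illustration.
from collections import Counter

def spike_points(event_times, bucket_secs=10, factor=2.0):
    """Return bucket start times (seconds into the video) whose event
    count exceeds `factor` times the mean bucket count."""
    buckets = Counter(int(t // bucket_secs) * bucket_secs for t in event_times)
    if not buckets:
        return []
    mean = sum(buckets.values()) / len(buckets)
    return sorted(b for b, n in buckets.items() if n > factor * mean)
```

Even in this toy form, the judgment call is obvious: someone picked the threshold, and the person being flagged has no idea what it is.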

It’s unclear whether your Facebook posts are being read by a dedicated review team, which has “specific training,” whatever that means, or AI. And it’s unclear what will be the trigger that brings the cops to your door. But regardless, do you really want Facebook calling the cops on you, even if it’s called a “wellness check” by guys who are locked and loaded?

Whenever someone commits suicide, there will invariably be calls questioning how no one noticed the problem so that the person could have been saved. The same happens when the cops show up at someone’s home and kill them under the Reasonably Scared Cop Rule. It’s bad enough that this happens no matter how hard we try to prevent harm, but the Zuck won’t be liable should the call made by Facebook end with a bullet in someone’s head.

*As of now, AI pattern recognition is basic junk science.

Cookie-cutter ratios, even if scientifically derived, do more harm than good. Every person is different. Engagement is an individual and unique phenomenon. We are not widgets, nor do we conform to widget formulas.

Does junk science with good intentions make it acceptable?

H/T MassPrivatel