Chatbots are taking over the world. Over the past few years, virtual help agents have taken on surprisingly sensitive jobs in modern society: counseling Syrian refugees fleeing civil war, creating quiet spaces of contemplation for millions of Chinese living in densely populated cities, and helping Australians access national disability benefits. Bots have offered help, support, and companionship. But there’s one line none of them had crossed: actually treating patients.

That’s just changed, with the release of a talk therapy chatbot that goes by … wait for it … Woebot. Created by a team of Stanford psychologists and AI experts, Woebot uses brief daily chat conversations, mood tracking, curated videos, and word games to help people manage mental health. After spending the last year building a beta and collecting clinical data, Woebot Labs Inc. just launched the full commercial product—a cheeky, personalized chatbot that checks on you once a day for the price of $39 a month.

Talk therapy sessions demand time and money that many people don’t have, so a chatbot could be a helpful stopgap for psychiatry. But Woebot’s creators believe it has the potential to actually improve on human therapists. “It’s almost borderline illegal to say this in my profession, but there’s a lot of noise in human relationships,” says Alison Darcy, one of the psychologists behind Woebot, and the company’s CEO. “Noise is the fear of being judged. That’s what stigma really is.” There’s nothing like venting to an anonymous algorithm to lift that fear of judgment.

Of course, promising real medical results from a chatbot introduces new legal and ethical issues. While Woebot might seem like a person, it clearly tells patients that it’s actually closer to a “choose your own adventure self-help book.” Rather than running on machine learning technologies that would allow it to improvise on the fly, Woebot is much more deterministic. As it gathers mood data and processes any texts and emojis that a patient might enter, the bot traces the branches of a decision tree to offer personalized responses and follow-ups for 10 minutes, tops. Mostly, it asks questions.

“What is your energy like today?”

“How are you feeling?”

“What’s going on in your world right now?”

Those prompts are modeled on today’s most popular form of talk therapy—cognitive behavioral therapy—which asks people to recast their negative thoughts in a more objective light. Patients are encouraged to talk about their emotional responses to life events, and then stop to identify the psychological traps that cause their stress, anxiety, and depression. “A good CBT therapist should facilitate someone else’s process, not become a part of it,” says Darcy.

In some ways, a CBT chatbot is the ultimate manifestation of that philosophy. “Woebot is a robot you can tell anything to,” says Darcy. “It’s not an AI that’s going to tell you stuff you don’t know about yourself by detecting some magic you’re not even aware of.” Woebot only knows as much as you reveal to it—and it can only help as much as you decide to help yourself.
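A deterministic, decision-tree bot of the sort described above can be sketched in a few lines of Python. This is a hypothetical illustration, not Woebot’s actual code; the nodes, keywords, and sentiment rule here are all invented for the example:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One scripted prompt in the conversation tree."""
    prompt: str
    branches: dict = field(default_factory=dict)  # reply label -> next Node

# A crude stand-in for real text/emoji processing: keyword spotting.
NEGATIVE_MARKERS = {"sad", "anxious", "stressed", "tired", "😞"}

def classify(reply):
    """Label a reply 'negative' or 'neutral' from simple keyword matches."""
    return "negative" if set(reply.lower().split()) & NEGATIVE_MARKERS else "neutral"

# A tiny two-level tree: one opening question, two scripted follow-ups.
follow_up_neg = Node("That sounds hard. What thought is weighing on you most?")
follow_up_ok = Node("Glad to hear it! What's one thing going well today?")
root = Node("How are you feeling?",
            {"negative": follow_up_neg, "neutral": follow_up_ok})

def respond(node, reply):
    """Walk one branch of the tree based on the user's reply."""
    return node.branches.get(classify(reply))

print(root.prompt)                                # How are you feeling?
nxt = respond(root, "honestly pretty anxious 😞")
print(nxt.prompt)  # That sounds hard. What thought is weighing on you most?
```

Every line the bot can say is authored ahead of time; the only “intelligence” is which branch gets selected, which is why its makers compare it to a choose-your-own-adventure book rather than a free-form AI.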

It’s not an unfounded concept. In 2014, Darpa funded a study of a virtual therapist named Ellie, an embodied avatar developed at the University of Southern California’s Institute for Creative Technologies. In the study, 239 participants talked to Ellie: Some thought they were talking to a fully automated bot, while others were told there was a real person behind the machine. In reality, all participants were randomly assigned a fully or semi-automated virtual human, but the participants who thought they were talking to a robot were way more likely to open up and reveal their deepest and darkest secrets. Removing even the idea of a human in the room led to more productive sessions.

Ellie was a research project, not a commercially available product, but it did provide some of the strongest evidence yet that computers can actually make great therapists. And there’s evidence that removing the “talk” from talk therapy seems to help, too. Scientists who recently looked at text-chat as a supplement to videoconferencing therapy sessions observed that the texting option actually reduced interpersonal anxiety, allowing patients to more fully disclose and discuss issues shrouded in shame, guilt, and embarrassment.

Recognizing the value—both therapeutic and monetary—some mental health care startups are incorporating texting into treatment. One, called Therachat, sells a customizable chatbot that therapists can use to keep their patients engaged. It gives the doctor a full record of the chats, along with an analysis of frequently used positive and negative words. X2AI, the company that deployed its Arabic-speaking bot, Karim, to Syria in the spring of 2016, has a polylingual portfolio of chatbots to help people with everything from mild anxieties to pediatric diabetes.

Clinical Chatbot

X2AI describes its bots as therapeutic assistants, meaning they offer help and support rather than treatment. For the most part, these bots play a supportive role: more tool than therapy. In this way, Woebot is different. It’s billed as a treatment in its own right, an accessible option for people who otherwise have no care at all for their mental health struggles. Darcy sees it as a kind of “gateway therapy”: something to give people a good first experience, and even help them realize when they need a more intensive form of intervention.

Woebot is obviously not a licensed physician, and it doesn’t make diagnoses or write scrips. It’s not equipped to deal with real mental health crises, either. When it senses someone is in trouble, it suggests they seek help in the real world and provides text and hotline resources.

But Darcy says her data supports the claim that chatting to Woebot is in fact a therapeutic experience. Yesterday, Darcy and a team of co-authors at Stanford published a peer-reviewed study in JMIR Mental Health that randomized 70 college students to engage with either Woebot or a self-help e-book for two weeks. The students who used Woebot self-reported a significant reduction in their symptoms of depression and anxiety.

From an experimental design standpoint, it’s far from perfect. Self-report is notoriously unreliable. And the control group isn’t ideal: A better comparison would be between Woebot and text messaging with a human therapist, at least according to Steven Chan, who worked with the American Psychiatric Association to create a set of guidelines for mental health apps before becoming UCSF’s first fellow of clinical informatics. “If the point it’s trying to make is that it’s better than nothing, then it’s a good first step which shows a lot of potential,” he says. Beyond that, there’s not much we can tell about Woebot’s therapeutic value over the long term.

Being the only therapy chatbot with peer-reviewed clinical data to back it up separates Woebot from the pack. But using those results to claim it can significantly reduce depression may expose Woebot to legal liabilities that bots in supporting roles have managed to avoid. Without moral agency, autonomous code can't be found guilty of any criminal acts. But if it causes harm, it could be subject to civil laws governing product liability. Most manufacturers deal with those risks by putting labels on their products warning of possible hazards; Woebot carries an analogous disclaimer stating that people shouldn't use it as a replacement for getting help.

There’s one other big issue with Woebot in its current incarnation: It only talks to you through Facebook Messenger. Facebook’s services aren’t HIPAA-compliant, but in this case that wouldn’t matter anyway. Because Woebot isn’t a licensed medical provider, any conversations with it aren’t protected by medical data privacy and security law in the first place. While Darcy’s team has built a wall on their end to keep all of Woebot’s users anonymous, Facebook knows exactly who you are. And Facebook, not you or Woebot, owns all your conversations.

That’s why Darcy’s team is trying to raise funds to build standalone apps outside of the Facebook universe. But at least for now, the privacy concerns haven’t prevented people from signing up.

Woebot has about 150 long-term beta users who say they like the ease of checking in quickly on Facebook. Chan says that jibes with a trend he’s seen in real life: Patients are demanding their doctors be available to them at all hours via text message, a communication channel that's far from secure. “It’s kind of funny,” he says. “If people get the sense that it’s safe then they’ll disclose anything. Their desire to reach somebody overrides those privacy concerns because they’re much more intangible and ephemeral.” And for Woebot's users, an intangible and ephemeral listener may be just what they need.