It’s troubling enough that British teenager Molly Russell sought out images of suicide and self-harm online before she took her own life in 2017. But it was later discovered that these images were also being delivered to her, recommended by her favorite social media platforms. Her Instagram feed was full of them. Even in the months after her death, Pinterest continued to send her automated emails, its algorithms recommending graphic images of self-harm, including a slashed thigh and a cartoon of a young girl hanging. Her father has accused Instagram and Pinterest of helping to kill his 14-year-old daughter by allowing these graphic images on their platforms and pushing them into Molly’s feed.

WIRED OPINION

Dr. Ysabel Gerrard is a lecturer in digital media and society at the University of Sheffield. She researches social media content moderation, and her work has been featured in outlets including BBC World Service, BBC Woman’s Hour, The Guardian, and The Telegraph. Tarleton Gillespie is a principal researcher at Microsoft Research and an associate professor at Cornell University. His latest book is Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions that Shape Social Media.

Molly’s father’s distressing discovery has fueled the argument that social media companies like Instagram and Pinterest are exacerbating a “mental health crisis” among young people. Social media may be a factor in the rise of a “suicide generation”: British teens are committing suicide at twice the rate they were eight years ago. There have been calls for change in the wake of Molly Russell’s death. British health secretary Matt Hancock, for example, said social media companies need to “purge this content once and for all” and threatened to prosecute companies that fail to do so. In the face of this intense criticism, Instagram has banned “graphic self-harm images,” a step beyond its previous rule only against “glorifying” self-injury and suicide.

But simple bans do not in themselves deal with a more pernicious problem: Social media platforms not only host this troubling content, they end up recommending it to the people most vulnerable to it. And recommendation is a different animal than mere availability. A growing academic literature bears this out: Whether it’s self-harm, misinformation, terrorist recruitment, or conspiracy, platforms do more than make this content easily found—in important ways they help amplify it.

Our research has explored how content that promotes eating disorders gets recommended to Instagram, Pinterest, and Tumblr users. Despite clear rules against any content that promotes self-harm, and despite blocking specific hashtags to make that content harder to find, social media platforms continue to serve this content up algorithmically. Social media users receive recommendations—or, as Pinterest affectionately calls them, “things you might love”—intended to give them a personalized, supposedly more enjoyable experience. Search for home inspiration and soon the platform will populate your feed with pictures of paint samples and recommend amateur interior designers for you to follow. This also means that the more a user seeks out accounts promoting eating disorders or posting images of self-harm, the more the platform learns about their interests and sends them further down that rabbit hole.

As Molly’s father found, these recommendation systems don’t discriminate. Social media shows you what you “might love,” whether you like it or not—even if it violates the platform’s own community guidelines. If you’re someone who seeks out graphic images of self-harm, or even if you just follow users who candidly talk about their depression, these recommendation systems will fill your feeds with suggestions, reshaping how you experience your own mental health. Recommendations expose you to content you didn’t necessarily want to see, and more and more of it; they can consume your Instagram Explore page, your Pinterest homepage, your Tumblr dashboard. Social media accounts can quickly become funhouse mirrors, not just reflecting your mental health back to you, but amplifying and distorting it.

Of course, if platforms’ prohibitions were perfect, then recommendations would include only the most acceptable content social media has to offer. Clearly this isn’t the case. It’s not for lack of trying. Content moderation is astoundingly difficult. The lines between the acceptable and the objectionable are always murky; untrained reviewers have just seconds to distinguish between content that “promotes” self-harm and content that might aid in recovery; with thousands of new posts every day, day in and day out, something is sure to slip through. And self-harm is just one manifestation of mental illness. While Instagram might promise a clampdown on content depicting self-harm, other forms will remain.

And bans are not only imperfect, they can be harmful in and of themselves. Many users who struggle with self-harm or suicidal inclinations find immense emotional and practical support online. Social media can offer them a supportive community, valuable advice, and a sense of relief and acceptance. And these communities sometimes engage in the circulation of images that might shock others—as a testimony to someone’s pain, a badge of honor for having survived, a cry for help. Blanket prohibitions risk squeezing these communities out of existence.

The issue is not just about making graphic content disappear. Platforms need to better recognize when content is right for some and not for others, when finding what you searched for is not the same as being invited to see more, and when what’s good for the individual may not be good for the public as a whole.