Keeping the internet, or at the very least social media, free from vile content is grueling work. The experience of content moderators for Facebook raises troubling questions about the future of human moderation and the wider danger that online content poses to public health.

Repeated exposure to conspiracy theories — say, that the earth is flat or that the Holocaust didn’t happen — turns out to sway content moderators, an effect that may very well be playing out in the population at large. Repeated exposure to images of violence and sexual exploitation often leaves moderators with post-traumatic stress disorder. Moderators have reported crying on the job or sleeping with guns by their side. Turnover is high, pay is low, and although they have access to on-site counselors, many moderators develop symptoms of PTSD after leaving the job.

We don’t know whether moderators are canaries for the social-media-consuming public at large or whether their heavy dose of the worst of the web makes them outliers. Is repeated exposure to conspiracy theories — often boosted by recommendation algorithms — swaying the general public, in some cases leading to public health emergencies like the measles outbreak? Is extremist propaganda fueling a surge in right-wing violence?

The killer in New Zealand sought to hijack the attention of the internet, and the millions of uploads of his video — both attempted and achieved — were a natural consequence of what the platforms are designed to promote in users: the desire to make content go viral.

In the midst of the crisis this weekend, YouTube resorted to temporarily disabling the ability to search recently uploaded videos. It’s not the first time a platform has disabled a function of its product in response to tragedy. In July, WhatsApp limited message forwarding in India in the wake of lynchings fueled by rumors spread by users of the service. The change became global in January in an effort to fight “misinformation and rumors.”

It’s telling that the platforms must make themselves less functional in the interests of public safety. What happened this weekend gives an inkling of how intractable the problem may be. Internet platforms have been designed to monopolize human attention by any means necessary, and the content moderation machine is a flimsy check on a system that strives to overcome all forms of friction. The best outcome for the public now may be that Big Tech limits its own usability and reach, even if that comes at the cost of some profitability. Unfortunately, it’s also the outcome least likely to happen.
