In the United States, aggregate data seem to imply that vaccination rates are stable. But this optimism may be shortsighted in today’s digital age, where younger populations—future vaccine decision makers, in some states—are becoming sensitized to vaccine misinformation online. For example, diseases such as measles have long been thought to spread in communities with insufficient “herd immunity”—i.e., not enough vaccinated people to prevent the spread of highly infectious disease. Herd immunity is no longer just a matter of quality public-health ecosystems, where vaccinations and antibiotics alone can prevent the spread of disease, but also of quality public-information ecosystems. We now know, for example, that social-media-based rumors made Ebola spread faster—and that when crisis responders adapted their communications strategies, more communities began receiving vital treatment and taking action toward prevention.

And yet our understanding of exactly how digital infections happen remains focused more on symptoms, such as the number of shares a given vaccine-hesitancy tweet receives, than on underlying causes, such as the digital infrastructure that makes some internet users more likely to encounter false information about immunization. Additionally, as the researchers Richard Carpiano and Nick Fitz have argued, “anti-vaxx” as a concept, describing a group or individual lacking confidence in evidence-based immunization practices, creates a stigma that places blame on the person—the parent as decision maker, or the unvaccinated child—and on the community. More often, as Seymour has noted, the problem is rooted in the virality of the message and the environments in which it spreads.

Public-health authorities are paying little explicit attention to the information ecosystem and how it may affect the spread of vaccine-preventable diseases in the near future. When 75 percent of vaccine-related Pinterest posts discuss the false link between the measles vaccine and autism, what does that mean for future herd immunity? And what happens when state-sponsored disinformation campaigns exploit the vulnerabilities our systems have already created? Just last week, scientists at George Washington University found that Russian bot and troll accounts on Twitter posted about vaccines 22 times more often than the average user.

To date, many public-health interventions address only the outward signs of a misinfodemic, debunking myths and recommending that scientists collect more data and publish more papers. Much of the field also remains focused on providing communications guidelines and engaging in traditional broadcast-diffusion strategies, rather than on search-engine optimization, viral marketing campaigns, or social-diffusion approaches that reach populations where they are. Research shows that public-health digital outreach relies on language and strategies that are inaccessible to the populations it is trying to reach. This has helped create what the researchers Michael Golebiewski and danah boyd call “data voids”: search terms for which “available relevant data is limited, non-existent, or deeply problematic.” In examining these environments, researchers such as Renée DiResta at Data for Democracy have documented the algorithmic rabbit holes that can lead someone into the depths of disturbing, anxiety-inducing, scientific-sounding (albeit unvalidated and potentially harmful) content that often profits by selling quick fixes.