Why does a ready-made toolbox for digital manipulation already exist? For that, we have the digital-advertising industry to thank.

In a recent study on the digital-advertising industry that we published with New America and Harvard’s Shorenstein Center, we analyzed how the tools of digital marketing can be readily repurposed by agents of disinformation. The basic idea is for advertisers to aim micro-targeted digital ads at very specific demographic slices of social-media users and see how they respond. A disinformation operator could test hundreds of different messages, often aimed at thousands of different permutations of demographic groups, on the advertising platforms of the most widely used social-media companies.

For example: A political advertiser (or communicator) might test a message about immigration in different cities across the country, or compare responses to that message by age, income, ethnicity, education level, or political preference. Because digital-media companies like Facebook collect vast amounts of data on their users, advertisers can slice audiences along all of these lines, plus location, political affiliation, and many other consumer preferences that indicate political interests. Once the ad buys reveal which messages get the biggest response from particular groups, the operator can organize its entire social-media campaign around reaching those people and build out bigger and bigger audiences.

This is digital marketing 101. Start with a product to sell and test a variety of messages until the best one rises to the surface.

In the election-interference case, the “products” for Russian trolls were divisive political messages about issues like, say, religion. But just as with any other product, the ads ginning up fear and outrage about Islam in America benefited from Google and Facebook’s machine-learning algorithms, which scan vast amounts of data and conduct tests on multitudes of political messages to determine the best way to find and engage an audience. Everybody makes more money if the ads work well—that is to say, if people click on them. The economic interests of advertisers and social-media companies are essentially aligned. And while Facebook, Google, and Twitter are now taking steps to identify and block ads purchased by foreign agents and shut down these attempts to push fabricated news, the underlying machine of the ad-tech market will, theoretically, accelerate users’ consumption of all but the most egregious content.

When political advertisers—including purveyors of disinformation—get into the mix, the economics of audience segmentation and micro-targeted advertising start to produce what is known as a “negative externality” in the market: an unintended outcome that harms the public. The system naturally organizes people into homogenous groups and feeds them more of what they want—typically, information that reinforces their pre-existing beliefs—and then ups the sensation factor in order to hold people’s interest for longer stretches of time.