Amid the tense debate over online political advertising, it may seem strange to worry that Facebook gives campaigns too little control over whom their ads target. Yet that’s the implication of a study released this week by a team of researchers at Northeastern University, the University of Southern California, and the progressive nonprofit Upturn. By moonlighting as political advertisers, they found that Facebook’s algorithms make it harder and more expensive for a campaign to get its message in front of users who don’t already agree with it, even when those are exactly the users it’s trying to reach.

Social media is well on its way to supplanting television as the dominant platform for campaign spending. The study notes that online spending is projected to make up 28 percent of all political marketing in the 2020 elections, up from 20 percent just last year. The optimistic take is that this shift helps campaigns, especially smaller ones, get their messages to the right voters more efficiently. But the new study suggests that story has its limits. By optimizing for what it defines as “relevance,” Facebook puts its thumb on the scale in favor of a certain kind of political communication: the kind that focuses on engaging people who are already on a campaign’s side. Polarization, in other words, is part of the business model.

“If you are ever trying to reach across party lines, that’s a much more difficult strategy on Facebook,” said Aaron Rieke, managing director at Upturn and one of the study’s authors.

The paper, still in draft form, is a follow-up to research the group did earlier this year, which found that Facebook’s algorithms can dramatically skew the delivery of ads along racial and gender lines even when the advertiser doesn’t intend it. That’s because Facebook lets advertisers design their audience (that’s ad targeting), but the platform’s algorithms then determine who within that audience actually sees the ad, and at what price (that’s ad delivery). Because Facebook wants users to see ads that are “relevant” to them, the delivery algorithm pushes a given ad toward the users it predicts are already interested in its message. This, the researchers found, can reinforce stereotypes: of the users who saw ads for jobs in the lumber business, 90 percent were male, even though the intended audience was evenly split between men and women. (Facebook is also facing litigation for allegedly allowing advertisers to intentionally discriminate.)

For the new study, the team explored whether the algorithm also skews political ad delivery along partisan lines. Because the company doesn’t share data on who actually sees a given ad, they had to run a series of experiments, essentially going undercover to figure out where an advertiser’s targeting ends and Facebook’s algorithms begin.

The basic setup of the experiments was simple: Over the summer, the researchers bought ads promoting either Donald Trump or Bernie Sanders, and targeted both sets of ads simultaneously at the same groups of American users. If the only thing affecting who saw the ads was the targeting parameters, the researchers hypothesized, then liberal and conservative Facebook users would see both ads at about the same rate. But if Trump ads disproportionately went to conservatives and Sanders ads to liberals, that would mean Facebook’s algorithm was putting a thumb on the scale. (Why Sanders? Because at the time of the experiment, his campaign was the biggest Democratic spender on Facebook, so the researchers’ comparatively small ad buys would be a drop in the bucket, minimizing the risk of the study actually influencing the election.)

Facebook infers our political interests from our behavior on (and off!) the platform and allows advertisers to target us accordingly. It’s hard to measure how that actually plays out, however, because the company doesn’t let advertisers see the political leaning of the people who ultimately see or click an ad. So the researchers came up with a workaround, based on the fact that Facebook does let advertisers track ad impressions by location. They built separate, similar-sized audiences of liberals and conservatives in the same area, then targeted both audiences with the Trump and Sanders ads simultaneously. (They also showed both audiences a “neutral” ad encouraging people to register to vote.) That meant liberals and conservatives were being targeted by the same messages, at the same time, in the same place. The key was to target the two audiences with separate but identical ad buys. When the team targeted users in Charlotte, North Carolina, for example, they made two transactions with Facebook: one for the liberal audience and one for the conservative audience. Because each buy went to an audience of known political leaning, the impression counts reported for each transaction showed how often each ad reached liberals versus conservatives, revealing whether Facebook was skewing delivery by political affiliation.