Before settling these cases, Facebook argued that it was immune from liability under antidiscrimination laws because of the broad protection granted by Section 230 of the Communications Decency Act. This provision, enacted to protect free and open expression on a nascent internet, shields tech platforms from liability arising from their users’ content. In essence, Facebook argued that its advertisers were entirely to blame for any discriminatory outcomes.

But that argument may be wearing thin. Even when advertisers do nothing wrong, Facebook can still perpetuate discrimination in housing, credit, and employment in deeper and more systematic ways. After an advertiser chooses its target audience, Facebook then makes decisions about which of those users will actually see that ad. It’s in those decisions—made automatically by Facebook, millions of times a day—where discrimination can quietly creep back in.

A recent study led by researchers at Northeastern University and the University of Southern California shows that, given a large group of people who might be eligible to see an advertisement, Facebook will pick among them based on its own profit-maximizing calculations, sometimes serving ads to audiences that are skewed heavily by race and gender. (Full disclosure: One of us was a member of the research team.) In these experiments, Facebook delivered ads for jobs in the lumber industry to an audience that was approximately 70 percent white and 90 percent male, and ads for supermarket-cashier positions to an audience that was approximately 85 percent female. Home-sale ads, meanwhile, were delivered to audiences that were approximately 75 percent white, while ads for rentals were shown to a more racially balanced group. These are limited experiments, yet to be replicated, but they suggest a distressing pattern.

The study’s results show digital advertising working exactly as designed—and exactly in ways that can perpetuate the types of harms that civil-rights laws are meant to address. Simply put, ad platforms such as Facebook make money when people click on ads. But an individual’s tendency to click on certain types of ads (and not others) often reflects deep-seated social inequities: the neighborhood they live in, where they went to school, how much money they have. An ad system that is designed to maximize clicks, and to maximize profits for Facebook, will naturally reinforce these social inequities and so serve as a barrier to equal opportunity.
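The dynamic described above can be made concrete with a small simulation. This is a hypothetical sketch, not Facebook's actual system: the group labels, click rates, and audience sizes below are illustrative assumptions. It shows how a delivery policy that simply picks the users most likely to click can produce a heavily skewed audience, even when the eligible audience is perfectly balanced.

```python
# Illustrative sketch (NOT Facebook's real system): a click-maximizing
# delivery policy applied to a balanced eligible audience.
import random

random.seed(42)

# Eligible audience: an even 50/50 split between two hypothetical groups.
# Historical data (reflecting existing social inequities) gives group A
# a higher base click rate for this particular ad category.
users = [("A", 0.08) for _ in range(5000)] + [("B", 0.04) for _ in range(5000)]

def predicted_ctr(base_rate):
    """Platform's predicted click-through rate: historical group rate
    plus individual-level noise."""
    return base_rate + random.gauss(0, 0.01)

scored = [(group, predicted_ctr(rate)) for group, rate in users]

# The platform delivers the ad to the 2,000 users its model expects
# to click most -- the profit-maximizing choice.
delivered = sorted(scored, key=lambda u: u[1], reverse=True)[:2000]

share_a = sum(1 for g, _ in delivered if g == "A") / len(delivered)
print("Group A share of eligible audience:  50%")
print(f"Group A share of delivered audience: {share_a:.0%}")
```

Although the advertiser targeted both groups equally, nearly everyone who actually sees the ad belongs to group A: the optimization step, not the targeting step, produces the skew.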

These dynamics are a perfect illustration of why the “disparate impact” doctrine—a bedrock principle of civil-rights law—is such an important tool in the era of algorithms. Under disparate impact, even unintentional actions can amount to illegal discrimination if they have an adverse impact on protected groups. Without this doctrine, opaque, machine-driven predictions are effectively above the law, as long as they don’t directly consider data indicating that a user belongs to a protected class.