Google is one of the most advanced search and advertising platforms on the Internet, but a research paper suggests the company may lack the ability to keep discriminatory and privacy policy-violating advertisements off its services.

Research conducted by three computer scientists from Carnegie Mellon University and the International Computer Science Institute discovered that Google's AdSense platform is capable of discriminating against women looking for employment and targeting consumers based on their health information.

Using an automated tool they built called AdFisher, the research team ran more than 17,000 simulated user profiles through 21 experiments to analyze how different user traits defined by Google's Ad Settings affected which ads were served. In one experiment, Google predominantly showed ads for executive-level positions to accounts identified as male. Female accounts, on the other hand, were more likely to be served job postings from an auto parts dealer, Goodwill, and a generic job-hunting service.
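AdFisher's core statistical move, comparing the ads served to two groups of otherwise-identical simulated profiles, can be sketched as a simple permutation test. This is a toy illustration, not the team's actual code, and the per-profile ad counts below are invented:

```python
import random

def permutation_test(group_a, group_b, n_permutations=10000, seed=0):
    """Estimate how likely the observed difference in mean ad counts
    between two groups of simulated profiles is to arise by chance.
    group_a / group_b: per-profile counts of a target ad (e.g. an
    executive-job ad). Illustrative only; not AdFisher's actual API."""
    random.seed(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    combined = group_a + group_b
    extreme = 0
    for _ in range(n_permutations):
        # Shuffle the group labels and re-measure the difference.
        random.shuffle(combined)
        a = combined[:len(group_a)]
        b = combined[len(group_a):]
        diff = abs(sum(a) / len(a) - sum(b) / len(b))
        if diff >= observed:
            extreme += 1
    return extreme / n_permutations  # estimated p-value

# Invented counts: "male" profiles saw the executive-job ad far more often.
male_counts = [40, 35, 50, 45, 38]
female_counts = [2, 0, 1, 3, 1]
p = permutation_test(male_counts, female_counts)
```

A small p-value here means the difference between the two groups is very unlikely to be random noise, which is the kind of evidence the researchers used to conclude the ad system was treating the groups differently.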

In another experiment, ads for drug and alcohol rehabilitation centers were served to accounts which previously browsed websites about substance abuse. Similarly, accounts that visited websites regarding physical disabilities were shown ads for accessibility products.

"We cannot claim that Google has violated its policies," the team wrote in the paper. "In fact, we consider it more likely that Google has lost control over its massive, automated advertising system."

Who's—or What's—to Blame?

While the study's findings suggest Google is enabling discrimination, the situation is more complicated.

Currently, Google allows advertisers to target their ads based on gender. That means it’s possible for an advertiser promoting high-paying job listings to directly target men. However, Google's algorithm may have also determined that men are more relevant for the position and made the decision on its own. And then there's the possibility that user behavior taught Google to serve ads in this manner. It’s impossible to know whether a single party is to blame or whether targeting from all of these sources is at play in combination.

"Users can train [Google's] models to act in a discriminatory fashion," study co-author Michael Tschantz told WIRED. "If only males are clicking on the ad that promote high-paying jobs, the algorithm will learn to only show those ads to males. Machine learning algorithms produce very opaque models that are very hard for humans to understand. It's extremely difficult to determine exactly why something is being shown."

It’s also problematic that Google lacks clear standards for when advertisers can target users based on "sensitive information," which further muddles whether any of this is okay or not.

The researchers believe that the ads shown for rehabilitation centers and accessibility products could be a result of "remarketing." Google permits companies to target users who have previously visited their sites, prompting those users to return in order to complete a purchase. However, Google's advertising privacy policy "[prohibits] advertisers from remarketing based on sensitive information, such as health information or religious beliefs." (Google did not respond to a request for comment.)
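To see what policy-compliant remarketing would require, here is a hypothetical filter; the category names and data shapes are invented, and whether Google runs any such check is exactly what's in question:

```python
# Hypothetical sketch of remarketing with a sensitive-category filter.
# Category labels are invented for illustration.
SENSITIVE_CATEGORIES = {"health", "religion", "substance_abuse", "disability"}

def build_remarketing_audience(visits):
    """visits: list of (user_id, site_category) pairs. Returns the users
    eligible for remarketing after excluding visits to sites in
    policy-sensitive categories. Purely illustrative; Google's actual
    enforcement mechanism, if any, is not public."""
    audience = set()
    for user_id, category in visits:
        if category not in SENSITIVE_CATEGORIES:
            audience.add(user_id)
    return audience

visits = [("u1", "retail"), ("u2", "substance_abuse"), ("u3", "health")]
audience = build_remarketing_audience(visits)
```

In this sketch, only the user who visited a non-sensitive site ends up in the remarketing audience; the researchers' findings suggest no equivalent filter was operating on the rehab and accessibility ads they observed.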

That policy is enough for the team to conclude the health ads were being served illicitly. "Although Google does not specify what [it] consider[s] to be 'health information,' we view the ads as in violation of Google's policy, thereby raising the question of how Google should enforce its policies."

Too Big to Control

By some estimates, Google controls more than 31 percent of the digital ad market. The staggering scale of its operation has made it nearly impossible to monitor all the ads published through its platform.

"It is definitely possible for advertisers to violate Google's Terms and Conditions and privacy policies," says Tschantz. "They are not doing anything to check ads for compliance. Google does simple technical checks for style issues—stuff like too many exclamation points or to make sure the ad's link is active—but there is nothing in place to check for semantic properties, like an ad being discriminatory."

One of Tschantz's research partners, Anupam Datta, suggests Google may be kicking responsibility down to the advertisers.

"Google's policies say [users] should not be doing anything illegal," Datta says. "They have assigned some responsibility to the advertisers to do the right thing."

Consumer privacy advocates worry that, permitted or not, these advertisements could already be impacting users.

"Our computers are mirrors as well as windows, and the personalization that we encounter across the Web sends signals about our value and what opportunities are available to us," said Ali Lange, a consumer privacy policy analyst for the Center for Democracy & Technology. "So what signals are sent by ads that are delivered based on potentially sensitive information, like ads for rehab?"

Datta believes it’s possible to develop more advanced oversight tools that companies could use internally to detect discriminatory ads and other tracking abuses, and to help assign responsibility when abuses occur. The team is already working with Microsoft to automate advertisement compliance checking; Microsoft is particularly concerned about discriminatory ads appearing on its Bing search engine.

Machine-learned discrimination may prove to be a difficult problem for companies like Google and Facebook to stamp out. But the researchers agree that algorithmic discrimination can't be ignored. And AdSense isn’t the only platform exhibiting such bias: Recently, Google’s new photo service, which uses its filtering smarts to identify the contents of a photo, mistook photos of black people for gorillas. Flickr’s similarly smart photo engine labeled a black man as an “ape” and an “animal,” and called a Nazi concentration camp a “jungle gym.” Just because the creator of such offensive statements is a machine doesn’t mean it should run uninhibited through the Internet.

As the researchers put it: "The amoral status of an algorithm does not negate its effects on society."

UPDATE 3:41pm ET 7/8/2015: Google offered the following statement after this story was published:

"Advertisers can choose to target the audience they want to reach, and we have policies that guide the type of interest-based ads that are allowed. We provide transparency to users with 'Why This Ad' notices and Ad Settings, as well as the ability to opt out of interest-based ads."