Ad-targeting algorithms created by Google could be guilty of discriminating between internet users, say researchers from Carnegie Mellon University and the International Computer Science Institute (ICSI). A study that used custom software named AdFisher to simulate the browsing activities of web users found that when visiting job sites, fake male users were more frequently shown ads promising large salaries than fake female users. The same study also found that users visiting websites about substance abuse were shown ads for rehab programs, even though Google's Ads Settings tool did not disclose that this information was being tracked.
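The study's core method — running many simulated browsing sessions that differ only in one declared attribute, then testing whether the ads served to the two groups differ by more than chance — can be sketched with a simple permutation test. The counts below are invented for illustration only; AdFisher's actual pipeline is considerably more elaborate (it trains classifiers over full ad logs), so this is just a minimal sketch of the statistical idea.

```python
import random

# Hypothetical impression logs: 1 = high-salary ad shown on a visit, 0 = not.
# These counts are illustrative, not the study's actual data.
male_ads = [1] * 40 + [0] * 60     # 40 of 100 simulated "male" visits
female_ads = [1] * 20 + [0] * 80   # 20 of 100 simulated "female" visits

def permutation_test(a, b, trials=10_000, seed=0):
    """Estimate how often randomly relabeling the two groups produces
    a difference in ad rates at least as large as the one observed."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        x, y = pooled[:len(a)], pooled[len(a):]
        if abs(sum(x) / len(x) - sum(y) / len(y)) >= observed:
            hits += 1
    return hits / trials

p = permutation_test(male_ads, female_ads)
print(f"estimated p-value: {p:.4f}")
```

A small p-value here would indicate that the gap in ad rates between the two simulated groups is unlikely to be random noise — the kind of evidence the researchers used to argue that treatment differed by gender.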

"I think our findings suggest that there are parts of the ad ecosystem where kinds of discrimination are beginning to emerge and there is a lack of transparency," Anupam Datta, an associate professor at Carnegie Mellon and a co-author of the study, told MIT Technology Review. "This is concerning from a societal standpoint." Targeted advertising like Google's is so ubiquitous that the information shown to people could have tangible effects on the decisions they make, says Datta.

Discrimination and a lack of transparency

Although the research was published in March this year, comparable examples of algorithmic discrimination continue to make headlines. Earlier this month, for example, Google was forced to apologize after its new Photos app tagged pictures of black people as gorillas. And in April, researchers found that the results of a Google image search for the term "CEO" were only 11 percent female — a figure that compares poorly to real-life statistics, as women make up 27 percent of US CEOs.

The scientists from Carnegie Mellon and the ICSI stress that it's difficult to assign blame in these examples of algorithmic discrimination, mainly because the systems in play are so complex and their inner workings so opaque to the public. In the experiment that found discrimination between male and female users, for example, the researchers say it could be Google's fault for targeting specific genders, or the advertisers' fault for engineering such targeting.

"For these reasons, we cannot claim that Google has violated its policies," conclude the researchers. "In fact, we consider it more likely that Google has lost control over its massive, automated advertising system." They add that "even without advertisers placing inappropriate bids, large-scale machine learning can behave in unexpected ways." In a statement sent to The Verge, Google said only that "advertisers can choose to target the audience they want to reach" and that the company "[provides] transparency to users with 'Why This Ad' notices and Ad Settings, as well as the ability to opt out of interest-based ads."

Update July 8th, 4:30AM ET: Updated to include a statement from Google.