We all love policies, except when those policies get in the way of something we want to do.

“Oh, I’m sorry, it’s just policy,” says the cable TV operator, or the airline agent, or even the IT administrator.

But at least these are people. You can appeal to them. You can ask to speak to their manager. You can threaten to take your business elsewhere. At the very least, you have some idea of what’s blocking you.

What if it’s a computer program? One that you don’t even know about? What if it’s making decisions based on invalid information, and there’s no route to appeal?

Recent news events are leading some to call for making such decision-making algorithms accountable, so they can be audited and any incorrect assumptions they make can be brought to light and fixed.

This is particularly true of artificial intelligence. In one example, researchers showed off a program they’d developed to judge a beauty contest objectively, or so they thought. They saw it as a great demonstration of their algorithm, until it was pointed out that the program systematically rated people with dark skin as less attractive, primarily because the majority of the pictures it used as a basis for comparison were of white people.

Of course, this is a mundane, even frivolous, example. But what happens if this sort of institutional prejudice gets systematized into an algorithm? What if people with dark skin, or women, or short people, all end up being less able to get loans because the algorithm decrees—correctly or not—that such people are more likely to default on their loans?
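The beauty-contest failure is, at bottom, a training-data problem, and it can be shown with a deliberately tiny sketch. Everything here is invented for illustration: real models have thousands of features, but the mechanism is the same, so a one-number “model” that rates candidates by closeness to its training-set average is enough to show it.

```python
# Hypothetical sketch: a scorer that rates candidates by how close they are
# to the average of its training set. All numbers are invented.

def train(examples):
    # The "model" is simply the mean feature value of the training data.
    return sum(examples) / len(examples)

def score(model, candidate):
    # Higher score = closer to the training-set average.
    return -abs(candidate - model)

# Skewed training set: 9 samples from group A (feature around 0.2)
# and only 1 from group B (feature around 0.8).
training = [0.2] * 9 + [0.8]
model = train(training)  # mean lands near group A, at 0.26

# The model now rates group-B candidates as less "typical" than group-A
# candidates, purely because of who was in the training data.
print(score(model, 0.2))  # group A candidate scores higher
print(score(model, 0.8))  # group B candidate scores lower
```

No one wrote a biased rule here; the skew comes entirely from the composition of the training set, which is the pattern the beauty-contest researchers stumbled into.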

Other algorithms have shown more serious evidence of bias, writes Christian Sandvig in the Journal of the New Media Caucus:

Google Search, which ranks some Web search results as more relevant than others

Facebook, which manipulated its News Feed algorithm in an attempt to test the effect on users

Twitter “trending topics,” which was accused of misrepresenting the scale of public protests during the Occupy demonstrations

Ironically, algorithms were often implemented in the first place precisely to avoid such personal biases, writes Eben Harrell in Harvard Business Review. The racism and other biases in these examples aren’t intentional; they’re a product of our society. But they do have influence, so Google and other vendors do have to take responsibility, notes data scientist Cathy O’Neil.

“Hidden algorithms can make (or ruin) reputations, decide the destiny of entrepreneurs, or even devastate an entire economy,” writes Frank Pasquale in his book, The Black Box Society: The Secret Algorithms That Control Money and Information. “Shrouded in secrecy and complexity, decisions at major Silicon Valley and Wall Street firms were long assumed to be neutral and technical. But leaks, whistleblowers, and legal disputes have shed new light on automated judgment. Self-serving and reckless behavior is surprisingly common, and easy to hide in code protected by legal and real secrecy. Even after billions of dollars of fines have been levied, underfunded regulators may have only scratched the surface of this troubling behavior.”

For example, a number of studies have found that a problem, whether it’s hurricane damage or potholes, is more likely to be reported from an affluent area, where people have the knowledge, Internet access, and smartphones to report it, writes Kate Crawford in Harvard Business Review. As with the beauty contest, this can slant the results. “We can think of this as a ‘signal problem.’ Data are assumed to accurately reflect the social world, but there are significant gaps, with little or no signal coming from particular communities,” she writes. “As we move into an era in which personal devices are seen as proxies for public needs, we run the risk that already existing inequities will be further entrenched.”

In many cases, these algorithms are protected as trade secrets, Sandvig writes. But even if the companies revealed them, they and their effects could be difficult to understand due to their complexity, writes Lev Manovich in The Chronicle of Higher Education. “How can we discuss publicly the decisions made by Google Search algorithms, or Facebook’s algorithms controlling what is shown on our news feeds?” he writes. “Even if these companies made all their software open source, its size and complexity would make public discussion very challenging. While some of the details from popular web companies are published in academic papers written by researchers working at these companies, only people with computer-science and statistics backgrounds can understand them.”

Consequently, some experts are calling for “auditing algorithms,” or examining algorithms for this sort of unconscious bias. And algorithms are facing more scrutiny. A recent court ruling required that judges be informed that one algorithm, which predicts a defendant’s future criminality, might not be accurate, while the European Union has adopted a regulation limiting the use of such algorithms, taking effect in 2018. The White House has also called for the industry to police itself.

The challenge, of course, is how to design such an audit, especially for organizations that consider those algorithms a trade secret or otherwise don’t wish to make them available. “Discovering how algorithms behave on the Internet might then lead to a difficult but important discussion that we have so far avoided: How do we as a society want these algorithms to behave?” write researchers in their paper on the subject.
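One way around the trade-secret problem is a black-box audit: rather than reading the code, probe the system with inputs from different groups and compare the outcomes. The sketch below illustrates the idea with an invented stand-in for an opaque loan model; the 0.8 cutoff follows the “four-fifths rule” used in US employment-discrimination analysis, though the model, data, and threshold choice here are all hypothetical.

```python
# Hypothetical black-box audit: we can't inspect the algorithm's code,
# but we can measure how often it approves applicants from each group.

def opaque_loan_model(applicant):
    # Stand-in for a proprietary model we cannot see inside.
    return applicant["income"] > 40000

def selection_rate(model, applicants):
    # Fraction of the group the model approves.
    decisions = [model(a) for a in applicants]
    return sum(decisions) / len(decisions)

def disparate_impact(model, group_a, group_b):
    # Ratio of the lower approval rate to the higher one (1.0 = parity).
    rates = sorted([selection_rate(model, group_a),
                    selection_rate(model, group_b)])
    return rates[0] / rates[1]

# Invented test populations differing in income distribution.
group_a = [{"income": i} for i in (30000, 45000, 50000, 60000)]
group_b = [{"income": i} for i in (25000, 30000, 35000, 50000)]

ratio = disparate_impact(opaque_loan_model, group_a, group_b)
print(f"selection-rate ratio: {ratio:.2f}")
print("flag for review" if ratio < 0.8 else "no disparity flagged")
```

An audit like this says nothing about *why* the disparity exists, only that it does; deciding whether a flagged disparity is justified is exactly the societal question the researchers pose.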



