“The nudge doesn’t focus on changing minds. It focuses on the system.”

— Iris Bohnet, a behavioral economist and professor at the Harvard Kennedy School

In 2014, engineers at Amazon began work on an artificially intelligent hiring tool they hoped would change hiring for good — and for the better. The tool would bypass the messy biases and errors of human hiring managers by reviewing résumé data, ranking applicants and identifying top talent.

Instead, the machine simply learned to make the kind of mistakes its creators wanted to avoid.

The tool’s algorithm was trained on data from Amazon’s hires over the prior decade — and since most of the hires had been men, the machine learned that men were preferable. It prioritized aggressive language like “execute,” which men use in their CVs more often than women, and downgraded the names of all-women’s colleges. (The specific schools have never been made public.) It didn’t choose better candidates; it just detected and absorbed human biases in hiring decisions with alarming speed. Amazon quietly scrapped the project.

Amazon’s hiring tool is a good example of how artificial intelligence — in the workplace or anywhere else — is only as smart as the data it is given. If sexism or other biases are present in that data, machines will learn and replicate them, faster and at a greater scale than humans ever could.
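To see how easily this happens, consider a deliberately simplified sketch — not Amazon’s actual system, whose details were never published. Here, a toy résumé scorer is “trained” only by counting which words appeared in past hires versus past rejections. Because the historical hires skew toward one group’s vocabulary (the article notes “execute” appeared more often on men’s résumés), the scorer inherits that skew automatically; all résumé text below is invented for illustration.

```python
from collections import Counter

# Hypothetical training data: (résumé text, was the candidate hired?).
# The hires skew toward one vocabulary, mirroring a decade of biased decisions.
training = [
    ("executed roadmap led team", True),
    ("executed strategy drove growth", True),
    ("managed budget executed launch", True),
    ("collaborated mentored organized outreach", False),
    ("women's college graduate organized events", False),
    ("captain women's chess club collaborated", False),
]

hired_words = Counter()
rejected_words = Counter()
for text, hired in training:
    (hired_words if hired else rejected_words).update(text.split())

def score(resume: str) -> int:
    """Rank a résumé: +1 per match with past hires, -1 per match with past rejections."""
    words = resume.split()
    return sum(hired_words[w] for w in words) - sum(rejected_words[w] for w in words)

# Two candidates describing the same accomplishment in different words.
a = score("executed product launch")
b = score("collaborated on product launch")
print(a > b)  # True: the model prefers whoever echoes past (skewed) hires
```

No one programmed the scorer to prefer men; it simply rediscovered the pattern hidden in its training data — which is exactly the failure mode the article describes.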