Artificial intelligence reveals how U.S. stereotypes about women and minorities have changed in the past 100 years

How do you measure the stereotypes of the past after the past is gone? You could read what people wrote and tally up the slurs, but bias is often subtler than a single word. Researchers are now developing artificial intelligence (AI) to help out. A new study has analyzed which stereotypes are still holding fast—and which are going the way of the floppy disk.

To quantify bias, one team turned to a type of AI known as machine learning, which allows computers to analyze large quantities of data and find patterns automatically. They designed their program to use word embeddings, strings of numbers that represent a word’s meaning based on the words it appears next to in large bodies of text. If people tend to describe women as emotional, for example, “emotional” will appear alongside “woman” more frequently than alongside “man,” and word embeddings will pick that up: The embedding for “emotional” will be numerically closer to that for “woman” than to that for “man.” It will have a female bias.
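That closeness can be measured as cosine similarity between embedding vectors. The toy sketch below (not the study’s actual model, and using invented 3-dimensional vectors purely for illustration) shows how such a bias score might be computed: a positive value means a word sits closer to “woman” than to “man.”

```python
# Toy illustration of embedding-based gender bias, assuming made-up vectors.
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical 3-dimensional embeddings, invented for this example only.
embeddings = {
    "woman":     [0.9, 0.1, 0.3],
    "man":       [0.1, 0.9, 0.3],
    "emotional": [0.8, 0.2, 0.4],
}

def gender_bias(word):
    """Positive => embedding is closer to 'woman'; negative => closer to 'man'."""
    return (cosine(embeddings[word], embeddings["woman"])
            - cosine(embeddings[word], embeddings["man"]))

print(gender_bias("emotional"))  # positive in this toy data: a female bias
```

In a real study, the vectors would come from embeddings trained on millions of words of historical text, and the same difference-of-similarities score could be averaged over many adjective and occupation words to track a stereotype over time.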

The researchers first wanted to see whether embeddings were a good measure of stereotypes. Looking at published English text from various decades, they found that their program’s embeddings clearly lined up with the results of surveys on gender and ethnic stereotypes from the same periods. Then they analyzed sentiments that had not been surveyed, using 200 million words taken from U.S. newspapers, books, and magazines from the 1910s to the 1990s.