Back in the 1980s, before Americans worried that algorithms were ruling their lives, the sociologist and psychologist Sherry Turkle asked a group of MIT students to consider what they would think if they encountered a computer judge. Would it be the ideal of blind justice—a truly unbiased brain deducing the fairest outcome for any set of circumstances? Or simply a machine that asserted its authority without mercy?

Turkle found that the student responses varied based on race. White students were generally wary. “Judges have to have compassion for the particular circumstances of the people before them,” one told Turkle for her 1995 book, Life on the Screen. “Computers could never develop this quality.” African American students saw things differently, not because they had greater confidence in computers but because of what they knew about people in power. A computer judge, they told Turkle, “is not going to see a black face and assume guilt. He is not going to see a black face and give a harsher sentence.”

They weren’t the first to see promise in legal technology. The law has attracted artificial intelligence researchers since the field emerged in the 1950s. When the Stanford researcher John McCarthy was asked in a 1973 debate with Joseph Weizenbaum, the MIT computer science professor and artificial intelligence skeptic, “What do judges know that we cannot eventually tell a computer?” his answer was an emphatic “Nothing.” In a 1977 law review article, the Northwestern law professor Anthony D’Amato described the idea of computer judges as a laudable, if lofty, goal—a chance to live up to the ideal of the United States as a country governed by “the rule of law, not the rule of men.”

Venture capital funding for legal tech startups since 2010: $1.5 billion across 610 deals. (Source: Crunchbase)

Alas, there are still no robot judges. Instead, Silicon Valley research that began with an admirable sense of utopian promise has evolved into practical, market-driven technology. At least 600 legal tech startups now operate in the United States, many of them using AI to organize bankruptcy filings, search for new patent filings, and, more generally, help lawyers make the strongest possible case for their clients by surfacing connections among past court decisions, the law, and legal arguments. Several of the most promising startups—such as Ravel Law, a legal research and analytics firm conceived in a Stanford Law School dorm room in 2012 that went on to raise $14 million in venture capital—have been snatched up by the legal database and search giants Westlaw and LexisNexis. (Lexis bought Ravel for an undisclosed price last year.)

The question is whether these new legal startups will lessen inequality or magnify it. Worryingly, in other fields AI often seems to deepen existing imbalances. Racially biased search results on Google, as Safiya Umoja Noble revealed in her book Algorithms of Oppression, reinforce inequities in the flow of information; the advance of robotic technology steadily eliminates jobs for the working class and beyond. When it comes to the law, this could have severe consequences.