Algorithms rule more and more of the world around us. They screen school and job applications. They determine who qualifies for loans and insurance. They trigger audits and investigations. But what’s going on under the hood? Are algorithms impersonal, and thus impartial and fair? Or can they be programmed, intentionally or otherwise, to replicate human biases? If so, then relying on algorithms may leave us worse off: a veneer of fairness now covers our systemic biases, making them harder to challenge or even detect.

That’s precisely the charge that Cathy O’Neil levels in her lead essay this month. The author of Weapons of Math Destruction takes us on a brief tour of how algorithms can mislead in teacher evaluations, debt collection, and several other important areas of life. She invites us to greater skepticism about artificial intelligence and recommends policy solutions that would curb the dangers of the algorithm-driven life.

Responding to her this month we have Caleb Watney of the R Street Institute, freelance journalist and former WIRED senior editor Laura Hudson, and Cato Institute Senior Fellow Julian Sanchez. Each will respond to O’Neil with an essay, and conversation among the four will continue through the month. Comments are open as well, and we invite readers to contribute to the discussion.