We owe a lot to the 9th-century Persian scholar Muhammad ibn Musa al-Khwarizmi. Centuries after his death, al-Khwarizmi's works introduced Europe to decimals and algebra, laying some of the foundations for today’s techno-centric age. The Latinized version of his name has become a common word: algorithm. In 2017, it took on some sinister overtones.

Take this exchange from the US House Intelligence Committee last month. In a hearing about Russian interference in the 2016 election, the panel’s top Democrat, Adam Schiff, threw this accusation at Facebook’s top lawyer Colin Stretch: “Part of what made the Russia social media campaign successful is that they understood algorithms you use that tend to accentuate content that is either fear-based or anger-based.”

Algorithms that amplify fear and help foreign powers put a finger on the scale of democracy? These things sound dangerous! That’s a shift from just a few years ago, when “algorithm” primarily signified modernity and intelligence, thanks to the roaring success of tech companies such as Google---an enterprise founded upon an algorithm for ranking web pages. This year, growing concern about the power of technology companies---a cause uniting some unlikely fellow travelers---has lent al-Khwarizmi’s eponym a newly negative aura.

In February, the congregation of digital elite at TED received a warning about “algorithmic overlords” from mathematician Cathy O’Neil, author of the book Weapons of Math Destruction. Algorithms used by Google’s YouTube to curate videos for children earned hostile headlines for censoring inoffensive LGBT content and steering kids toward disturbing material. Meanwhile, academic researchers demonstrated how machine-vision algorithms can pick up stereotyped views of gender, and how governments using algorithms in areas such as criminal justice shroud them in secrecy.

No wonder that when David Axelrod, formerly President Obama’s chief strategist, spoke to the Nieman Journalism Lab last week about his fears for the future of media and politics, the A-word sprang to his lips. “Everything is pushing us toward algorithm-guided, customized offerings,” he said. “That worries me.”

Frank Pasquale, a professor at the University of Maryland, gives Facebook special credit for dragging algorithms through the mud. “The election stuff really got people understanding the implications of the power of algorithmic systems,” he says. The concerns are not entirely new---the debate about Facebook enclosing users in thought-muffling “filter bubbles” started in 2011. But Pasquale says there’s now a stronger feeling that algorithms can and should be questioned and held to account. One watershed, he says, was a 2014 decision by the European Union’s highest court that granted citizens a “right to be forgotten” by search engines like Google. Pasquale calls that an early “skirmish about the contestability and public obligation of algorithmic systems.”

Of course, the accusations fired at Facebook and others shouldn’t really be aimed at algorithms or math, but at the people and companies who create them. That’s why Facebook’s chief counsel appeared on Capitol Hill, not a cloud server. “We can’t view machine learning systems as purely technical things that exist in isolation,” says Hanna Wallach, a researcher at Microsoft and professor at UMass Amherst who is trying to increase consideration of ethics in AI. “They become inherently sociotechnical things.”

There’s evidence that some of those toiling in Silicon Valley’s algorithmic mines understand this. Nick Seaver, an anthropologist at Tufts, has embedded inside tech companies to learn how workers think about what they create. “‘Algorithms are humans too,’ as one of my interlocutors put it,” Seaver writes in a paper on the term’s fuzziness, “drawing the boundary of the algorithm around himself and his co-workers.”