As for much of the tech industry, 2018 has been a year of reckoning for artificial intelligence. As AI systems have been integrated into more products and services, the technology’s shortcomings have become clearer. Researchers, companies, and the general public have all begun to grapple more with the limitations of AI and its adverse effects, asking important questions like: how is this technology being used, and for whose benefit?

This reckoning has been most visible as a parade of negative headlines about algorithmic systems. This year saw the first deaths caused by self-driving cars; the Cambridge Analytica scandal; accusations that Facebook facilitated genocide in Myanmar; the revelation that Google helped the Pentagon train drone surveillance tools; and ethical questions over the tech giant’s human-sounding AI assistant. The research group AI Now described 2018 as a year of “cascading scandals” for the field, and it’s an accurate, if disheartening, summary.

But it’s not necessary to see these headlines as only negative. After all, a scandal is better than evil that goes unnoticed, and controversy can, in theory, help us improve.
Take facial recognition. It has been one of the fastest-moving technologies of 2018, with successes, like Chinese police identifying a criminal at a music concert and broadcasters using the technology to identify guests at the royal wedding, but also serious problems, including bias, false positives, and other potentially life-changing errors. Police forces around the world have begun using facial recognition in the wild despite study after study showing serious flaws, and the authoritarian potential of the technology has become painfully clear in China, where it’s one of many tools used to suppress the Uighur minority.

All this is unpleasant to read, but as a result of these controversies, companies have begun building tools to combat problems of bias, and big tech firms like Microsoft are now openly calling for regulation of facial recognition. To read this news in a positive light: more controversy means more scrutiny, and, in the long run, more solutions.

And despite this cascade of scandals, 2018 also saw dozens, even hundreds, of hopeful and positive deployments of machine learning and AI. There were small wins everywhere: in astronomy, where machine learning spotted new craters on the Moon and overlooked exoplanets; in fundamental scientific research, such as using AI to develop stronger metals and plastics; and in healthcare, where there have been numerous examples of AI systems able to spot diseases more quickly and accurately than humans. New tools like plug-and-play machine learning services from Google and Amazon, and accessible courses from organizations like Fast.ai, have put artificial intelligence into more hands, and the results have been largely beneficial and often inspiring.

These successes don’t balance out the bigger failures, but taken together they show that AI is a complex field. It is not moving in a single moral direction, but, like all technologies, has been taken up by a diverse array of players using it for a range of outcomes.

AI is not magic

Looking over the year as a whole, one lesson stands out: AI is not magic. It is not a two-letter incantation that can be used to summon venture capital and institutional confidence at a whim, nor is it fairy dust that can be sprinkled over products and institutions for instant improvements. Artificial intelligence is a process: something to be examined, debated, and, if all goes well, understood. In other words, long may the reckoning continue.

Final Grade: B