Interest in machine learning may be at an all-time high. Per Google Trends, people are searching for machine learning nearly five times as often as five years ago. And at the University of California San Diego (UCSD), where I’m presently a PhD candidate, we had over 300 students enrolled in each of our graduate-level recommender systems and neural networks courses.

Much of this attention is warranted. Breakthroughs in computer vision, speech recognition, and, more generally, pattern recognition in large data sets, have given machine learning substantial power to impact industry, society, and other academic disciplines.

Like the internet before it, machine learning now stands poised to impact diverse areas of the economy. Whether you work in government, medical imaging, car services, the law, or recycling, there’s good reason to believe that machine learning will impact your life and finances.

For a researcher, there are many upsides to the current wave of enthusiasm. At this moment, machine learning presents fertile ground for scholarship. Across both the theory ⇔ hardware and methodology ⇔ application axes, there’s ample impactful work to be done. Even technically incremental work can have immediate effects in the real world. Accordingly, funding agencies, universities and industry have an enormous appetite for both empirically and theoretically motivated inquiry into machine learning.

[Of course, the rapid advance of machine learning presents many new challenges. One purpose for founding this blog is to address the social impacts of machine learning on the world, e.g. via technological unemployment or algorithmic decision-making. In this post, however, we address specifically the spread of information and misinformation about AI, and not the effects of the AI itself.]

Complicating matters, the wave of enthusiasm for machine learning has spread far beyond the small group of people possessing a concrete sense of the field’s actual state. Both in industry and the broader public, many people know that important things are happening in machine learning but lack any concrete sense of what precisely is happening.

A Perfect Storm

This pairing of interest with ignorance has created a perfect storm for a misinformation epidemic. The outsize demand for stories about AI has created a tremendous opportunity for impostors to capture some piece of this market.

When founding Approximately Correct, I lamented that too few academics possessed both the interest and the talent for expository writing that addresses social issues. And on the other hand, too few journalists possess the technical strength to relate developments in machine learning to the public faithfully. As a result, there are not enough voices engaging the public in the non-sensational way that seems necessary now.

Unfortunately, the paucity of clear and informed voices has not resulted in a silent media. Instead, the void has been filled by charlatans and opportunists, eager to seize upon the public’s interest, who bombard the public with incessant misinformation.

As I quickly learned as a young PhD student blogging for KDnuggets, articles about deep learning get lots of clicks. All else being equal, in my experience, articles about deep learning attracted on the order of ten times as many eyeballs as other data science stories. This has made deep learning especially ripe for exploitation by click-baiters, including journalists in the traditional media who might possess the writing chops but are out of their depth – see the recent Maureen Dowd piece on Elon Musk, Demis Hassabis, and AI Armageddon in Vanity Fair for a masterclass in low-quality, opportunistic journalism.

As public interest and media mania have grown in tandem, so has my anxiety that the spread of misinformation poses a danger. Here are some of the clues that tip me off that something is seriously wrong:

Widely circulated lists of AI influencers, even as published by companies like IBM that should know better, are populated mostly by people with no discernible contribution to (or even expertise in) AI.

With startling regularity, if I tell an educated person what I do, a question quickly follows in reference to the Singularity, a quasi-religious event with no reliable definition or connection to science that has been prophesied for the coming century and popularized by Ray Kurzweil.

Despite the increasing familiarity of machine learning in the media, the quality of journalism hasn’t improved appreciably. With notable exceptions, newspapers and magazines of record haven’t risen to the challenge of providing terrestrial coverage on the topic despite the serious potential consequences for readers.

If machine learning had no societal impact, the present situation might not be so alarming. After all, cartoonish descriptions of string theory and particle physics seem relatively harmless, since developments in particle physics show no sign of impacting free speech or employment markets in the near future.

For several weeks, I’ve wanted to write a post dissecting the misinformation epidemic but I’ve been stifled by the sheer scale of the problem. To make the topic digestible, I’ve decided to write a multi-part series of posts, addressing the following narrower topics:

THE INFLUENCER INDUSTRY

This post will address the world of AI influencers: some are legitimate researchers with outsize influence, while others are self-designated experts whose influence owes to the Paris Hilton effect, deriving greater celebrity from existing celebrity through social media feedback loops and TED-style puff talks.

THE PROPHETS (Profits?) OF FUTURISM

Here, I’ll address the outsize coverage garnered by futurists. The most widely covered ideas, many of them vague and unfalsifiable, have the flavor of mysticism. And yet, wrongly, they are frequently given a seat at the table alongside more sober reasoning, absent qualification. I plan to take a critical look at these ideas, e.g., the Singularity and Ancestor Simulations, and their coverage in social and mainstream media.

THE FAILURE OF THE PRESS

In this piece, I’ll analyze the mainstream press’s attempts to write about machine learning. I will try to distill the common failure modes (e.g. failing to distinguish between real technology and fictional technology, overstating the brain-like nature of neural networks, and injecting expert opinions despite lacking expertise).

THE PERILS OF MISINFORMATION

Finally, I plan to analyze the various dangers presented by this misinformation. Namely, if people are unable to understand the complex phenomena impacting their lives, then we cannot reasonably expect politicians to debate the topic clearly, or voters to make informed decisions. Absent broad (and clear) understanding of the state of machine learning research, we cannot expect a democratic society to regulate the use of machine learning or to manage its economic consequences reasonably and appropriately.

Updates