Silicon Valley sells progress, and so it’s no wonder that the Valley has generally embraced the positive hype about artificial intelligence today. Hopeful new start-ups bang the drum of AI, expecting to ride the wave of excitement into venture capital and future success. Yet an eclectic bunch of investors and iconoclasts in the Valley have also plunged headlong into worries that AI is coming too soon and changing human society too fast. Most of those concerns focus on the singularity, a soon-to-arrive crossover point in the affairs of man and machine, where machines overtake human intelligence and we cease to be the most interesting feature of the planet.

Elon Musk, the founder of SpaceX and Tesla, has openly speculated that humans could be reduced to “pets” by the coming superintelligent machines. Musk has donated $10 million to the Future of Life Institute, in a self-described bid to help stave off the development of “killer robots.” In Berkeley, the Machine Intelligence Research Institute (MIRI) is dedicated to addressing what Nick Bostrom and many others describe as an “existential threat” to humanity, one eclipsing previous (and ongoing) anxieties about climate change, nuclear holocaust, and the other familiar perils of modern life. Luminaries like Stephen Hawking and Bill Gates have also commented on the scariness of artificial intelligence.

The idea that AI represents a clear and present danger has an old pedigree. As far back as 2000, Bill Joy, the former chief scientist of now-defunct Sun Microsystems, penned one of the most famous apocalyptic warnings ever written about the threat AI poses to humanity, in his article “Why the Future Doesn’t Need Us”—published by (who else?) Wired, and widely discussed as the new century began. Yet the message was drowned out by more palpable worries: the terrorist attacks of September 11, 2001. Today, over a decade later, Joy’s anxiety over killer robots, made possible by rapid advances in AI, is back. It competes now with encomiums to AI as the milestone of our future success.

Overly ebullient discussion of smart gadgets and AI has always adorned the glossy pages of magazines like Wired. Of late, however, the once academic and speculative subject has spread to the mainstream media, too. The New York Times worried about “Artificial Intelligence as a Threat” in a November 2014 article (which appeared, curiously, in Fashion and Style). John Markoff, a technology writer for the Times, has written dozens of articles on the topic, with titles leaving little to the imagination: “The Rapid Advance of Artificial Intelligence,” to pick one. Many other outlets publish similar stories. AI, it seems, is coming—and fast.

Media attention isn’t limited to newspapers and magazines. Nonfiction books about the positive potential of AI have also proliferated in recent years. Erik Brynjolfsson and Andrew McAfee, both of MIT’s Center for Digital Business and the Sloan School of Management, argue in their 2014 book, The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies, that we’re rapidly entering a new era altogether, as machines begin assuming roles that were once the sole purview of humans. From robotics in manufacturing to personalization on the web, AI is changing the landscape of the new economy, they argue. Mostly, the machine age is a benefit, as boring or dangerous jobs are passed off to machines, and interesting work is helped along by intelligent computing assistants. Artificial intelligence is upon us, say Brynjolfsson and McAfee, but it’s basically wonderful news: for business, for our standard of living, and for the future of humanity.