December was a big month for advocates of regulating artificial intelligence. First, a bipartisan group of senators and representatives introduced the Future of A.I. Act, the first federal bill focused solely on A.I. It would create an advisory committee to make recommendations about A.I. on topics including the technology’s effect on the American work force and strategies for protecting the privacy rights of those it affects. Then the New York City Council approved a first-of-its-kind bill that, once signed into law, will create a task force to examine the city’s own use of automated decision systems, with the ultimate goal of making its use of algorithms fairer and more transparent.

Perhaps not coincidentally, these efforts overlap with increasing calls to regulate artificial intelligence, along with claims by the likes of Elon Musk and Stephen Hawking that it poses a threat to humanity’s literal survival.

But this push for broad legislation to regulate A.I. is premature.

To begin with, even experts can’t agree on what, exactly, constitutes artificial intelligence. Take the recent report released by the AI Now Institute, which aims to create a framework for ethically implementing A.I. Though the report is focused entirely on A.I., it acknowledges that no commonly accepted definition of the term exists, describing it only loosely as “a broad assemblage of technologies … that have traditionally relied on human capacities.”

“Artificial intelligence” is all too frequently used as a shorthand for software that simply does what humans used to do. But replacing human activity is precisely what new technologies accomplish — spears replaced clubs, wheels replaced feet, the printing press replaced scribes, and so on. What’s new about A.I. is that this technology isn’t simply replacing human activities, external to our bodies; it’s also replacing human decision-making, inside our minds.