Most of the time, artificial intelligence is simply a tool that helps inventors—for example, by synthesizing enormous data sets to find promising drugs or discover new materials. But what would happen if it were fully responsible for the act of invention itself?

That’s what Ryan Abbott, a patent attorney and professor of health sciences at the University of Surrey in the United Kingdom, wanted to put to the test.

“If I write a Word document with Microsoft Word, that doesn’t make Microsoft Word an author, and if I use an Excel spreadsheet, that doesn’t make Excel an inventor on a patent I file,” says Abbott, who is also one of the lawyers working on the Artificial Inventor Project. But, he says, perhaps there are times when a piece of software or an algorithm should be considered the inventor.

Back in August, the AIP experts filed patent applications for two inventions—a warning light and a food container—on behalf of Stephen Thaler, who is CEO of a company called Imagination Engines.

Instead of listing a human inventor on the applications, the team named Dabus AI, an AI system that Thaler spent over a decade building. Dabus AI came up with the innovations after being fed general data about many subjects. Thaler may have built Dabus, but he has no expertise in creating lights or food containers, and wouldn’t have been able to generate the ideas on his own. And so, the AIP team argues, Dabus itself is the rightful inventor.

The UK and European offices considered the inventions themselves worthy of patents, but both patent offices recently rejected the applications because the “inventor” was not a human. As a result, the devices are not under patent protection. (The US Patent and Trademark Office is still evaluating and is requesting comment on questions related to AI and intellectual property law.)

Already, Abbott is planning to appeal the decisions. He believes there will be more and more cases where AI should be considered a genuine inventor and that the law needs to be ready. “At stake in this discussion is the future of innovation,” he says. Not allowing AI to be recognized as an inventor is not only morally problematic, he says, but will lead to unintended consequences.

So what are they arguing?

First things first: Nobody is arguing that the AI should own the patent. It’s common for the inventor of a patent to be an individual, while its owner is the company that employs the inventor. In this case, Abbott argues that the inventor is Dabus AI and the owner would be Thaler.

It sounds simple enough, except that patent law has very specific ways of assigning ownership: the inventor must be either an employee or a contractor of the company that owns the patent. But those are both legal categories, and an AI fits neither, explains Peter Finnie, an IP expert at Potter Clarkson. This alone is grounds enough for the applications to be rejected, before even getting into the requirement that inventors be individuals and “natural persons.” (Animals are also not allowed to hold intellectual property under copyright law, as was determined in the “monkey selfie” case.)

A more fundamental problem is that we’re nowhere near general artificial intelligence, so few people will believe that the AI is truly the inventor. It’s far more common, adds Finnie, for businesses to talk about “computer-assisted innovation.”

Plus, being an inventor comes with certain responsibilities. “If AIs were inventors, they’d also have to be able to enter into contracts,” says Chris Mammen, an IP lawyer at Womble Bond Dickinson. They’d have to be able to authorize licenses and file lawsuits as well. They can do none of those things. A few years ago, policymakers in the European Union discussed creating a category of “electronic personality,” but that fizzled out in part because of these practical considerations.

“I won’t dispute that AIs are really good at solving problems and solving them in ways that are new and different and that people could maybe never come up with,” says Mammen. “But as a policy matter, I’m not sure that our patent system is the right tool to reward the development of those kinds of solutions.”

It’s possible to imagine a situation in which no humans make a major contribution to an innovation, but “I’m not sure we’re there yet,” he adds.

The case for the AI inventor

For Abbott, the fact that we are not at the point where machines are routinely inventors is part of the point: society, he argues, needs to figure this out early.

He acknowledges that AI doesn’t just spring into existence—it must be coded and trained and fed data—but that doesn’t necessarily mean everything an AI creates can or should be traced back to humans. Hundreds or thousands of people might be involved in programming IBM’s supercomputer Watson with general problem-solving capabilities, but “if Watson then applies those capabilities and solves a particular problem in a way that results in a patent, it’s not clear that anything any of those people have done qualifies them to be an inventor,” Abbott says.

But if humans can’t be listed as inventors because they weren’t intimately involved, and the AI can’t be listed as an inventor either, then the invention may not be patentable at all. This, Abbott suggests, could be problematic: it could discourage companies from investing in AI technologies and stall breakthroughs in important areas like drug discovery. While there may not be much social good to be gained by giving rights to an AI, he says, there is good to be gained by changing intellectual-property law to acknowledge its contribution.