This is the minimum unit problem — essentially, according to the jury, an intangible “feel” was deemed enough to define a song and was, therefore, eligible for protection. That finding can cut both ways: if you ask the Gaye estate, they’d likely say that this approach protects artists and their music. But if you’re an up-and-coming artist, the implications are terrifying — how could you possibly know whether a jury, with access to the entire music catalogue of the world, would decide that a song you wrote “feels” the same as another song, even one written 40 years ago? That tension — how we balance rewarding new creation against protecting established work — lies at the very core of intellectual property, and it raises all kinds of important questions when it’s applied to something beyond music, something like intelligence.

What Constitutes Minimum Viable Intelligence?

When we talk about applying intellectual property protections to AI, we are, essentially, talking about enabling people to “own” methods of learning, reasoning and participating in defined systems. In other words, continuing to allow companies to patent AI may, eventually, give them the ability to prevent us from, or punish or charge us for, thinking or reasoning. That’s about as dystopian as it gets, so let’s break it down into specifics. There are, broadly, three categories of AI (as defined by the 2016 US Office of Science and Technology Policy report), and each warrants an individual response.

General AI: Despite the attention that it gets in policy debates, general AI — AI capable of human-like perception and cognition, applied over a wide range of contexts and tasks — is far enough away from existence that focusing on the interesting personhood questions it raises is, mostly, a red herring.

Machine and deep learning: Machine- and deep-learning systems — in which AI infers the rules of a system from a large data set, and then applies those rules to current or future data — are capable of crunching extraordinarily large inputs and, increasingly, are rivaling human ability, particularly on pattern-recognition tasks. The path a machine-learning system takes in making a “decision” is opaque and difficult to explain, even to experts. And, to its credit, the USPTO’s questions for comment focus almost exclusively on understanding the role of ownership when machine- and deep-learning models act beyond a creator’s reasonable expectations and create a copyrightable work or infringe copyright.

The race to patent machine-learning models is, basically, about the race to capture approaches to learning. The way that companies are patenting machine- and deep-learning techniques may limit who gets to learn and how — and that’s not only against intellectual property’s defining priorities, it’s extraordinarily dangerous.

Narrow AI: Like machine and deep learning, narrow AI exists in the world now — it’s the type of AI that describes systems like IBM’s chess-playing computer Deep Blue, or its question-answering Watson, which famously competed on Jeopardy! Narrow AI exists in a bounded universe, like a game, and develops optimized approaches for achieving a single (narrow) goal within that system through huge numbers of repetitions. Narrow AI is, at least in a way, explainable and defined by the system it’s applied to. This makes enforcing intellectual property law clear and simple — at least in comparison to the other types of AI.

For example, machine- and deep-learning products are unique because of their complicated processing mechanisms, which means that enforcement actions would likely have to prove, from the outside, that a person, company or competing product is using the same, patented methodology to train its AI system. Narrow AI, by contrast, uses inputs to accomplish a specific task (like playing chess or driving a car) by “learning,” or “being trained,” over time. It is significantly easier both to articulate and to patent an approach to playing a game or solving a particular problem than it is to patent a general approach to learning, especially given the mechanics of enforcement. So, for example, IBM could arguably sue anyone who creates or uses a system for playing chess that resembles, in whole or in part, the way that Deep Blue plays chess. And while such a lawsuit may seem absurdly interventionist, or like the kind of thing that a jury would never allow, remember that a jury just granted one of the largest awards in music-copyright history because of the “feel” of a song.