The stark reality appears to be that artificial intelligence technologies are likely to be more heavily scrutinized under 35 U.S.C. § 101 and less likely to be allowed.

Editor's Note: If you are interested in this topic, please see our webinar with the article's authors, titled The Growing 101 Problem for Artificial Intelligence at the PTO.

_______________

Artificial intelligence technologies are transforming industries and improving human productivity and health. For example, artificial intelligence (A.I.) is used in vehicle navigation to identify quicker routes between destinations, in voice assistants to process natural-language commands, and in robotic systems to assist with surgeries. Yet, the use of A.I. by (or for) the everyday consumer is just scratching the surface of A.I.’s transformative potential. Artificial intelligence may provide an economic boost of approximately $14 trillion within two decades.[1]

Artificial intelligence corresponds to a large and diverse set of computer technologies that allow machines to interpret a physical or data environment and employ reasoning or problem-solving to generate a result. Task-oriented artificial intelligence can use specific constraints, goals and task definitions. Other types of A.I. can include fewer constraints, less structure and/or higher-level task definitions to facilitate more creative solutions. Artificial intelligence systems can be transparent, such that the rationale supporting their results is easily understood by humans. Alternatively, A.I. systems can be opaque, which is often a consequence of designing a system in which a very large number of variables (which may have interrelationships) are defined as inputs.

Importantly, the extraordinary processing capabilities of A.I. systems create sizable design challenges. How should a program be designed (in terms of the inputs it receives, the processing it performs, and the outputs it generates) when it is practically impossible to “check” whether a given computation was performed correctly? Many developers instead merely compare results (and not the computations themselves) that the A.I. system generates for a limited test data set against expected results. This raises additional complexities, such as how to define training, validation and test data sets, and what degree of accuracy is sufficient to stop training (or to trigger additional training).
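To make these data-set questions concrete, below is a minimal, hypothetical sketch. The 70/15/15 split, the toy threshold “model,” and the 95% accuracy target are illustrative assumptions only (not any standard practice or any method discussed by a court): a labeled data set is partitioned into training, validation and test subsets, and “training” stops once validation accuracy reaches the chosen target.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

# Hypothetical labeled data: the value x, labeled True when x > 50.
data = [(x, x > 50) for x in random.sample(range(100), 100)]

# Partition into training, validation and test subsets (70% / 15% / 15%).
n = len(data)
train_set = data[: int(0.70 * n)]               # used to fit the model
val_set = data[int(0.70 * n): int(0.85 * n)]    # used to decide when to stop
test_set = data[int(0.85 * n):]                 # held out for a final check

def accuracy(threshold, subset):
    """Fraction of examples the simple threshold model labels correctly."""
    return sum((x > threshold) == label for x, label in subset) / len(subset)

TARGET = 0.95  # accuracy deemed sufficient to stop "training"
best = None
for threshold in range(100):  # "training": search over candidate thresholds
    if accuracy(threshold, val_set) >= TARGET:
        best = threshold      # validation accuracy reached the target
        break

# Report the selected model and its performance on the held-out test set.
print("selected threshold:", best)
print("test accuracy:", accuracy(best, test_set))
```

Even in this toy setting, the designer's choices (split proportions, the target accuracy, which subset governs stopping) shape the resulting model; in real A.I. systems those choices are far more consequential.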

The United States is the global leader in artificial-intelligence research and development. For example, two Americans are credited with creating the first artificial neural network.[2] However, whether the United States will maintain this position is in question: China’s R&D investment in this technology is expected to exceed the United States’ by the end of 2018.[3] A report published by the U.S. House of Representatives’ Subcommittee on Information Technology stressed that the United States must prioritize support for this technology, as it recognized the substantial military and economic advantages that will accompany the A.I. leadership position.[4] The report emphasized the importance of maintaining the United States’ competitive strength in supporting innovation through R&D funding, regulation and respect for intellectual property.

A prominent technique for securing intellectual-property rights is to apply for a patent. Upon receiving a new application, the United States Patent and Trademark Office (USPTO) assigns the application to a particular technological class, such that an examiner having a suitable understanding of a given technology can be identified to conduct examination. One technological class (class 706) is defined as “Artificial Intelligence.” This creates the opportunity to conduct a focused review of the prospects of securing intellectual property on A.I. technology.

To investigate the degree to which the U.S. avails patent protection for A.I. innovations, we obtained data from LexisNexis® PatentAdvisorSM that characterized each action issued by the USPTO after July 2013. The data indicated whether each action was an allowance or a rejection (i.e., an office action); whether the office action included a rejection under 35 U.S.C. § 101; the art unit of the action; and the date on which the action was issued.

The blue line in Figure 1 shows the percentage of actions issued within the A.I. technological class that were allowances. Notably, this variable does not represent the overall probability that a patent application will be allowed to issue as a patent, as a fair number of applications receive multiple rejections prior to an allowance. Thus, the depicted metric is substantially lower than an allowance rate. Nonetheless, this metric is useful, as it quickly captures changes in examination approaches at the USPTO and is less dependent on applicants’ decisions (as to whether to abandon an application) than the traditional allowance-rate metric.

Notably, with respect to A.I. patent applications, the probability that a USPTO action was an allowance (versus a rejection) has recently decreased more than two-fold. This falloff largely coincided with a Federal Circuit decision: Electric Power Group, LLC v. Alstom S.A., 830 F.3d 1350 (Fed. Cir. 2016). This case held that patent claims directed to monitoring and reporting on the performance of an electric power grid were ineligible for patent protection under 35 U.S.C. § 101 and thus were invalid. More specifically, the Federal Circuit held that they were ineligible as a result of merely being directed to generating, collecting and analyzing information.

The Federal Circuit’s analysis used an approach identified by the Supreme Court in Alice Corp. v. CLS Bank Int’l, 134 S. Ct. 2347 (2014). There, the Supreme Court held that claims from four business-method patents (related to electronic escrow services) were (1) directed to an abstract idea and (2) did not recite significantly more than the abstract idea. This two-part test has since been adopted by lower courts and the USPTO to assess patent eligibility. A near-immediate impact on the examination of business-method patent applications is evident in the red line of Figure 1.

To further explore whether the decrease in allowances was likely attributable to different examination approaches for assessing patent eligibility, we generated trend lines to show the percentage of office actions that included a subject-matter eligibility rejection (under 35 U.S.C. § 101). As we have previously reported, the Alice decision had an immediate and drastic effect on the prevalence of eligibility rejections issued against business-method patent applications. (See the red line in Figure 2.) The percentage of office actions that included eligibility rejections increased from 58% to 95%.

While not as temporally precise, a similarly marked increase in eligibility rejections was observed for the A.I. patent applications when comparing examination prior to Electric Power Group to more recent examination. (See the blue line in Figure 2.) To illustrate, 42% of the A.I. office actions of the third quarter of 2016 included an eligibility rejection, as compared to 84% of the A.I. office actions from the third quarter of 2017.

What does that mean for artificial-intelligence applications? The stark reality appears to be that artificial intelligence technologies are likely to be more heavily scrutinized under 35 U.S.C. § 101 and less likely to be allowed. This presents two questions: Is this examination shift appropriate in view of Electric Power Group? And, from a policy perspective, is this shift acceptable?

The Court in Electric Power Group noted that “we have treated analyzing information by steps people go through in their minds, or by mathematical algorithms, without more, as essentially mental processes within the abstract-idea category.” The authors propose that this sentence of the decision is of utmost importance in the context of patenting A.I. technology. It is frequently impossible for humans to perform the calculations of A.I. systems. For example, the processing that an A.I. system performs on a given data set is often not pre-programmed[5] and may vary depending on dynamic factors, such as the time at which the input data set was processed, which other input data sets were previously processed, or a random-number seed. For A.I. systems that employ repeated or continuous learning, it may even be impossible to identify the particular algorithm(s) used to process a given input data set.

These complexities indicate that A.I. systems are more than just a set of algorithms. They are (frequently) frameworks that are carefully designed (in terms of input variables, hyperparameters, optimization variables, training data sets, validation data sets, etc.) in view of the reality that it may soon be difficult, if not impossible, for a human to even understand (much less perform) the processing of the system. Thus, even though algorithms may support and define a high-level A.I. framework, they are not the same algorithms that are used to process input data. Rather, in many instances, a machine-learning framework will have been designed because an inventor believed that a human would do an inferior job of identifying specific data-processing algorithms (e.g., in terms of machine-learning parameters). Thus, the authors propose that the actual data processing performed by many A.I. systems is not performed by predefined computer algorithms and extends well beyond mental processes and abstract ideas.
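A minimal, hypothetical illustration of this point (the tiny perceptron, the learning rate, and the data are illustrative assumptions, not any system at issue in the cases discussed): the designed framework and the training data are identical in both runs below, yet different random seeds yield different learned weights, and therefore different concrete computations at run time. The designer specifies the framework; the learned parameters that actually process the data are not pre-programmed.

```python
import random

def train_perceptron(samples, seed):
    """Train a two-input perceptron. The learned weights (and hence the
    actual data-processing function) depend on the random seed and on the
    order in which training examples are seen -- they are not pre-programmed."""
    rng = random.Random(seed)
    # Randomly initialized weights and a zero bias.
    w = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
    b = 0.0
    for _ in range(10):            # ten passes over the data
        rng.shuffle(samples)       # presentation order depends on the seed
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred     # classic perceptron update rule
            w[0] += 0.1 * err * x1
            w[1] += 0.1 * err * x2
            b += 0.1 * err
    return w, b

# Same designed framework, same training data (logical AND), different seeds.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w_a, b_a = train_perceptron(list(data), seed=1)
w_b, b_b = train_perceptron(list(data), seed=2)

# The two trained systems embody different concrete parameters.
print("weights differ:", (w_a, b_a) != (w_b, b_b))
```

In a system of this size the learned weights can still be inspected; in modern A.I. systems with millions of interrelated parameters, the resulting computation is effectively beyond human performance or review.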

From a policy perspective, should artificial-intelligence innovations be eligible for patent protection? We propose that the answer is ‘yes’, provided that the other statutory requirements (e.g., novelty and non-obviousness) are satisfied. This technology is rapidly increasing in complexity (both with respect to the inner workings of the systems and the types of data to be processed) and in its use cases. A.I. innovations typically involve a high-level design effort to construct a computational system that can appropriately process unpredictable data. While mathematical algorithms are used as a basis for such a system, they often serve as building blocks for a framework within which the system will itself learn its own operating parameters. Through strategic design efforts, we have witnessed this technology quickly extending beyond more efficiently doing what a human can do. Rather, at least in narrow applications, A.I. is achieving better results (e.g., higher accuracy, fewer errors, fewer crashes).

Thus, A.I. technologies frequently extend beyond mere implementation of pre-defined algorithms. These technologies are also rapidly becoming apparent or invisible cornerstones of our daily lives. And other countries are committed to funding and supporting A.I. technological innovations at levels soon to exceed those of the U.S. government. If we believe that the patent system promotes innovation, how could we exclude A.I. technologies from patenting opportunities in the U.S.?

_______________

[1] Purdy, M. and Daugherty, P., “How AI Boosts Industry Profits and Innovation”, Accenture, 2017.

[2] McCulloch, W. and Pitts, W., “A Logical Calculus of the Ideas Immanent in Nervous Activity”, Bulletin of Mathematical Biophysics, Vol. 5, Issue 4, pp. 115-133, 1943.

[3] National Science Foundation, “National Science Board Statement On Global Research And Development (R&D) Investments NSB-2018-9”, National Science Foundation, Feb. 7, 2018. Retrieved from www.nsf.gov/nsb/news/news_summ.jsp?cntn_id=244465 (last accessed September 17, 2018).

[4] Hurd, W. and Kelly, R., “Rise of the Machines”, Subcommittee on Information Technology, Committee on Oversight and Government Reform, U.S. House of Representatives, 2018.

[5] In terms of the particular processing, which can depend on learned parameters, such as weights, coefficients or support vectors.

Image Source: Deposit Photos.