ANALYSIS/OPINION:

Presidents have ordered troop surges into Iraq and Afghanistan, and President Trump is now applying that strategy in another arena. His Feb. 11 executive order calls for a surge in research and development of artificial intelligence (AI) technologies, directing federal agencies to drive progress on AI through their own research and development and through investment in R&D in the private sector and academia.

“Artificial intelligence” and “machine learning” are often used interchangeably despite the vast difference between them. To analyze where we need to go, we first need to define the terms.

Computer programs contain “algorithms,” i.e., unambiguously defined procedures that enable the machine to solve a problem. Because they are unambiguous, they don’t allow the machine to interpret data subjectively or in the larger context created by learning and experience, as the human mind can.

Machine learning takes computer functionality to the next level. Fed vast libraries of good data, a computer’s algorithms can “learn.” Machine learning algorithms can sift through data to determine which of it the computer should process. Machine learning enables computers to recognize patterns, predict results and, for example, play chess as well as or better than humans.
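The distinction drawn in the two paragraphs above can be sketched in a few lines of toy code (a hypothetical illustration, not drawn from the article or any real system): a conventional algorithm applies a hard-coded rule, while a learning algorithm derives its rule from labeled example data and so adapts when the data change.

```python
def fixed_rule(score):
    # A conventional algorithm: an unambiguous, hand-coded procedure.
    # The threshold of 50 is fixed forever; the rule cannot adapt.
    return "spam" if score > 50 else "ham"

def learn_rule(examples):
    # A minimal "learning" algorithm: derive a threshold from labeled
    # training data, here the midpoint between each class's average score.
    spam_scores = [x for x, label in examples if label == "spam"]
    ham_scores = [x for x, label in examples if label == "ham"]
    threshold = (sum(spam_scores) / len(spam_scores) +
                 sum(ham_scores) / len(ham_scores)) / 2
    # The returned rule embodies what was "learned" from the data.
    return lambda score: "spam" if score > threshold else "ham"

# Training data: (score, label) pairs supplied by a human.
training = [(90, "spam"), (80, "spam"), (10, "ham"), (20, "ham")]
learned = learn_rule(training)
print(learned(75))   # classifies using the learned threshold
```

Real machine learning systems learn far richer rules from far larger data sets, but the principle is the same: the rule comes from the data, not from the programmer.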

An instructive example of such discriminating algorithms was the Stuxnet computer “worm” attack on Iran’s nuclear enrichment centrifuges, first uncovered in 2010. The challenge was enormous because the computers running the centrifuges aren’t connected to the Internet.

Stuxnet’s algorithms were designed to distinguish the specific computers running the Iranian centrifuges from all others and to attack only them. The worm can thus reside on any “host” computer without attacking it.

Through a series of cyber attacks, Stuxnet was inserted into “host” computers (which were connected to the Internet) believed to be used by people servicing the centrifuge site.

When those people brought their computers into the Iranian nuclear site and connected them to the computers inside, Stuxnet automatically inserted itself into the latter. It then recognized its specific targets and attacked, causing enormous physical damage by making the centrifuges spin too quickly and wreck themselves.

Because Stuxnet spreads automatically, its presence has since been detected in computer systems worldwide, where it resides harmlessly.

AI builds upon machine learning and takes it several quantum leaps farther. Real AI algorithms will enable a computer to function in approximately the same way as the human mind: perceiving, judging and solving problems both objectively and subjectively. AI computers will discern problems not only through direct data input but by exploring the Internet and other computer systems to which they can (overtly or covertly) gain access. They will be able to reprogram their own algorithms to improve their capabilities.

AI will enable a computer — through human-like senses such as visual and audio recognition — to detect a problem, analyze it subjectively, and find a solution within the computer’s domain, which could be anything from medical diagnosis to intelligence gathering. Implementation, perhaps autonomously, raises profound ethical and moral questions.

How much autonomy, and in what respects, should machines have? The certainty that our adversaries will not constrain their AI technologies on moral or ethical grounds requires a serious debate of whether and to what extent our AI must be so constrained.

True AI hasn’t been achieved, but many nations — including Russia, China, Iran and North Korea — are working hard to develop it. China, the most telling example, is reportedly concentrating on AI as a means of future warfare.

According to a February 2019 report by the Center for a New American Security (CNAS), China views AI as the new focus of international competition. Chinese President Xi Jinping convened an October 2018 meeting of the Chinese Politburo on the subject — something done only for the highest-priority issues — at which the Politburo decided that China should lead the world in AI technology and reduce its dependence on foreign sources.

China is seeking to free itself of foreign dependencies in machine learning — the core technologies of which were developed largely outside China — and in equipment such as semiconductors.

That report notes that while some Chinese leaders are discussing international limits on AI modeled on arms control agreements, China’s conduct is entirely inconsistent with that idea.

In military and national security matters, China is pushing drone technology with increasing machine learning and autonomy. It is also pushing machine learning — the basis for AI — into intelligence gathering and analysis, as well as command and control functions.

According to the CNAS report, in October 2018 Maj. Gen. Ding Xiangrong, deputy director of the General Office of China’s Central Military Commission — one of China’s principal governing bodies — gave a speech in which he defined China’s military goals in terms of “narrow[ing] the gap between the Chinese military and global advanced powers” by taking advantage of the “ongoing military revolution centered on information technology and intelligent technology.”

There is every reason to believe that Gen. Ding’s statements reflect the highest level of Chinese thinking and planning, and that other adversaries’ thinking and pursuit of AI are, like China’s, extending to intelligence gathering and analysis.

Mr. Trump’s executive order is a good start, but it probably isn’t enough to bring about a real surge in AI development. He and our defense and intelligence communities need to go much farther and faster, because the ongoing military and intelligence revolution centered on AI won’t wait for us.

• Jed Babbin, a deputy undersecretary of Defense in the George H.W. Bush administration, is the author of “In the Words of Our Enemies.”


Copyright © 2020 The Washington Times, LLC.