An observation made in the briefing report War-Algorithm Accountability (featured in a Readings post last week) is that weapons are not the only military systems increasingly defined by algorithms, automation, and autonomy. Nor are weapons the only military systems that, if significantly automated or programmed to act autonomously, raise issues of reliability, safety, and accountability. No less than Google, Tesla, and the world's automobile manufacturers, the U.S. military has a keen interest in self-driving vehicles that can be deployed in combat for many possible functions, such as logistics and re-supply.

An Army truck delivering supplies is not a weapon system and so is not subject to the law of weapons or the requirements of Department of Defense (DoD) legal weapons review (absent special conditions). However, an Army self-driving supply truck raises concerns about reliability, safety, decision-making capabilities, and accountability similar to those raised by self-driving cars on ordinary civilian roads (in addition to other, military-specific issues). At the recent Association of the U.S. Army Annual Meeting and Exposition 2016 (AUSA 2016), BreakingDefense.com Editor Colin Clark did a video interview with the U.S. Army's chief roboticist, Bob Sadowski, about the path forward for deploying self-driving combat vehicles and the challenges that they present. The video interview is very interesting from the standpoint of assessing the similarities and differences between combat requirements for self-driving vehicles and those for civilian roads, as well as in providing a broad sense of the U.S. Army's robotics program.

Broadly speaking, military self-driving combat vehicles face all the technological challenges of ordinary self-driving cars, plus a bunch more. One difference, for example, is that a military vehicle needs far greater capability than a Google car to operate, whether on roads or otherwise, without the benefit of the intense multi-sensor mapping that Google engineers perform on roads that the Google vehicle will traverse. The Google vehicle starts out with an extraordinarily detailed internal map, and is thus able to look for changes in what the car's sensors detect while driving—a pedestrian, a bike, etc.—rather than constructing an internal representation of its driving environment de novo. Such advance mapping won't always be possible for military vehicles deployed in combat conditions. Military self-driving vehicles might also require an option for a human operator to take over driving the vehicle by remote real-time connection, as well as "blended" modes of operation that are part self-driving, part remotely operated.

Reliability, for purposes of safety and mission performance, looms large as a hurdle for emerging military self-driving vehicles to meet. This is true of civilian self-driving cars as well, of course. Reliability testing for complex software systems—whether for self-driving vehicles, autonomous or highly automated weapons, algorithmic high-speed trading in financial markets, etc.—is key. But testing and verifying reliability are not simply problems of engineering; as these systems increasingly interact with human beings, it is inevitable that they will be subjects of law and regulation, technology by technology.

In the United States, self-driving cars are legally subject to state-specific vehicle codes—but with respect to the design and manufacturing of self-driving vehicles, the U.S. Department of Transportation/National Highway Traffic Safety Administration recently issued a list of issues that designers of self-driving vehicles should take into account. This list of issues (which is not regulation, yet) should be of interest to the U.S. military in its pursuit of self-driving vehicles, partly for what it indicates about regulatory concerns, but also because the U.S. military will not reinvent the wheel and instead will use, and adapt, technologies developed for the civilian sector—technologies increasingly regulated by civilian regulatory agencies such as the Department of Transportation.

The Readings section will soon offer a guest post discussing the DoT/NHTSA policy list and other issues of civilian self-driving vehicle regulation. These are relevant policy considerations in approaching the problems of reliability, safety, functionality, and accountability raised by both ordinary civilian self-driving cars and U.S. military self-driving vehicles. More generally, those working in national security technology and law could probably benefit from awareness of the approaches being taken by civilian regulatory agencies with respect to self-driving cars, civilian drones, and a host of other emerging technologies underpinned by complex software systems and artificial intelligence. Moreover, important policy issues of both civilian and military regulation extend beyond the realms of cyber, the Internet, Big Data, and other fundamentally software-based technologies. Robotic technologies involving physical machines interacting with human beings through increasingly complex and capable algorithms and AI, such as self-driving cars or highly automated weapon systems, present their own perplexing policy and regulatory issues.

A final observation about reliability testing for complex systems, whether civilian or military: In important ways, one of the most developed regimes for testing complex systems in a way that melds together law/regulation and technology is the process of legal weapons review under the requirements of the laws of war. In the U.S. military, this review is conducted by DoD lawyers, along with technical specialists. Weapon systems, even when not "autonomous" or "highly automated" with respect to target selection or engagement decisions, often involve complex software and algorithmic programming, as well as multiple complex software systems for distinct components of the weapon system.

The DoD has a great deal of experience in establishing protocols for testing weapon systems against the legal requirements of the law of armed conflict (LOAC). Those in non-military sectors trying to figure out protocols for design, reliability testing, and the like for ordinary civilian technologies involving complex algorithmic programming in the context of physical-machine operation might consider examining the DoD's technical processes of weapons review, and the way in which technical engineering issues are drawn together with legal requirements.

At the same time, DoD faces many new, big challenges as AI-enabled robotic military systems of all kinds are integrated into the U.S. military, including weapon systems that might be designed with increasingly automated functions. Thus there may be a great deal that DoD lawyers and others associated with the process of legal weapons review could learn from private technology industries—not just the defense industry giants and manufacturers of weapon systems that already interact extensively with DoD, but Silicon Valley firms that are grappling with similar questions as they seek to bring new, socially valuable, but potentially dangerous robotic devices to ordinary consumers. Are there insights about reliability and testing that DoD could gain from interacting with the tech industry (rather than simply the defense industry)? And are there also potential insights that the tech industry might learn from the extensive processes of legal weapons review that DoD has long undertaken?