The United States Army wants to develop a system that can be quickly integrated and deployed into its weaponized drone fleet to automatically Detect, Recognize, Classify, Identify (DRCI) and target enemy combatants and vehicles using artificial intelligence (AI). This would be a significant leap forward: whereas humans still operate current military drones, this technology could usher in a new era of autonomous drones conducting operations in hybrid wars without human oversight.

The project is called “Automatic Target Recognition of Personnel and Vehicles from an Unmanned Aerial System Using Learning Algorithms” (a very original name), and its details were recently released on the Small Business Technology Transfer (STTR) website. In other words, the Department of Defense (DoD), via the Army, is asking private firms and research institutions that have developed image-targeting AI platforms to form partnerships for an eventual technology transfer.

Once the technology transfer is complete, these drones will rely on machine-learning algorithms, such as neural networks, in what could be the ultimate militarization of AI. Currently, military drones have little onboard intelligence; they send a downlink of high-definition video to a military analyst, who manually decides whom to kill.

Here is the program’s objective:

“Develop a system that can be integrated and deployed in a class 1 or class 2 Unmanned Aerial System (UAS) to automatically Detect, Recognize, Classify, Identify (DRCI) and target personnel and ground platforms or other targets of interest. The system should implement learning algorithms that provide operational flexibility by allowing the target set and DRCI taxonomy to be quickly adjusted and to operate in different environments.”

A full description of the program:

“The use of UASs in military applications is an area of increasing interest and growth. This coupled with the ongoing resurgence in the research, development, and implementation of different types of learning algorithms such as Artificial Neural Networks (ANNs) provide the potential to develop small, rugged, low cost, and flexible systems capable of Automatic Target Recognition (ATR) and other DRCI capabilities that can be integrated in class 1 or class 2 UASs. Implementation of a solution is expected to potentially require independent development in the areas of sensors, communication systems, and algorithms for DRCI and data integration. Additional development in the areas of payload integration and Human-Machine Interface (HMI) may be required to develop a complete system solution. One of the desired characteristics of the system is to use the flexibility afforded by the learning algorithms to allow for the quick adjustment of the target set or the taxonomy of the target set DRCI categories or classes. This could allow for the expansion of the system into a Homeland Security environment. ”
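One concrete way to read the solicitation’s requirement for “quick adjustment of the target set or the taxonomy” is as a remapping layer between a fixed detector’s fine-grained labels and the operator-selected DRCI categories, so that the taxonomy can change without retraining the underlying model. The solicitation does not prescribe any implementation; the sketch below is purely illustrative, and all class names and functions in it are invented.

```python
# Hypothetical sketch: folding a fixed detector's fine-grained labels
# into an operator-selected DRCI taxonomy without retraining the model.
# All labels and category names here are invented for illustration.

def build_taxonomy(mapping):
    """Return a classifier that maps detector labels to DRCI categories.

    mapping: dict of detector label -> DRCI category; any label absent
    from the mapping is classified as "unknown".
    """
    def classify(detections):
        # detections: list of (label, confidence) pairs from the detector
        return [(mapping.get(label, "unknown"), conf)
                for label, conf in detections]
    return classify

# The operator selects a coarse taxonomy for one mission...
mission_a = build_taxonomy({
    "sedan": "civilian_vehicle",
    "pickup_truck": "civilian_vehicle",
    "tank": "ground_platform",
    "person": "personnel",
})

# ...and can swap in a different one without touching the detector itself.
mission_b = build_taxonomy({
    "tank": "target_of_interest",
})

detections = [("sedan", 0.91), ("tank", 0.84), ("bicycle", 0.40)]
print(mission_a(detections))
print(mission_b(detections))
```

Under this reading, the learning algorithm itself stays fixed between missions; only the cheap mapping layer changes, which is one plausible way to satisfy the “operational flexibility” the Army describes.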

Once the Army selects a private firm or research institution for a joint venture, the partnership will allow both entities to advance AI-driven drone killing. Before this dangerous new weapon can be commercialized, three phases must be completed:

“PHASE I: Conduct an assessment of the key components of a complete objective payload system constrained by the Size Weight and Power (SWAP) payload restrictions of a class 1 or class 2 UAS. Systems Engineering concepts and methodologies may be incorporated in this assessment. It is anticipated that this will require, at a minimum, an assessment of the sensor suite, learning algorithms, and communications system. The assessment should define requirements for the complete system and flow down those requirements to the sub-component level. Conduct a laboratory demonstration of the learning algorithms for the DRCI of the target set and the ability to quickly adjust to target set changes or to operator-selected DRCI taxonomy.”

“PHASE II: Demonstrate a complete payload system at a Technology Readiness Level (TRL) 5 or higher operating in real time. On-flight operation can be simulated. Complete a feasibility assessment addressing all engineering and integration issues related to the development of the objective system fully integrated in a UAS capable of detecting, recognizing, classifying, identifying and providing targeting data to lethality systems. Conduct a sensitivity analysis of the system capabilities against the payload SWAP restrictions to inform decisions on matching payloads to specific UAS platforms and missions.”

“PHASE III: Develop, integrate and demonstrate a payload operating in real time while on-flight in a number of different environmental conditions and providing functionality at tactically relevant ranges to a TRL 7. Demonstrate the ability to quickly adjust the target set and DRCI taxonomy as selected by the operator. Demonstrate a single operator interface to command-and-control the payload. Demonstrate the potential to use in military and homeland defense missions and environments.”

Interestingly enough, The Conversation believes that once these AI drones are commercialized, there will be “vast legal and ethical implications for wider society.” Indeed, the sphere of warfare could soon expand to include technology companies, engineers, and scientists, who could be labeled valid military targets because of their involvement in writing code for the machines.

The Conversation makes a stunning point about the legal exposure of Silicon Valley technology firms that provide lines of code to autonomous drone weapon systems: under international humanitarian law, “dual-use” facilities – those that develop products for both civilian and military application – can be attacked in the right circumstances.

“The prospect of totally autonomous drones would radically alter the complex processes and decisions behind military killings. But legal and ethical responsibility does not somehow just disappear if you remove human oversight. Instead, responsibility will increasingly fall on other people, including artificial intelligence scientists. The legal implications of these developments are already becoming evident. Under the current international humanitarian law, “dual-use” facilities – those which develop products for both civilian and military application – can be attacked in the right circumstances. For example, in the 1999 Kosovo War, the Pancevo oil refinery was attacked because it could fuel Yugoslav tanks as well as fuel civilian cars. With an autonomous drone weapon system, certain lines of computer code would almost certainly be classed as dual-use. Companies like Google, its employees or its systems, could become liable to attack from an enemy state. For example, if Google’s Project Maven image recognition AI software is incorporated into an American military autonomous drone, Google could find itself implicated in the drone “killing” business, as might every other civilian contributor to such lethal autonomous systems.”

The Conversation adds that recent events involving autonomous AI in society “should serve as a warning.”

“Uber and Tesla’s fatal experiments with self-driving cars suggest it is pretty much guaranteed that there will be unintended autonomous drone deaths as computer bugs are ironed out.”

If militarized AI machines are left to decide who dies, we are left with one simple question: how many non-combatant deaths will the Army count as acceptable while the AI drone technology is refined?