The Department of Defense (DoD) plans to invest $40 million to “enhance and accelerate” the use of artificial intelligence in the Army, Navy, and Air Force, as well as to create an artificial intelligence commission within the Office of the Secretary of Defense to oversee everything related to artificial intelligence throughout the DoD. $40 million is, well, a lot of money. Although some tech workers at companies like Google are protesting their employers’ pursuit of DoD contracts, a lucrative DoD software contract could be difficult to turn down — and even if one company does turn it down, a slew of others will be more than eager to take its place.

For the military, a finely tuned artificial intelligence system could do everything from planning battlefield operations, to singling out drone strike targets, to piloting manned or unmanned planes and trucks, to predicting the probability of an attack by opposing forces. This is the first time the DoD has allocated internal funding specifically for artificial intelligence research.

According to the National Defense Authorization Act (NDAA) for the 2019 fiscal year — released on July 23 after the House and Senate agreed on the terms of the bill — the Air Force specifically will get $20 million for artificial intelligence research, support activities, and “battlespace knowledge development.” Meanwhile, the Army and Navy will get $5 million each for artificial intelligence research. The still-nonexistent artificial intelligence commission will get $10 million in initial funding to get off the ground.

The NDAA marks a major move toward the DoD seriously prioritizing military uses of artificial intelligence, calling for DoD-specific workers, labs, and test ranges for military applications of artificial intelligence.

Artificial intelligence could, in a perfect world, reduce the number of deaths due to human error. Drone strikes, for instance, are notorious for accuracy problems and undercounted deaths. But artificial intelligence also has the potential to automate and dehumanize acts of war, shifting the burden of killing to a machine rather than a human — even if, for now, it’s still a human pulling the trigger. The AI commission, for its part, will have to submit an official definition of the term “artificial intelligence” as well as proposed ethical and legal regulations.

In the private sphere, companies are already thinking seriously about how they plan, or don’t plan, to deploy artificial intelligence to take human lives. The Tech Workers Coalition — made up of employees at companies like Google, Amazon, Microsoft, and IBM — called for their employers to reject any artificial intelligence development contracts with the DoD. Google’s once-secret DoD initiative, “Project Maven,” entailed automating the process by which drones choose a target. The project sparked employee outrage, and Google opted not to renew Project Maven for 2019.

Last week, over two thousand artificial intelligence developers, researchers, and thinkers — including Elon Musk, Google DeepMind co-founder Demis Hassabis, and Google Machine Intelligence head Jeff Dean — signed a Lethal Autonomous Weapons Pledge, vowing to keep artificial intelligence out of the decision to kill in a military context. But top U.S. military drone manufacturers such as Northrop Grumman, Boeing, General Atomics, and Textron did not sign on to the pledge. In practice, these companies could simply purchase and deploy artificial intelligence created by a company or academic institution that didn’t sign.

Meanwhile, there’s a diverse market of companies ready to pounce on military opportunities to help their businesses grow — even before their products are available to the public. Since 2015, the DoD has used its Defense Innovation Unit Experimental (DIUx) to issue contracts to private companies to develop pilot programs for new and emerging tech in a military context. Recently, DIUx discreetly awarded two Silicon Valley companies — Kitty Hawk and Joby Aviation — about $1 million each to develop their autonomous, self-piloting air taxis (which carry a single, hands-off passenger) for still-undisclosed military purposes.

The 2019 NDAA is on track to be signed into law by Trump between now and October 1, the beginning of the 2019 fiscal year. Per the act, a senior DoD official must be appointed to lead the artificial intelligence commission within one year of the act being signed. We’re a year away, perhaps less, from a centralized DoD system aimed at researching and deploying artificial intelligence for warfare.