The US Department of Defense has formally adopted a set of principles to ensure the ethical development and deployment of AI technology for military use.

"We owe it to the American people and to our men and women in uniform to adopt AI ethics principles that reflect our nation's values of a free and open society," Lieutenant General Jack Shanahan, director of the DoD's Joint Artificial Intelligence Center (JAIC), said during the briefing.

The principles, first proposed last year, boil down to five areas of concern:

Responsible: DoD personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment and use of AI capabilities

Equitable: The department will take deliberate steps to minimise unintended bias in AI capabilities

Traceable: The department's AI capabilities will be developed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedure and documentation

Reliable: The department's AI capabilities will have explicit, well-defined uses, and the safety, security and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire life cycles

Governable: The department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior

The move comes after Michael Kratsios, chief technology officer of the United States and deputy assistant to the President at the White House Office of Science and Technology Policy, drafted a proposal (PDF) to regulate AI applications from the private sector.

What has changed?

Now it looks like the DoD also supports some form of internal regulation surrounding AI. So what does this mean for the US military? Well, probably not much. None of these principles are legally binding in any way. Although the military pledges to be ethically minded when rolling out AI technology in combat and non-combat applications, it's not clear who, if anyone, is holding it accountable to those principles.

The DoD has made it clear that it hopes to open up and ink deals with companies that have the technical expertise to develop algorithms for warfare. Under Project Maven, first revealed in 2018, Google was employed to help the Pentagon build computer-vision software that would automatically analyse and identify useful information in video footage captured by drones.

After the Chocolate Factory faced internal revolt and public criticism, CEO Sundar Pichai decided to can the whole thing. Maybe by adopting ethical principles, the DoD is trying to portray its technical efforts as wholesome in an attempt to woo back sceptical Silicon Valley companies. That strategy probably won't work, however. If anyone knows how to front AI principles to appear more ethical, it'll be the tech giants themselves. ®