Robot vs. human: This is the new battle in vogue. Just ask Col. Gene Lee, a former fighter pilot and U.S. Air Force pilot trainer who was defeated in 2016 by artificial intelligence in an air combat simulation. The AI program, even when deprived of certain controls, was able to react 250 times faster than a human being. It is one story among many of how AI technologies play, and will continue to play, a leading role in operational superiority over the coming decades.

I personally choose not to pit humans against robots. There is no question of replacing human intelligence with artificial intelligence; rather, AI will be essential in multiplying our capabilities. AI is not a goal in itself; it must contribute to better-informed and faster decision-making for the benefit of our soldiers.

AI means unprecedented intelligence capabilities. Cross-referencing thousands of satellite images with data gleaned from the dark web in order to extract meaningful links: This is what big-data analysis will make possible. AI also means better protection for our troops. Evacuating wounded personnel from the battlefield and clearing a route or a mined area are perilous tasks that we will soon be able to delegate to robots. Lastly, AI means a stronger cyber defense. Cyber soldiers will be able to counter, at very high speed, the increasingly stealthy, numerous and automated attacks that threaten our systems and our economies.

We have everything to gain by embracing the opportunities offered by artificial intelligence. This is why the French Ministry of the Armed Forces has decided to invest massively in this area. However, we are not naïve, and we do not ignore the risks associated with the development of emerging technologies such as AI.

Hence, we chose to develop defense artificial intelligence according to three major principles: abiding by international law, maintaining sufficient human control and ensuring the permanent responsibility of the chain of command.

To ensure daily compliance with these principles over the long term, and to inform our ethical reflection as new uses of AI appear every day, I have decided to create a ministerial ethics committee focused on defense issues. This committee will take office at the very end of this year and will serve as an aid to decision-making and foresight. Its main role will be to address the questions raised by emerging technologies and their potential use in the defense field.

At the heart of these questions stands an issue of both interest and concern, within the AI community and civil society alike: lethal autonomous weapon systems, which some call “killer robots.” These are weapon systems that would be able to operate without any form of human supervision, to alter the framework of the mission assigned to them, or even to assign new missions to themselves.


It is important to note that such systems do not yet exist in today’s theaters of operation. Debating them is nonetheless legitimate. Indeed, France introduced this issue to the United Nations in 2013, in the framework of the Convention on Certain Conventional Weapons. We wish these discussions to continue in this multilateral framework, the only one that could eventually bring about a regulation of military autonomous systems, as it alone is at once universal, credible and effective. We cannot rule out the risk of such weapons one day being developed by irresponsible states, or falling into the hands of nonstate actors. This makes the need to rally all the other nations of the world even more imperative.

France defends its values, respects its international commitments and remains faithful to them. Our position is unambiguous and has been expressed in the clearest terms by President Emmanuel Macron: France refuses to entrust the decision of life or death to a machine that would act fully autonomously and escape any form of human control.

Such systems are fundamentally contrary to all our principles. They hold no operational value for a state whose armed forces abide by international law, and we will not deploy any. Terminator will never march down the Champs-Elysées on Bastille Day.