One day the co-pilot might be an AI (Image: U.S. Air National Guard photo/Airman 1st Class Robert Cabuco)

Would you trust an artificial intelligence to fly an armed combat jet? Software called ALPHA is being used to fly uncrewed jets in simulations and could one day help pilots in real-world missions. ALPHA’s developers claim that, unlike many AI systems, its behaviour can be verified at each step, meaning it won’t act unpredictably.

ALPHA was developed by Psibernetix in Ohio as a training aid for the US air force. It was originally designed to fly aircraft in a virtual air combat simulator, but has now been turned into a friendly co-pilot system that can help human pilots using the simulator.

Many popular AIs are based on deep learning neural networks that mimic human brains. These use layers of computation that are hard for humans to decipher, which makes it tricky to work out how a system reached a decision. ALPHA is different. It uses a fuzzy logic approach called a Genetic Fuzzy Tree (GFT) system.

“Rather than emulating the biological structure of the brain, fuzzy logic emulates the thought process of a human,” says Nick Ernest, CEO of Psibernetix. He says this makes it easier to work out each step the system took to produce an outcome.

Following the rules

The system classifies data in terms of English-language concepts, such as a plane “moving fast” or being “very threatening”, and develops rules on how to behave in response. For example, ALPHA can decide whether to fire a missile or take evasive manoeuvres based on a combination of how fast and threatening an opposing aircraft appears to be.
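The mapping from raw numbers to linguistic concepts, and a rule built on top of them, can be sketched in a few lines. Everything below, the thresholds, the membership ramps and the 0.5 cut-off, is an illustrative assumption and not ALPHA's actual logic:

```python
def membership_fast(speed_knots):
    """Degree (0..1) to which a speed counts as 'moving fast'.
    Assumed ramp: below 300 knots not fast at all, above 600 fully fast."""
    if speed_knots <= 300:
        return 0.0
    if speed_knots >= 600:
        return 1.0
    return (speed_knots - 300) / 300

def membership_threatening(closure_rate):
    """Degree (0..1) to which an aircraft is 'very threatening',
    based on an assumed closure-rate ramp."""
    if closure_rate <= 0:
        return 0.0
    if closure_rate >= 400:
        return 1.0
    return closure_rate / 400

def decide(speed_knots, closure_rate):
    """Fuzzy rule: IF fast AND very threatening THEN evade, ELSE engage."""
    fast = membership_fast(speed_knots)
    threat = membership_threatening(closure_rate)
    # Fuzzy AND is commonly taken as the minimum of the membership degrees.
    danger = min(fast, threat)
    return "evade" if danger > 0.5 else "engage"

print(decide(550, 380))  # fast and closing quickly -> "evade"
print(decide(250, 100))  # slow, low threat -> "engage"
```

The key point is that each input keeps a readable label ("fast", "very threatening") and each rule is a human-auditable sentence, which is what makes tracing a decision back through the system possible.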

By breaking the decision-making process down into many sub-decisions like this, ALPHA avoids the computational overload that can slow other fuzzy logic systems.

“Without the GFT structure, ALPHA would not be able to run or train, even on the largest supercomputer in the world,” says Ernest. “With it, however, it can run on a Raspberry Pi and training can occur on a $500 desktop PC.”
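The scaling benefit Ernest describes can be illustrated with some rule-counting arithmetic. In a single flat fuzzy system over n inputs, each with m linguistic values, the rule base grows exponentially; a cascade of small two-input sub-systems grows only linearly. The numbers below are illustrative, not ALPHA's actual dimensions:

```python
def flat_rule_count(n_inputs, m_values):
    """One monolithic fuzzy system: every combination of linguistic
    values across all inputs needs its own rule."""
    return m_values ** n_inputs

def tree_rule_count(n_inputs, m_values):
    """A cascade of two-input fuzzy sub-systems combining the inputs
    pairwise: (n - 1) small systems of m**2 rules each."""
    return (n_inputs - 1) * m_values ** 2

# 10 inputs, 5 linguistic values each:
print(flat_rule_count(10, 5))  # 9765625 rules in one flat system
print(tree_rule_count(10, 5))  # 225 rules spread across nine sub-systems
```

This gap between exponential and linear growth is why a tree-structured system can train on a desktop PC while a flat equivalent would be intractable.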

Like a human pilot, the friendly version of ALPHA takes instructions from its commander and then decides how to carry them out. It will only ever fire when authorised.

“We created the ability to have human overrides at every single level in ALPHA’s logic, and it is perfectly loyal to commands,” says Ernest.

Life and death decisions

Perhaps the most important aspect of ALPHA is validation and verification. This process provides assurance that the software can be trusted to do the job it is supposed to – a vital factor when dealing with life and death decisions.

Working with Psibernetix, US air force members at Wright-Patterson Air Force Base in Ohio used an automated model checker to prove that the part of ALPHA’s code that determines evasion tactics would work as expected in all situations, and that it would not, for example, dodge into the path of one missile while avoiding another.

But Noel Sharkey, an emeritus professor of AI at the University of Sheffield, UK, is doubtful that ALPHA is as transparent as claimed.

“The authors claim that their learning device will be easier to validate and verify than neural network learning systems,” says Sharkey. “This is essential for compulsory weapons reviews, and yet it is notoriously difficult for even relatively simple programs.”

Ernest says that while the current version of ALPHA is geared towards a simulated environment, there is no technological obstacle to a later version piloting an uncrewed aircraft or co-piloting a crewed aircraft.

“Let us see proper scientific testing and evaluation of the idea first before we embark on such a dangerous idea,” says Sharkey.