Spanish company Acuilae plans to develop a tool that gives morals to AI machines.

A new project 'ETHYKA' aims to teach dilemma analysis and decision-making to AI machines to enable them to make fair and good choices.

The project developers hope to attract more investors so that ETHYKA can become an industry standard in the future.



Spanish company Acuilae specialises in data analysis, machine learning and virtual assistants — three of the main applications of artificial intelligence. So it's a no-brainer for a company like Acuilae to try to find a solution to the dilemma we now face: who can and should bestow ethics on artificial intelligence?

It's an important question, considering global finance, health systems, the justice system and much more will soon be managed by artificial intelligence — if we're going to leave such crucial decisions in the hands of machines, we have to ensure they make the right decisions and, if possible, even fair and good ones.

That's where ETHYKA comes into the picture.

"Born from the desire to research and learn," according to Acuilae's CEO, Cristina Sánchez, ETHYKA is a project that looks to give AI morals so that the actions machines perform are based on ethical data.

An expert in statistics, computer science and data science, thus far she's had a team of just four to help move the project forward and is now looking to attract investors through a funding round.

The applications are potentially limitless, especially seeing as no one has developed machine ethics so far. So, right now, ETHYKA's main goal is to become an industry standard. If successful, it will provide a stable framework for developers to implement ethics into their own software or hardware projects. And of course, there will be significant time and cost savings through machine moralisation.

"It can be sold as an SD Card to developers who want to implement it. We have also thought of offering cloud services for specific queries and, of course, modules for transport equipment such as autonomous cars, health automata, virtual assistants, etc", Sánchez told Business Insider.

According to Acuilae, it's already been proven that robots and virtual assistants that interact with humans can be easily corrupted, as was the case with the Microsoft chatbot that began making racist comments. Machines — like children — learn by observation and repetition, trial and error. If they hear vulgar expressions, they will eventually incorporate them into their database and, in future, may also imitate any manner of undesirable behaviour.

How to programme ethics into a machine

As with many other AI machines, the function and structure of Acuilae's machines try to emulate what goes on within the human brain. For this reason, ETHYKA is structured in three layers that correspond to the three stages of conventional knowledge theory: dilemma recognition (fed by data acquisition), dilemma analysis, and decision-making.

[Image: In its structure, ETHYKA mimics the decision-making process of the human brain. Credit: Acuilae]

Mimicking our own bodies, the machine will collect data from sensors, cameras, apps, frameworks, and more. What happens next, however, doesn't involve anything like the amygdala, the part of our brain that regulates emotions; ETHYKA will pull from a catalogue in the cloud — a library of terabytes of related expressions and actions — to be able to identify what it's facing.

"I always find this apocalyptic view of artificial intelligence ridiculous. I don't think it's going to be a risk for humanity as many people like to think, however it's certainly important these machines learn to recognise an ethical dilemma within the natural language that will be used to communicate with them," Sánchez told Business Insider.

They're currently compiling a library in the cloud that will act as a frontal lobe — the most evolved part of the brain, responsible for sifting through and weighing up our emotions to make decisions, based on the ethical and moral values of each individual. This library is where all of ETHYKA's data and filters will be stored.

In this way, ETHYKA will be able to incorporate different configurations, with probative, autonomous and heteronomous, theological, evolutionary, civic and, of course, professional ethics — which can also vary according to profession.

In its second phase, Acuilae's ethical module will be capable of discerning which principles to apply to decision-making — an area that causes confusion for humans — and in its third phase, it will use deep learning to generate various predicted outcomes for future choices, based on previous decisions.

The module will be able to operate according to three decision criteria, determined by the amount of information provided and the degree of control over the final situation. The question now is: what will it take for ETHYKA to become a standard in the computer industry?

"It needs to be done in Germany, China or the US rather than Spain," said Sánchez, who has immediately begun recruiting a larger team of "multidisciplinary experts" to bring the Spanish ethics module to the global artificial intelligence market.