Moralizing AI: Can We Make Machines That Reason Ethically?

Artificial Intelligence today is still far behind even the most primitive animals in complexity. What happens when we make increasingly more intricate and powerful minds?

Image source: Pixabay

A commonly cited doomsday scenario in discussions of runaway artificial intelligence is that it won’t know when to quit. Directed to perform some trivial task, a super-genius AI might go to extraordinary, inhuman lengths to fulfill its directive, such as shutting down the international power grid to lower energy consumption.

The Artificial Intelligence of today is not built to moralize. At the moment it serves merely as a tool. And that’s to be expected, because who is looking for moralistic machines right now?


But then again, Artificial Intelligence today is still far behind even the most primitive animals in complexity. What happens when we make increasingly more intricate and powerful minds?

One of the few teams that believes this is possible is Mind.AI. They’ve built an engine that reasons over natural language rather than relying on the reinforcement learning and pattern recognition common in AI today, and they hope to crowdsource the engine’s dataset through dApps.

The Mind.AI engine uses inductive, deductive, and abductive logic, which you might recognize as the reasoning constructs humans use every day. The aim is to close the gap between machine thinking and animal thinking, which some researchers argue is the only way to create truly intelligent computers.
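To make those three terms concrete, here is a toy sketch of the patterns themselves. This is purely illustrative: Mind.AI has not published its engine’s internals, so none of this code reflects their implementation.

```python
# Toy illustration of the three inference patterns named above.
# This is NOT Mind.AI's engine; its internals are not public.

rule = ("it rained", "the grass is wet")  # if it rained, then the grass is wet

def deduce(rule, fact):
    """Deduction: apply a known rule to a known fact to reach a certain conclusion."""
    cause, effect = rule
    return effect if fact == cause else None

def induce(observations):
    """Induction: generalize a rule from repeated co-occurrences."""
    if observations and all(
        "it rained" in obs and "the grass is wet" in obs for obs in observations
    ):
        return ("it rained", "the grass is wet")
    return None

def abduce(rule, observation):
    """Abduction: infer the most plausible explanation for an observation."""
    cause, effect = rule
    return cause if observation == effect else None

print(deduce(rule, "it rained"))                # -> 'the grass is wet' (certain)
print(induce(["it rained, the grass is wet",
              "it rained, the grass is wet"]))  # -> the generalized rule
print(abduce(rule, "the grass is wet"))         # -> 'it rained' (plausible, not certain)
```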

Nor does their engine run on vast datasets scraped and dressed for machine use, an approach the developers consider intrusive. Instead, widely available, gamified dApps will prompt users to feed the engine natural-language snippets called ‘ontologies,’ which the engine then builds into custom data structures called ‘canonicals.’
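The exact shape of an ontology or a canonical has not been published, so the sketch below simply imagines one plausible form: a raw snippet paired with its contributor, folded into a small structured record.

```python
from dataclasses import dataclass, field

# Hypothetical shapes only: Mind.AI has not published its ontology or
# canonical formats, so every field name here is an assumption.

@dataclass
class Ontology:
    text: str         # the raw natural-language snippet a user submits
    contributor: str  # who submitted it (relevant for rewards later)

@dataclass
class Canonical:
    concept: str                                   # the idea the snippet describes
    relations: dict = field(default_factory=dict)  # links to other concepts

def build_canonical(ontology: Ontology) -> Canonical:
    # A real engine would parse arbitrary language; this toy only handles
    # "X is Y" statements, splitting them into a concept and one relation.
    subject, _, rest = ontology.text.partition(" is ")
    return Canonical(concept=subject.strip(), relations={"is": rest.strip()})

c = build_canonical(Ontology("a trolley is a vehicle", contributor="alice"))
print(c)  # Canonical(concept='a trolley', relations={'is': 'a vehicle'})
```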

Image source: Pixabay

Crowdsourcing data

By building their platform on the Velas blockchain, which is operated with the help of AI and artificial intuition, they’re able to incentivize their dApps: users earn rewards for submitting ontologies. And by decentralizing the whole process, they hope to make the approach as transparent as possible.
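The article gives no contract interfaces, so here is a minimal sketch, in plain Python rather than on-chain code, of the bookkeeping such a dApp implies: accept a snippet, record it, and credit the contributor.

```python
# Minimal sketch of the incentive loop; the real system would record this
# on the Velas chain, whose interfaces the article does not describe.

REWARD_PER_ONTOLOGY = 1  # illustrative unit, not a real token amount

balances = {}     # contributor -> accumulated rewards
submissions = []  # public record, mimicking on-chain transparency

def submit_ontology(contributor: str, snippet: str) -> None:
    """Accept a natural-language snippet and credit its contributor."""
    submissions.append({"contributor": contributor, "text": snippet})
    balances[contributor] = balances.get(contributor, 0) + REWARD_PER_ONTOLOGY

submit_ontology("alice", "a trolley is a vehicle")
print(balances)  # {'alice': 1}
```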

Whenever the engine makes a moral decision, users will be able to peer into the thought process and see where it went right or wrong. Think of an automated car: investigators will be able to pinpoint exactly where the software made a moral decision, and users will be able to train the mind to make the right choice every time.
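What might such an inspectable thought process look like? The article doesn’t specify a trace format, so the record below is a hypothetical sketch: each moral decision logged with the options weighed and the canonicals that supported the choice.

```python
import json
from datetime import datetime, timezone

# Hypothetical audit record; no trace format has been published.
def log_decision(situation, options, chosen, reasons):
    """Record a moral decision together with the reasoning behind it,
    so investigators or users retraining the engine can inspect it."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "situation": situation,
        "options_considered": options,
        "option_chosen": chosen,
        "supporting_canonicals": reasons,  # the 'why' behind the choice
    }

record = log_decision(
    situation="pedestrian ahead, adjacent lane occupied",
    options=["brake hard", "swerve left"],
    chosen="brake hard",
    reasons=["braking endangers no third party", "swerving endangers several"],
)
print(json.dumps(record, indent=2))
```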

Of course, the question of what is ‘right’ must be answered. Mind.AI’s engine is clearly capable of making complex decisions, but who is to say whether they are ethical? Through crowdsourcing, users can look inside the machine’s ‘thoughts’ and judge whether they agree with Mind.AI’s choices, but just because a human has signed off on a decision does not make it right.

To illustrate just how fragile our moral compasses are, consider the famous Trolley Problem, in which somebody must decide whether to save the life of one individual or of many. This is a relevant question for automated vehicles, because there may one day be a scenario where an AI-powered car must decide whether to hit one pedestrian or to swerve out of the way and hit several. Most humans’ knee-jerk reaction is to save many lives over one, because we are naturally quantity-driven animals, but a common objection asks whether life can be quantified in such a way. The Trolley Problem is not asking whether many lives are worth more than one; it asks whether life can be valued using extrinsic measurements. Is life relevant only by a measure of how much of it there is, or does its relevance come from something intrinsic, something in and of itself?
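To pin down the extrinsic-versus-intrinsic distinction, here is a deliberately crude framing of my own (not a real driving policy): one function values life purely by head count, while the other treats lives as incommensurable and refuses the comparison altogether.

```python
# Toy framing of the dilemma, not a real or recommended driving policy.

def extrinsic_choice(lives_a: int, lives_b: int) -> str:
    """Extrinsic valuation: life is measured by quantity, so always
    spare the larger group."""
    return "A" if lives_a > lives_b else "B"

def intrinsic_choice(lives_a: int, lives_b: int):
    """Intrinsic valuation: each life is incommensurable, so head
    counts alone cannot settle the question."""
    return None  # no answer is derivable from the quantities

print(extrinsic_choice(5, 1))  # 'A': five lives outweigh one
print(intrinsic_choice(5, 1))  # None: the comparison itself is rejected
```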

Image source: Mind.ai

There is no conclusive answer. Moral dilemmas like this have plagued law, ethics, and politics for millennia, and it looks like they are set to plague the realm of Artificial Intelligence as well. If humans cannot decide what to do in situations such as the Trolley Problem, what hope does an AI have?

Well, it might actually have a lot of hope! Think about it like this: for the entirety of our existence, humans have been the only beings on this planet trying to answer such questions, so it stands to reason that we may simply be stuck on how to tackle them. If we can create an Artificial Intelligence that grasps ethical principles, we will have a companion who can offer its own take on such questions. No matter how hard we try, the mind of an artificially intelligent machine will not work exactly like ours (due to its lack of biology), but this might actually be a good thing. Under a moral relativist perspective, it does not matter whether somebody chooses to save one life or many, so long as they can explain why they made their decision.

And that is exactly what Mind.AI’s engine can do: through its canonicals, it can clearly explain how it draws its conclusions. So long as the engine works within a framework of moral principles (chosen by those who create the AI), it will be acting ethically.

No wonder VC firm REDDS Capital thought it had found something unique in Mind.AI. The company is shooting for that mythic pedestal no other AI developer has yet reached: a general AI capable of complex, humanlike reasoning.
