AI’s influence over our world will continue to grow, but it remains a technology in its infancy – and missteps along the way shouldn’t detract from the greater good it promises, says Lenore Kerrigan, country sales director for enterprise information management group OpenText.

At Davos in 2018, the great and powerful debated the ethics of Artificial Intelligence (AI), with UK Prime Minister Theresa May launching the UK’s Centre for Data Ethics and Innovation. The aim of this advisory body is to work closely with international partners to build a common understanding of how to ensure the safe, ethical and innovative deployment of AI.

The move echoes a question posed nearly 2,000 years ago by the Roman poet Juvenal: “Who guards the guardsmen?” The question probes the very heart of power and its abuse: if powerful people dictate how the world works, who keeps them in check?

It’s a familiar thought experiment that introduces questions about the state, class, wealth, the media and more. While the wealthy and powerful continue to hold significant influence in world affairs and the ‘common good’, as the Davos World Economic Forum gathering demonstrates, they now have a challenger in AI and machines.

As super-smart computers make complex decisions on your behalf, often without human oversight, the Juvenal question for the 21st century should be: “Who will keep AI in check?” Humans, or the machines themselves?

We are still some way from James Cameron’s Skynet in The Terminator. In truth, AI’s influence on the human condition isn’t yet much of a consideration; it hasn’t encroached significantly into our lives outside household helpers such as Alexa. That isn’t to underestimate its importance, however.

AI has proved exceptionally useful in laboratories, analysing very large data sets to find patterns in areas such as healthcare, making rapid trades for financial institutions, and handling media buys for brands. As the volume of data and information created continues to grow — often generated by machines themselves — we will become more reliant on AI to find meaning, the digital equivalent of finding a needle in a haystack.

Current trends show huge increases in video streaming thanks to services such as YouTube, Netflix and Amazon Prime, and a transition from 4G to 5G mobile broadband awaits us. But this largely human-generated content represents a relatively small step in the overall growth of data volumes.

The bigger challenge will be the introduction of millions, then billions and eventually trillions of connected ‘edge’ devices that will virtually map the physical world in every detail and in real time.

Some of the earliest Internet of Things (IoT) devices were developed decades ago by the oil and gas industry, giving engineers immediate feedback on pipeline issues or production bottlenecks. When pipelines are hundreds or thousands of miles long, or under the sea and impossible to reach, knowing precisely where to find problems saved considerable time and money.

Today, edge devices constantly produce data, which could be as simple as a position (a fixed GPS coordinate, for example) or as complicated as the weather (wind speed and direction, barometric pressure, humidity, precipitation and temperature). Multiplied across trillions of devices, this staggering escalation of data will eclipse anything produced today.

According to Statista, the number of IoT connected devices installed worldwide will rise from just over 20 billion today to more than 30 billion by 2020, and to 75.4 billion by 2025.

These edge devices, coupled with yottabytes of data, will enable technologies such as driverless cars — themselves producing gigabytes of data, per vehicle, per day — to become a reality. And AI will be key to managing what traditional computing can’t. AI systems will be the overlords of this new connected age.

They will decide what’s relevant, what’s not, what’s alarming, or what should be ignored or deleted.

Are AI decisions right?

Already, experimental AI and advanced robotics have entered the ‘uncanny valley’, where the line between humans and anthropomorphic robots becomes blurred. For example, Microsoft’s Tay AI chatbot was shut down after only 16 hours, having begun tweeting inflammatory and offensive posts.

And when a researcher at Boston Dynamics kicked a robotic ‘dog’, the Twittersphere lit up with complaints.

But as AI technology continues to advance, it will inevitably be applied to defense and armaments. This move into the defense industry will test nations’ data ethics units like never before. Can Asimov’s Three Laws of Robotics, from the 1942 short story Runaround, still apply nearly a century later?

Asimov believed robots would be integrated into society, constrained and controlled by his three laws, where robots are essentially benign, there to protect and serve humans. But Asimov also considered robots to be essentially hardware — physical things that move around like robotic butlers.

Impressive as today’s agile robots are, they remain clumsy and limited. Robots today are more likely to be software, relying on complex algorithms, machine learning and AI that Asimov simply couldn’t imagine.

We are racing towards a future where machines will become trusted and sage advisors. When AI really understands us, and the data and information that surrounds us, it can help to manage and improve our lifestyles, augment our poor decisions with better ones, and make us smarter, healthier and more productive — very much in line with Asimov’s laws.

But we need to guard against the alternative: a world where Asimov’s laws are trampled on, whether through malicious intent or simply unfettered ambition. How we keep AI in check is partly determined by how we choose to use it in the first place.

By Lenore Kerrigan, OpenText, Country Sales Director
