Right now, in China, Communist Party officials are working with big data analytics and artificial intelligence (AI) experts to launch a new, multi-billion-dollar tool that assigns a ‘Trust Rating’ to each of China’s citizens.

Designed to clamp down on corruption, the system will generate a social credit score for each citizen based on how trustworthy they are. Through AI technology, citizens with good scores will automatically benefit; citizens with bad scores could face punishment, blacklisting, and restrictions.

It’s just one example of how artificial intelligence is pervading our lives. Every day, companies like Facebook and Google collect data on our likes, dislikes, cognitive patterns, behavior, and even our socio-economic status. AI detects patterns in that data and makes recommendations and decisions based on it.

But who’s in charge of artificial intelligence? How is a ‘Trust Rating’ determined? What impact could AI have on the livelihoods of 1.3 billion Chinese?

Olaf Groth is a professor at Hult International Business School and co-author of the forthcoming book, Solomon’s Code: Power & Values in a World of AI. As part of the World Economic Forum’s global Expert Network, he advises senior business executives on disruptive global trends like artificial intelligence, Industry 4.0, and the Internet of Things (IoT).

In September, he led a TEDx talk—Human Values and Power in a World of Artificial Intelligence—as part of the TEDxHultAshridge series, which highlights the latest innovative research coming out of Hult.

There, he posed the question: “How will we shape and govern the power of AI so that it serves human values and human power?” For Olaf, the future of AI is about maintaining control, ensuring AI and human values align, and harnessing AI’s potential to drive positive societal change.

“Values are at the core of how we as human beings are empowered,” said Olaf, speaking at the TEDx talk at Hult’s executive education campus—Ashridge Executive Education—in the UK.

“What you communicate publicly on social media is only a partial picture. You have dormant values; emergent values; your values change as you stumble through life; you make trade-offs between values every day.

“Society also has its own hierarchy of values explained in political processes. How will that hierarchy conflict with your own hierarchy as you jointly govern AI? Who’s going to govern all of this transparently? It’s a big issue for us to solve.”

Perspectives on the future of AI differ. In Japan, AI solutions are being developed to care for a rapidly aging population. In Germany, AI robots in factories are boosting production nationwide.

The United Nations’ Intergovernmental Panel on Climate Change (IPCC) is using AI to identify which of its models will most effectively guide efforts to combat climate change.

But AI can throw up more complex, moral issues. Take predictive policing. In New York, the NYPD is working with big data firms to predict where in the city crime may occur and pre-emptively dispatch officers to those locations.

As Olaf notes, tourists may feel safer, but the communities in question feel stigmatized. How has the AI program reached its conclusions? It can’t say. Now, the US government is looking to develop Explainable AI—using one AI program to explain another.

According to Olaf, some experts, such as Elon Musk and Stephen Hawking, warn of a dystopian vision of the future of artificial intelligence in which robots could eradicate human beings. Others are proponents of a technological singularity, in which AI and human beings become one.

Olaf wants to create a digital Magna Carta for the age of artificial intelligence, and to put values-based human empowerment at its center. He thinks a new, inclusive, global, multi-stakeholder institution should be set up with a representative congress to govern AI.

In his book, Olaf and his co-author have dubbed this a ‘Cambrian Congress’, focusing on the vast, positive transformational potential of AI for humans, industries and societies.

“If we harness this potential responsibly, it can solve a lot of our problems and fuel significant growth,” he noted. “But that’s a big ‘if.’”

“AI is about power—the power of technology—and how it relates to and feeds into human power and the power of societal institutions. We need to get ahead and make sense of this before the machine that we created runs circles around us faster than we can count the laps.”

Throughout Hult’s MBA, Executive MBA, and master’s programs, Olaf’s teaching, mentoring, and research reinforce Hult’s focus on innovation and disruption, preparing graduates for the workplace of the future. Topics like big data analytics, innovation, and entrepreneurship are ingrained in the curriculum.

Across Hult International Business School programs, students can take three Nano Courses—self-led, online courses covering disruptive technologies like artificial intelligence, 3D printing, blockchain, virtual and augmented reality, and the Internet of Things (IoT)—in place of one regular elective.

Graduate students and executives alike can take Digital Futures, an immersive three-day program directed by Olaf and first delivered in partnership with Ashridge and Ferrari.