On New Year’s Eve 2019 at 10:00 AM EST, an artificial intelligence algorithm sent a warning out to medical professionals around the globe about a severe respiratory infection affecting Wuhan, China. In addition to identifying the source of the outbreak, it then used global airline ticketing data to correctly predict that the virus would likely spread to Seoul, Bangkok, Taipei, and Tokyo.

By scouring vast amounts of data including public bulletins, airline travel data, government documents, news reports, and other sources, the AI, developed by Canadian tech firm BlueDot, was able to identify the outbreak of the Wuhan coronavirus before any other organization and issue a warning to health agencies around the world.

The World Health Organization, by comparison, didn't issue its warning to the public until nine days later.

As of this writing, over 20,000 people have been infected with the virus, but governments around the world are working hard to prevent a pandemic, thanks in no small part to that early warning.

AI and BI: The End of Human Decision Making in Business?

It’s easy to see why there’s so much excitement around AI and related fields like machine learning and automation. The early identification and warning of the Wuhan coronavirus is a remarkable example of the potential AI has to make better decisions than any human counterpart, and act on those decisions faster as well.

In the business intelligence world, AI is often heralded as the future, even a BI killer. After all, why bring humans into the process at all if AI algorithms can make better decisions?

Augmented analytics, which uses AI and ML to help surface relevant information, is right around the corner, and some even claim human-out-of-the-loop AI systems are just a few years away.

But is AI going to kill the need for BI? Will we ever entirely remove humans from business decisions, and more importantly, should we? Can AI make better decisions for people than people can?

The AI Black Box

AI, like many technical fields, is advancing faster than humans can keep pace with. The technology behind modern AI, such as neural networks, has grown so complex that even AI experts do not fully understand it.

Elon Musk once described the development of artificial intelligence as “summoning a demon” — interesting words from the man who wanted to build a completely automated “alien dreadnought factory.”

“I am really quite close, I am very close, to the cutting edge in AI and it scares the hell out of me. It’s capable of vastly more than almost anyone knows, and the rate of improvement is exponential.” – Elon Musk, CEO, Tesla

Even when humans seem to be in full command, things don’t always go as expected.

Recently, Facebook developed AI-powered chatbots to negotiate with each other. After some simple initial conversations, a funny thing happened: the bots began to develop their own language that humans couldn't understand.

When we don't know what AI is doing or how it makes its decisions, we don't know who to blame when things go wrong. A billionaire investor from Hong Kong is suing a salesperson for persuading him to trust another company's AI, which lost him $20 million in bad trades. But does the designer of the AI hold some responsibility here too? Is the salesperson the only one in the wrong, or is the company that built the AI also to blame? Investing is inherently risky, so is anyone really at fault?

Black-boxed AI systems not only make it difficult to draw lines of accountability, they’re nearly impossible to troubleshoot when things go wrong. If an AI system makes or influences a decision that costs a company millions of dollars, future failure can’t be prevented when the steps that led up to that decision are unknown. This is already proving to be a huge liability for companies employing AI. Legislation is being put forth in the UK to penalize companies with multi-million pound fines if they can’t adequately explain how their AI systems make decisions.

Bias In, Bias Out

There are a few pervasive myths around artificial intelligence: that decisions made by computers are entirely fair, that data is always accurate, and that AI is immune to the biases and intellectual fallibility so intrinsic to humans.

But the truth is that AI algorithms are as biased as the humans who develop them. AI is a multiplier and can often magnify biases that already exist in society to extreme degrees:

Recently, AI technology at Google was shown to reinforce racial prejudice. Searching for "hands" yielded images of primarily Caucasian hands, while searching for "black hands" returned predominantly derogatory depictions.

Last year, Twitter posts went viral accusing Apple's new credit card approval process of discriminating against women. The card's approval process is now under investigation for gender discrimination.

Amazon stopped using a hiring algorithm after finding it favored applicants who used language like "executed" or "captured," words more commonly found on men's resumes.

A criminal justice algorithm used in Florida mislabeled African-American defendants as “high risk” at nearly twice the rate it mislabeled white defendants.

What makes bias worse in AI systems than in humans is scale. When these systems discriminate, the effect isn't confined to one isolated location. Things can get out of hand very quickly, as Microsoft found out a few years back.

In 2016, the company released Tay ("Thinking About You"), an AI-powered chatbot that was supposed to hold real conversations with people on Twitter. It sounds like a fun idea, right? Just 16 hours after release, Tay started spewing racial epithets and profanity and was taken offline.

In this case, it was a group of pranksters and a few nefarious characters who exploited the bot and deliberately drove it off the rails. But AI used for critical business decisions could just as easily fall victim to hackers, competitors engaging in corporate sabotage, or people simply bent on causing mayhem.

A Double-edged Sword

When companies deploy AI systems at scale, they become targets with multiple attack vectors. Perhaps most worrying, a new generation of hackers is going beyond merely breaking into databases and learning to manipulate the AI algorithms themselves. Some are even developing their own AI to assist in their attacks and outsmart security measures.

Even the most sophisticated AI can be fooled in ways humans can't. Manipulating a few pixels can cause an image classifier to see a baseball where one doesn't exist, a mistake even a toddler wouldn't make. An artist recently created virtual 'traffic jams' in Berlin by pulling a wagon full of second-hand smartphones down empty streets, fooling Google Maps into reporting heavy congestion.
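To make the pixel-manipulation idea concrete, here is a minimal, hypothetical sketch: a toy linear classifier (nothing like a real vision model, and every number is invented) is flipped from one label to the other by nudging each "pixel" a small step in the direction of the model's own weights, the same intuition behind fast-gradient-style attacks.

```python
# Toy adversarial-perturbation sketch (hypothetical example).
# The "classifier" scores a flat list of pixel values with a dot
# product against a weight vector; score > 0 means "baseball".

def score(weights, pixels):
    return sum(w * p for w, p in zip(weights, pixels))

weights = [0.9, -0.4, 0.7, -0.8, 0.5, 0.6]   # made-up learned weights
image   = [0.1,  0.9, 0.2,  0.8, 0.1, 0.1]   # clearly not a baseball

# Fast-gradient-style attack: move each pixel a small step (epsilon)
# in the direction of the sign of its weight. For a linear model, the
# gradient of the score with respect to each pixel is just the weight.
eps = 0.25
adversarial = [p + eps * (1 if w > 0 else -1)
               for w, p in zip(weights, image)]

print(score(weights, image))        # negative: classified "not baseball"
print(score(weights, adversarial))  # positive: now classified "baseball"
```

Linear models make the attack trivial to see, but the same gradient-following trick is what flips labels in deep networks while changing the input imperceptibly.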


Those who are familiar with AI systems could even engage in data poisoning: manipulating the training datasets to trick systems into unnecessary or unwarranted responses. The threat is dangerous enough that the Pentagon is trying to develop countermeasures that mimic animal immune systems to combat it.
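As an illustrative sketch of how poisoning works (hypothetical data and scenario, not any real system), an attacker who can slip mislabeled records into a training set can drag a simple nearest-centroid classifier's decision boundary until risky cases slip through:

```python
# Label-flipping data poisoning against a toy nearest-centroid
# classifier (illustrative only; all data is made up).

def centroid(points):
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def classify(x, model):
    # Assign x the label whose centroid is closest (squared distance).
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda label: dist2(x, model[label]))

# Clean training data: "approve" applicants cluster near (1, 1),
# "deny" applicants cluster near (9, 9).
clean = {
    "approve": [[1, 1], [1, 2], [2, 1]],
    "deny":    [[9, 9], [8, 9], [9, 8]],
}
model = {label: centroid(pts) for label, pts in clean.items()}
print(classify([7, 7], model))   # "deny": a risky applicant is rejected

# Poisoning: the attacker injects risky records mislabeled "approve",
# dragging the "approve" centroid toward the risky cluster.
poisoned = {
    "approve": clean["approve"] + [[9, 9]] * 6,   # flipped labels
    "deny":    clean["deny"],
}
model = {label: centroid(pts) for label, pts in poisoned.items()}
print(classify([7, 7], model))   # "approve": the same applicant slips through
```

Real poisoning attacks target far more complex models, but the mechanism is the same: corrupt the data the system learns from, and every decision downstream inherits the corruption.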

World-renowned physicist Stephen Hawking claimed AI could be our worst mistake. “Success in creating effective AI could be the biggest event in the history of our civilization. Or the worst. So we cannot know if we will be infinitely helped by AI or ignored by it and sidelined, or conceivably destroyed by it.”

Weaponized data or a compromised AI system, even if just used in the context of BI, could deal a potentially devastating blow to an organization.

Those Pesky Black Swans

AI, at least for the conceivable future, is based on human knowledge. Its repertoire of data to draw from, including parameters and conditions, is based on recorded and archived information. The problem is that the future is full of unknowns.

Author Nassim Nicholas Taleb put forth the theory of Black Swans: rare, unexpected events that have the power to change society for better or worse. Black Swan events are teachers that directly affect and evolve human knowledge. They occur regularly, yet are unpredictable, recalibrating our frames of reference for the world around us.

The Fukushima disaster is a prime example. The Japanese nuclear facility was built to withstand earthquakes no greater than 7.9 on the Richter scale because larger quakes were not anticipated and hadn't previously occurred in that part of Japan. But Black Swans are inevitable. In March 2011, the largest earthquake in Japanese history, with a magnitude of 9.0, along with an equally devastating tsunami, struck the plant and unleashed the most significant nuclear disaster since Chernobyl.

By definition, it is impossible to anticipate and difficult to cope with the unexpected. Humans, however, have demonstrated themselves to be the most adaptable species in history. Everything on the planet, even the weather, seems bent on killing us, yet we continue to thrive. Is it even possible to instill that adaptability into our AI creations? Probably not.

Black Swans present a unique problem for artificial intelligence. AI systems are notoriously bad at dealing with abrupt changes in an environment or unexpected variables. The AI reaction to unforeseen events could be totally wrong—and even dangerous.

We saw an example of this with the Viking Sky cruise ship emergency last year. Rough seas caused oil to slosh around in the tanks, which caused onboard sensors to incorrectly signal that the engines had no oil. The automated system then triggered a complete shutdown of the engines in perilous waters. A dangerous air rescue effort had to be launched to bring all 479 passengers and crew members to safety.

Positive Black Swans, like new trends or discoveries, present an even more puzzling scenario for AI. It's unlikely that AI would be able to capitalize on these opportunities as well as humans can.

Could an AI system have developed a character as wildly popular as Baby Yoda?

Would AI employed by Puma have realized paying the world’s greatest soccer player to tie his shoes during the middle of the World Cup was a great way to gain massive publicity?

Would it have made sense to any AI system to bring Steve Jobs, fired years earlier, back to take the helm of Apple?

It turns out that humans are quite good at dealing with the unexpected and at making brilliant decisions that don't always align with logic, evidence, or even common sense. The unique insights that only humans can derive through business intelligence hold real, tangible value for companies, value that can't be overstated.

The Power and Brilliance of Human Curiosity

Every company has access to one of the earth's most powerful resources, yet very few extract its full value: human capital, the skills, knowledge, and experience possessed by a company's employees.

Cultivating creativity and curiosity in their workforces results in quantifiable gains for companies. Organizations scoring highest in McKinsey’s Creativity Score performed better than peers in nearly every category:

67% had above-average organic revenue growth

70% had above-average total return to shareholders (TRS)

74% had above-average net enterprise value or NEV/forward EBITDA

The data is clear: Creativity drives innovation and revenue growth. And research proves that this isn’t limited to a few select individuals. In fact, adding more humans into the mix and increasing diversity improves outcomes even further.

Diverse, inclusive teams make better business decisions 87% of the time. They are more innovative, examine facts more closely, and are much more creative.

Creativity is something AI has yet to master. It can mimic human creativity fairly well, but it can't think metaphorically or incorporate outside context. We've trained AI to create paintings that look like Picassos and write music that sounds like Schoenberg's, but the results are novel, not creative. Creative work differs from novel work in one critical way: meaning. AI is good at analyzing what exists, finding patterns, and replicating those patterns, but it still can't create anything truly new and meaningful.

Steve Jobs defined creativity as “… just connecting things. When you ask creative people how they did something, they feel a little guilty because they didn’t really do it, they just saw something. It seemed obvious to them after a while.”

Our past experiences, even (and maybe especially) those outside our particular roles or industries, give us more creative strands we can try tying together. And bringing more humans into the conversation, each with their own wide array of experiences, increases an organization's innovative power exponentially.

Many technological advancements in BI limit or abstract humans from the decision-making process. Decision-making skills are like muscles: if we don't exercise them, they atrophy. This is already affecting some professions. Overreliance on automated systems has produced a generation of pilots who can't fly very well. Replacing BI decision making with AI algorithms will have a similar effect.

As a species, we've had eons to practice and refine our judgment. AI and ML systems trained for weeks or months just don't have the vast context and experience humans do. And humans bring things to the decision-making table that AI never will: compassion, morality, risk-taking, altruism, and ethics.

It’s true that in recent years, there have been several attempts to imbue AI systems with codes of ethics. But whose ethics? Ethics and values differ significantly from culture to culture. Ethics can also change over time. Who is responsible for updating those AI systems to change along with them?

Google defines 'intelligence' as the ability to acquire and apply knowledge and skills. Knowledge, however, comes in many forms. Explicit knowledge, like facts and data, is easily transferable, but our most impactful insights often come from accumulated tacit knowledge: things we may not even realize we know, which are much harder to codify and articulate, let alone convert into an algorithm.

Harder still is the quality of wisdom. It’s a fuzzy word with all kinds of esoteric connotations. While there exist many different opinions on the exact definition of the term, nearly everyone agrees it’s a quality gained through life experience (i.e., living). AI systems, no matter how advanced, are still just working with data. Their ‘experience’ could hardly be defined as ‘living.’ Whether or not an AI system could truly be wise then is up for debate.

Lastly, one of the great benefits of AI decision making, its lack of emotion, is also its Achilles heel. The best decisions are not always rational ones. People who have sustained damage to the parts of the brain responsible for emotion make poorer decisions across multiple aspects of their lives. Science is only beginning to understand the role emotions play in decision making, even business decisions.

Bring More People Into the Data Conversation

AI will have a place in business intelligence and in our lives in general. Advancements like augmented analytics will continue to get better at surfacing relevant insights to people so they can make the most informed decisions. AI will allow us to focus on higher-value work and force us to redefine our roles as it handles more of the tedious or mechanical parts of our jobs. These are exciting advancements, and we're eager to see them develop further. But there is a huge opportunity right now for businesses to get the full value out of their data and benefit from the imperfect, irrational, and beautiful power of human curiosity. The key is lowering the technical barrier to entry and allowing domain experts to participate in the data conversation directly.

Only by sharing a common language can companies extract the most impactful data insights. This approach is exactly the mission Sigma has chosen to undertake.

Would you like to learn how you can empower all of your people to harness the power of data for your business?

(This post originally appeared on sigmacomputing.com)

via Technology & Innovation Articles on Business 2 Community http://bit.ly/2vpcudN