A Q&A with HPE's Dr. Eng Lim Goh on AI, ethics, and the future

Dr. Eng Lim Goh, vice president and chief technology officer for high-performance computing and artificial intelligence at Hewlett Packard Enterprise, has spent his career considering what machines can do, what they might do, and what they shouldn’t do. As AI has become more prominent, he has been asked to play the role of futurist by the customers and partners he deals with daily.

Goh, like most scientists, is unwilling to roll out any sort of crystal ball. But given his long familiarity with computer graphics, machine learning, analytics, and data, he is in a good position to talk about the different viewpoints on the subject. In this Q&A, he outlines the promises and concerns introduced by the ongoing uptick in AI adoption.

What do you think about the concern that AI will cost people jobs?

In the near term, the effect will be mostly machines augmenting and enhancing humans rather than replacing them. However, humans, economies, and schools need to adapt quickly.

Yes, there will be job losses, but there will likely also be new types of jobs created. To fill them, we need to upskill and become more specialized.

I saw this recently when I did a co-design session with a customer, one of the world’s biggest manufacturers, which has tens of thousands of industrial robots. Today, the company’s robots operate safely away from humans and mostly by following predefined rules. With the advent of machine learning from examples instead, I asked the customer whether these potentially more capable robots would replace more humans. The answer was, in fact, the opposite. They plan to have a number of such future robots working alongside, and assigned to, every human expert craftsman, learning from and then mimicking her or him. The goal is to change the market by offering the more customized products their customers want at affordable prices, instead of only today's mass-produced ones.

The much-talked-about autonomous vehicles are another example. The Society of Automotive Engineers (SAE) defines six levels of driving automation, from Level 0 (no automation) to Level 5 (full automation). Today, the most advanced production cars are at around Level 3, where constant human participation is still required. Some say we are up to Level 4, but even Level 4 requires a human driver under some road conditions. Only Level 5 is fully driverless, and we’re not there yet.

So, when it comes to autonomous vehicles, humans are still needed, albeit increasingly augmented. Level 5 automation will come, but it is quite possibly as much as a decade away, according to most people in the industry.


Not a perfect parallel, but perhaps still useful, is AI’s impact on commercial aircraft. Autopilot was invented 100 years ago and has been in commercial aircraft for 80 of those years, increasing overall safety tremendously. Yet, for various reasons, autopilot today still does not mean pilotless. Currently, autopilot is used only during cruising and landing, not during taxi and take-off.

Even then, the job market may still need a cushion in the form of a further reduction in working hours. But we have seen this before: since 1900, automation has helped reduce annual working hours from about 3,000 to about 2,000. Yet economies and workers have adapted. I have seen creative and productive economies already moving to an effective 4.5-day week, on top of already enjoying more vacation days than average.

A common public view of the danger of AI is that it may soon take over. What is your opinion of the possibilities and dangers of so-called superintelligence?

There are a few ways to look at this: in the near term, medium term, and long term.

Near term

Today, we have AI systems capable of doing a single task well: artificial specific intelligence. Let’s move from the commonly discussed IQ to EQ. To get to superintelligence, machines may first have to advance to artificial general intelligence, then to sentience, consciousness, and eventually to self-awareness. That last advancement is a high bar. Even we humans only become self-aware at about 18 months of age, which is when we start recognizing ourselves in a mirror and touch our own faces rather than reaching for the reflection.

In the near term, again, I see all this tech augmenting human efforts, not replacing them.

And here’s the thing: The future, as they say, remains unwritten. What we do now determines what the future is.

Medium term

To be effective, we must work to understand the differences between our brains and artificial intelligence. For the latter, let’s use the machine learning method modeled after the brain, the artificial neural network.

In terms of structure, the brain is connected hierarchically, while typical artificial neural networks are monolithic. Perhaps this is why, when we make decisions, we naturally also apply judgment drawn from other branches of the hierarchy in our brains.

The brain is still about a million times more complex than the biggest artificial neural networks today. Our brains have lots more connections. A connection in an artificial neural network is currently represented by only one number. However, researchers at the Blue Brain Project, our customer, have shown that each brain connection needs as many as 20 separate differential equations to model.
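One way to arrive at a figure of roughly a million is to compare connection counts weighted by how much modeling each connection needs. The synapse count and network size below are my own illustrative assumptions, not numbers from the interview; only the 20-equations-per-connection figure comes from the text.

```python
# Back-of-envelope sketch of the "about a million times" complexity gap.
# BRAIN_SYNAPSES and ANN_PARAMETERS are illustrative order-of-magnitude
# assumptions; EQUATIONS_PER_SYNAPSE is the Blue Brain figure cited above.

BRAIN_SYNAPSES = 1e14          # ~100 trillion synaptic connections (rough estimate)
EQUATIONS_PER_SYNAPSE = 20     # differential equations needed to model one synapse
ANN_PARAMETERS = 1e9           # a large artificial network: one number per connection

brain_complexity = BRAIN_SYNAPSES * EQUATIONS_PER_SYNAPSE
ratio = brain_complexity / ANN_PARAMETERS
print(f"complexity ratio: {ratio:.0e}")  # on the order of a million
```

Under these assumptions the ratio comes out at a few million, consistent with the "about a million times" estimate in the text.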

This is, perhaps, why we humans currently need far fewer examples to learn something new. Our brains are also much more energy efficient: I jest with my engineers, when I buy them pizza, that a slice should power the brain for three hours.

In terms of operations, our brains work in a subtractive way, while artificial neural networks work in the opposite, additive way. When we were young, each of us had many more brain connections than we do now; unused neural connections are pruned. Also, two people in a noisy crowd can still carry on a conversation because our brains constantly filter in order to cope. (As a side note, having to filter noise seems to incur a counterproductive mental load on us. In one profound study, when an airport was moved to another city, the grades in the school in the former city improved while those in the new city dropped.)

Long term

But let’s say that, in the long run, machines can achieve superintelligence. When can we expect to see superintelligent machines?

Given that artificial intelligence is currently so much less complex than the human brain, says Turing Award winner Geoffrey Hinton, the soonest we could see anything resembling superintelligent machines would be 30 years from now. In fact, going by my earlier estimate of a million-times complexity difference and taking into account Moore’s Law, it also works out to about 30 years.
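The arithmetic behind that estimate can be sketched in a few lines. Assuming a Moore's-Law-style doubling of capability every 18 months (my assumption; the interview does not state the doubling period), closing a million-fold gap takes about 20 doublings:

```python
import math

# How long would Moore's-Law-style doubling take to close a 1,000,000x gap?
# The 18-month doubling period is an illustrative assumption.

gap = 1_000_000
doublings = math.log2(gap)   # ~19.9 doublings needed, since 2**20 ~ 1e6
years = doublings * 1.5      # 18 months per doubling
print(f"{doublings:.1f} doublings, about {years:.0f} years")
```

With a 2-year doubling period instead, the same arithmetic gives roughly 40 years, so the estimate is sensitive to the assumed cadence but stays in the multi-decade range.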

The next question is: Will complexity alone give rise to self-awareness? When I gave a talk at Princeton University, our customer, I learnt of Nobel Laureate Philip Anderson’s paper on emergent properties, "More Is Different," which describes novel properties that arise from, but are not fully explainable by, their foundational structures. We see that each jump from one level of complexity to another can produce new properties.

Science, with subjects at different levels of complexity, is a good example. You cannot fully explain physiology by biology alone. In turn, you cannot fully explain biology by chemistry alone or chemistry by physics alone. So, as complexity increases, superintelligence could emerge as a yet unexplainable function of the underlying technology. Thanks to emergent properties, I think there is a chance that machines can become self-aware.

So, let’s say a machine acquires superintelligence. Should we be concerned? Will it turn antagonistic toward humans? This is indeed a possibility, and one that renowned thinkers have raised.

There’s also another possibility. I call it the combined immortal alien effect. If you are a self-aware computer, you are alien in the sense that you did not learn the way humans learn; you did not gain your self-awareness as a function of human biology but rather increasingly through self-reinforcement learning. You’re also functionally immortal—just back yourself up.

So, optimistically, there’s a possibility that such an immortal alien, being not as concerned about self-preservation, may be benevolent rather than antagonistic.

Do you have any concerns about bias in AI?

An AI model is only as good as the examples used to train it—its training set. If that data contains bias, so does the model, and so do its predictions and decisions. But the positive side is that we’ve already begun working to counter that concern.

Bias mitigation has become a focus throughout AI. Companies like Microsoft and Google have begun working on bias-checking software that can warn us when a model is on the brink of making a wrong decision. This is like using rules again but, this time, to check the predictions and decisions of the machine that learnt from examples.
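A minimal sketch of this kind of check is to compare a model's positive-decision rate across groups and warn when the gap exceeds a threshold (a demographic-parity check). The function, example data, and 10% threshold below are all illustrative assumptions, not any vendor's actual software:

```python
# Sketch of a rule that checks a learnt model's decisions for group bias.
# Data, group labels, and the 10% threshold are illustrative assumptions.

def demographic_parity_gap(decisions, groups):
    """Largest difference in positive-decision rate between any two groups."""
    counts = {}
    for decision, group in zip(decisions, groups):
        total, positive = counts.get(group, (0, 0))
        counts[group] = (total + 1, positive + (1 if decision else 0))
    rates = {g: positive / total for g, (total, positive) in counts.items()}
    return max(rates.values()) - min(rates.values())

# Example: approval decisions (1 = approve) for applicants from two groups.
decisions = [1, 1, 0, 1, 0, 0, 0, 1]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(decisions, groups)
if gap > 0.10:  # illustrative fairness threshold
    print(f"warning: decision-rate gap of {gap:.0%} between groups")
```

Here group A is approved 75% of the time and group B only 25%, so the check fires; in practice such rules sit downstream of the model, which matches the "rules checking the learner" framing above.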

But automation of bias detection, just like other types of automation, cannot thrive without our regular involvement. For instance, there is ongoing work toward a certification regime for AI. We’ve already adopted certification in many aspects of technological development, and we certainly also need it to counter bias in AI. Certification would check whether your training data is structured fairly and reflects the people you serve.

In line with my point before, regarding first understanding the differences between natural and artificial intelligence, I have also been studying the potential differences in bias between the two. Although AI picks up our biases present in the data we feed it, we humans also have a different class of biases that are transient; that is, we may have this bias now but not, say, after lunch.

Two such examples are the contrast and recency effects. A physical demonstration of the contrast effect: if we put our hands in almost-hot water and then in warm water, the warm water feels deceptively cold. With the recency effect, it has been shown that when humans are presented with a list of items, on average we remember the last items better than the first, with the middle items leaving the least impression. Having complementarily different biases can also mean that, working together, we will be not just faster and more productive but also fairer.

In summary, while machines are increasingly relied on to make correct decisions, we need to be there to make sure that it is also the right decision. What’s correct may not always be right. We are the last judge.
