Your Digital Self

Opinion: Will artificial intelligence deliver an android that works as your personal assistant?

The synthetic android in “Meet Walter” offers a vision of the future

I’m a huge fan of the original “Alien” franchise, largely because of Sigourney Weaver’s performance and the crucial part technology plays in its lore.

Not only does this tech allow humans to traverse immense distances to reach alien-infested worlds, but it provides them with androids: perfect robotic assistants so advanced that it’s extremely difficult to distinguish them from human crew. In fact, they’re superior to their human companions in many aspects, from their superhuman strength to their refined motor skills.

As part of promoting the upcoming sci-fi horror film “Alien: Covenant” (in theaters May 19), 21st Century Fox unit Twentieth Century Fox released its branded short film “Meet Walter,” starring Michael Fassbender. It introduces Walter, the latest synthetic android, with intelligence powered by AMD’s Ryzen and Radeon processors and manufactured by the film’s fictional corporation, Weyland-Yutani.

This got me thinking. What really IS the future of AI? Where is AI now, and where is it heading? How close are we to having Walter-like androids help us with our daily chores? I spoke with Mark Papermaster, AMD’s chief technology officer and senior vice president of technology and engineering, about these questions. This interview has been edited for length.

Q: What do you think about the modern AIs and their applications?

A: The overall field of machine learning, including AI, is taking a fascinating, but maybe not unexpected, direction: solving the world’s “big problems.” How do we get more people where they want to go safely with autonomous driving? How do we increase the throughput and reliability of our food supply chain with autonomous shipping? How can we make people healthier by analyzing medical problem sets so large that no human can reliably contemplate them? How do we better understand and improve our climate with planet-scale data analysis? AI may not be able to address every problem, but there are definitely immediate areas where we can put it to use.

There is so much data out there today, generated by the plethora of sensors and Internet-of-Things apps that pervade our work and homes. Over the next few years we’ll see machine learning help us better understand all of this data, make it useful and then ultimately act on it in new and exciting ways.

Q: What are the biggest challenges AI faces?

A: Classification is where AI began. How do humans know that a rose is a plant, and a tree is a plant, but a tree is not a rose? We make these sorts of casual categorizations all the time, but teaching a computer program to do the same thing quickly and automatically proved challenging, though not impossible.

Then we taught computers to infer based on prior learnings, reach a conclusion and then act on it — and then continue to repeat that cycle to achieve more intelligence.
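That classify-infer-act loop can be illustrated with a toy nearest-neighbor classifier. This is a minimal Python sketch; the features and example data below are invented purely for illustration and bear no relation to any production AI system:

```python
# Toy nearest-neighbor classifier: "learn" the rose/tree distinction from
# labeled examples, then infer the label of a plant we haven't seen before.

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Hypothetical features: (height in meters, petal count)
examples = [
    ((0.5, 30), "rose"),
    ((0.3, 25), "rose"),
    ((12.0, 0), "tree"),
    ((8.0, 0), "tree"),
]

def classify(features):
    # Inference: take the label of the closest known example.
    _, label = min(examples, key=lambda ex: distance(ex[0], features))
    return label

print(classify((0.4, 28)))   # a short, many-petaled plant -> "rose"
print(classify((10.0, 0)))   # a tall plant with no petals -> "tree"
```

Real systems replace the hand-picked features and stored examples with learned representations, but the shape of the task is the same: categorize new inputs based on prior learnings.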


Now the real challenge is generating enough compute horsepower to do all of the calculations, training and inference so that a car can drive itself without human assistance, or so that we can even think about creating an entity as capable as Walter in the film. To reach this level, we need about 100 million times the compute acceleration we have today.

If we are going to reach this goal, we must also begin to create AI systems that achieve reliable and useful results with the same kind of efficiency as the brain. When we look at a tree, we instinctively know that it is a tree without going through the approximately 100 billion calculations that a typical AI system performs today to reach the same conclusion. When humans “learn” a new concept, they do so with increased efficiency of neural activity. Otherwise we would be constantly overwhelmed with “data and computation clutter.”

The last big challenge is how to achieve AI expertise. When humans learn to drive, expertise is improved with practice and exposure to a wide variety of scenarios that sharpen the skill level. In the same manner, we want AI systems to improve over time and experience.

Q: What issues — with both software and hardware — need to be resolved for AI to become closer to reaching the level of a perfect digital assistant, and then maybe a synthetic companion?

A: The Holy Grail of AI is, perhaps, a digital mind that functions like an organic one. In the near term, AI is focused on making constrained tasks much more productive, where there’s a known set of inputs and a desired outcome.

Autonomous driving is a perfect example. There are a lot of variables to consider: how fast the car is going, how much distance there is between this car and the next, what’s happening in the periphery, and so on. In this instance, we are not replacing the human; we are assisting humans to give them a safer driving experience. To achieve a companion like Walter, we need massive amounts of compute power that don’t exist today.

On the software side, developers will need to adapt and evolve software to take advantage of the compute power, architectures and features that will be developed. We are in the midst of a rapid evolution of the algorithms driving machine learning. New software frameworks are being developed to more easily utilize these algorithms. In tandem, the capabilities of CPUs, GPUs and specialized compute chips are advancing enormously to meet the appetite of these algorithms to train more quickly or infer results on the fly.
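The training/inference split described above can be sketched in a few lines of pure Python: a toy model “trains” by gradient descent over sample data, then “infers” with a single cheap multiply. This is an illustrative sketch only, not any real framework:

```python
# Minimal train-then-infer loop: fit y = w*x by gradient descent on
# squared error (training), then reuse the learned weight for fast
# predictions (inference). Data is a made-up sample of y = 2x.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.0     # model parameter, learned during training
lr = 0.05   # learning rate

for _ in range(200):                  # training: repeated passes over data
    for x, y in data:
        grad = 2 * (w * x - y) * x    # d/dw of (w*x - y)^2
        w -= lr * grad

def infer(x):
    return w * x                      # inference: one multiply per query

print(round(w, 3))        # learned weight, converges to ~2.0
print(infer(5.0))         # prediction for an unseen input
```

Training dominates the compute cost (many passes over large data sets), which is why it is the phase the interview’s GPU discussion centers on; inference is comparatively cheap but must often happen on the fly.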


Q: How invested in AI development is AMD? How are your processors specifically optimized for developing AI systems?

A: AMD has been focused on the compute-engine aspects of machine learning. We are developing high-performance compute engines and enabling CPU and GPU processors to support current and evolving AI algorithm models. To make application development efficient and more affordable, we are making the software enablement open source, helping the community at large speed application development.

We are inspired by machine learning and see an infinite need for advancement. High-performance GPUs and CPUs have to evolve in sync with the rapid advance in machine-learning technology. It is critical that these platforms provide both the performance and the efficiency for a wide range of applications.

To begin to address early machine-learning projects, we rolled out our Radeon Instinct product line at the end of 2016. With machine learning, the system is trained on large amounts of data using computationally intensive algorithms. The high computational capacity of AMD GPUs makes them a great match for machine learning when processing large amounts of data to train neural networks. The AMD Radeon MI25 accelerator will be based on our latest graphics architecture, “Vega,” expected to come to market later this year.

We are targeting high memory bandwidth and large addressable memory capacity, as well as high-throughput core performance, with our upcoming “Naples” CPUs. That makes the new products well suited for deploying machine learning, and they can be easily configured with Radeon Instinct graphics compute or FPGA programmable devices.

Software is the other part of this equation, and for it to advance as quickly as the hardware, you need an open-source, industry-standards-based development environment. We’ve given developers more access to our GPU hardware than ever before with our GPUOpen initiative, and we have the Radeon Open Compute software platform to accelerate machine-learning and deep-learning frameworks and applications.

Q: What potential applications of AI systems does AMD envision, and do they play a role in the company’s business strategies?

A: AI is now in the process of “mainstreaming,” which means it is becoming easier to leverage AI in more and more applications. Anywhere a business makes decisions through extensive analysis of data, with a known set of desired outcomes or optimizations, those decisions can now be accelerated by AI algorithms. Like other companies, we will explore areas where we can use AI applications to benefit our business operations and pursue them if they make sense.

In addition, there are many applications in which the promise of AI value is still emerging but not validated yet. We will work with customers and researchers to bring useful solutions to these emerging application areas.


Q: You mentioned machine learning. Will androids think like humans do?

A: “Thinking” is the fountain from which all personality springs! Humans are guided by conscious thoughts, unconscious thoughts, learned behaviors, instinct, memories and more — but it’s all some form of thought. One imagines that “thinking” is simple, yet it is quite extraordinary.

CPUs won’t replace that human element, but machine learning can be incredibly effective in handling constrained situations and learned tasks, tapping into massive stores of data and information to optimize specific decisions.

Q: Walter is a complex robotic unit, paired with equally complex AI, built to serve and function as a perfect companion. If the technology used to build his body was available today, and if AI was developed enough right now, would the current generation of AMD’s processing units be able to provide enough processing power to make Walter as functional as advertised on the meetwalter.com website? If not, what would it take to get there?

A: I’m reminded of an experiment conducted on the Fujitsu K Computer in 2013. That computer simulated 1.73 billion virtual nerve cells connected by 10.4 trillion simulated synapses. It took 82,944 CPUs to do this. More importantly, it took a full 40 minutes to simulate just one second of what the human mind is doing at any one time. So that’s where the world is at today: warehouse-scale supercomputers are 2,400 times slower than the human mind.
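The arithmetic behind that last figure is straightforward: 40 minutes of wall-clock time for one simulated second works out to a 2,400-fold slowdown.

```python
# Sanity check of the Fujitsu K computer figure quoted above:
# 40 minutes of wall-clock time to simulate 1 second of brain activity.
simulated_seconds = 1
wall_clock_seconds = 40 * 60   # 2,400 seconds
slowdown = wall_clock_seconds / simulated_seconds
print(slowdown)   # 2400.0, i.e. "2,400 times slower" than real time
```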

At AMD we certainly see opportunities to speed that up with programmable CPUs, as well as graphics processors like Radeon Instinct, which optimize key aspects of the parallel thinking a human mind might do. Even so, the road to Walter is a long one.

Q: What kind of fail-safes would need to exist in his code and CPU to make Walter safe for humans? What needs to be done to prevent him from getting hacked or turning hostile?

A: This question highlights one of the biggest impediments to wide adoption of AI applications — ensuring there are protections that prevent safety or ethical issues. The first proving ground will be autonomous driving applications, which will require safeguards that could then be applied to other machine-learning applications.

Would you want to own a Walter model? Do you find the concept of androids scary or exciting?

See original version of this story