RoboThespian greets you – but who controls it? © The Board of Trustees of the Science Museum

RoboThespian welcomes visitors to the opening of Robots at London’s Science Museum with suitable drama. The life-sized humanoid blinks its pixelated eyes, moves its head and gestures theatrically as it introduces the exhibition with great enthusiasm. You might expect the robo-actor to give you a guided tour – if it wasn’t bolted to the floor.

But move on a step and the illusion is shattered. Behind a wall sits engineer Joe Wollaston, with a computer and a headset. From here, he can see and hear people approaching RoboThespian through a camera and a mic on the robot. When he speaks, his voice booms out of the robot’s mouth.

Wollaston is RoboThespian’s Wizard of Oz, and this is a peek behind the curtain. “What you just saw was an example of our telepresence application,” he says after the robot’s introductory speech. “So it’s actually remotely operated.”

RoboThespian can recognise people’s movements and deliver programmed messages, but a human has to step in for anything more complex. It is, as Wollaston says, “artificial AI”.

The Silver Swan is an intricate automaton © The Board of Trustees of the Science Museum

This illusion of intelligence is one of the underlying messages of the exhibition, which tracks 500 years of robots, from the earliest automatons to present-day research. Pinned to the next wall is an eerily realistic animatronic baby, commissioned from a special effects company. The baby wriggles its arms and legs and even “breathes”.

It is convincing, but its brain is still a long way off that of a newborn – all of the movements are pre-programmed. In this respect, there’s not that much difference between the baby and much earlier robots such as the Silver Swan (shown above), an intricate automaton made in 1773 that twists its neck to preen its feathers, dips its head into a river of glass rods and catches a silver fish in its beak. The baby uses modern programming, the swan runs on clockwork, but both impress by performing a physical display of an intellect they don’t actually possess. They’re just going through the motions; they don’t have a brain.

Despite its age, the Silver Swan is a highlight of the show: even beside the most recent and impressive humanoid robots it is a wonderful thing. Perhaps it is because it is not trying to emulate a human that it continues to inspire awe.

If one thing becomes clear from the exhibition’s journey through attempts at building robots in our image, it’s that we still haven’t cracked it. A 16th-century automaton monk can walk across a tabletop, lift a crucifix and pray. Skip forward to the present day and we are still struggling to refine bipedal robot legs capable of naturalistic walking and dexterous hands capable of human-like precision.

ROSA mimics human movement © Plastiques Photography, courtesy of the Science Museum

While we worry about superintelligent robots turning Terminator, the challenges roboticists face are much more mundane. Stairs, for example. “Humans are pretty much the cutting edge of, well, human ability,” says Anna Darron, one of the curators. “To build a machine that can do everything that we do is a massive challenge.”

The most painstaking attempts at mimicking human movement use human anatomy as a starting point. CRONOS ECCE1 and Rob’s Open Source Android (ROSA) both take this approach, with articulated skeletons, motorised muscles and artificial tendons (made from string in ROSA) on display.

Why go to so much trouble? There’s more than a hint of narcissism in our obsession with making humanoid robots – which, Darron points out, date all the way back to Greek legends of mechanical people – but there are also pragmatic reasons to favour the human form.

Kodomoroid reads the news © Plastiques Photography, courtesy of the Science Museum

“On a practical level, having a human-like machine or a machine with human-like abilities enables it to work in a human environment,” she says. “We build the environment for ourselves – we don’t want to have to adapt it for a machine.”

To be fair to the humanoids and their makers, this is also a major reason why building a useful human-like robot is so much harder than building a swan that looks like it’s swimming when you turn a handle. Our environments aren’t predictable, so a robot that can walk around in a real-world setting would need to be able to cope with different terrain, navigate around furniture, and avoid bashing into humans that get in its way.

To do this, these robots need some of what we call AI – a level of agency beyond the automaton swan or animatronic baby. They use sensors to see and feel the world around them and calculate how to react. ROSA, like RoboThespian, has face-tracking software that allows it to follow visitors with its head or eyes as they move.
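The face tracking described here can be reduced to simple geometry: find where the face sits in the camera frame, and nudge the head towards it. The sketch below is purely illustrative – the function names, thresholds and control scheme are assumptions, not the actual RoboThespian or ROSA software, which is not public.

```python
# Hypothetical sketch: turning a detected face position into pan/tilt
# commands for a robot head. A real system would get face_center from a
# face-detection library; here we only show the control arithmetic.

def gaze_offset(face_center, frame_size):
    """Map a face's pixel position to normalised pan/tilt in [-1, 1].

    face_center: (x, y) pixel coordinates of the detected face.
    frame_size:  (width, height) of the camera frame.
    """
    x, y = face_center
    w, h = frame_size
    pan = 2.0 * x / w - 1.0   # -1 = far left of frame, +1 = far right
    tilt = 2.0 * y / h - 1.0  # -1 = top of frame, +1 = bottom
    return pan, tilt

def head_command(pan, tilt, deadband=0.1):
    """Only move the head when the face is clearly off-centre,
    so the robot doesn't twitch at every pixel of noise."""
    move_pan = pan if abs(pan) > deadband else 0.0
    move_tilt = tilt if abs(tilt) > deadband else 0.0
    return move_pan, move_tilt
```

A face dead-centre in a 640×480 frame yields `(0.0, 0.0)` – no movement – while a face at the right edge yields a full pan command of `1.0`.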

The final room of the exhibition showcases robots that are already sharing our space today. Some are designed purely to entertain, like Honda’s iconic ASIMO or Toyota’s trumpet-playing Harry. Others are intended to serve, like Japanese roboticist Hiroshi Ishiguro’s startlingly lifelike newsreader Kodomoroid or Toyota’s robot nurse prototype Human Support Robot. And then there are those that are put to work, like Rethink Robotics’ Baxter and ABB Robotics’ Yumi, both designed for factory assembly lines.

The iCub has the ability to learn © Plastiques Photography, courtesy of the Science Museum

These robots are a joy to watch. Some can make facial expressions or track the movements of people around them. Yumi twists its arms in a manner curator Ling Lee compares to a “yogic contortionist”. But each robot is only capable of doing the thing it’s designed to do. Give Baxter a trumpet and it won’t make a sound; put Harry on a production line and it won’t make a thing.

That is beginning to change. As AI advances, we are starting to develop robots that can learn. The last robot that visitors meet at the museum is iCub, a humanoid the size of a young child developed at the Italian Institute of Technology. The iCub platform, which runs on a separate computer, uses artificial neural networks to learn about the world through observation, just like a child. Show it a box while saying “this is a box” and it will learn to recognise the object. Guide it to move on its feet and it will learn to walk.
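The “show it a box” style of learning can be illustrated with a toy classifier: pair each observation with a spoken label, then name new observations by their closest learned prototype. This nearest-centroid sketch is an assumption for illustration only – it is far simpler than the neural networks iCub actually uses, and the class and method names are invented.

```python
# Toy illustration of learning labels from paired observations, in the
# spirit of "this is a box". Features are hand-made numbers standing in
# for camera features; a nearest-centroid rule stands in for iCub's
# real (and much more capable) neural networks.

class LabelLearner:
    def __init__(self):
        self.sums = {}    # label -> per-feature running totals
        self.counts = {}  # label -> number of examples seen

    def show(self, features, label):
        """'This is a box': pair one observation with its name."""
        total = self.sums.setdefault(label, [0.0] * len(features))
        for i, f in enumerate(features):
            total[i] += f
        self.counts[label] = self.counts.get(label, 0) + 1

    def name(self, features):
        """Name a new observation by its closest learned prototype
        (squared Euclidean distance to each label's mean features)."""
        def dist(label):
            c = self.counts[label]
            return sum((f - s / c) ** 2
                       for f, s in zip(features, self.sums[label]))
        return min(self.sums, key=dist)
```

After being shown one “box” at features `[1.0, 0.0]` and one “ball” at `[0.0, 1.0]`, the learner names a new `[0.9, 0.1]` observation “box” – but, as Metta notes below, nothing it learns here transfers to any other task.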

However, the neural networks still have to be customised for each task, says research director Giorgio Metta. The robot may look like a five-year-old, but its mental ability is nowhere near. “The intelligence we manage to put into these machines is really very limited and domain-specific,” he says. “Maybe we solve one problem, but transferring from one problem to another is very difficult – while a child will immediately learn something and the day after re-use that knowledge in a new domain.”

To make a robot capable of learning the way we do requires something we don’t yet have: general AI, artificial intelligence that can perform a wide range of tasks. Only then will we have a robot that truly behaves like a human, with no wizard behind the curtain.

Robots takes place at the Science Museum, London, from 8 February to 3 September