Driving cars, communicating, making music – none presents a challenge to robots these days. In autumn 2013, the first opera to feature performing, musical robots premiered in Philadelphia and New York.

Rolf Lakämper, Professor of Robotics at Temple University in Philadelphia, worked with composer Maurice Wright to create the opera. Almost 30 years ago, the mathematician founded Germany’s first gaming company, “Magic Bytes”, and created the 8-bit game Mission Elevator. Today he develops autonomous robots. Rolf talked to us about what robots can do these days, what they will be able to do in the future – and what they probably won’t.

Rolf, you research and develop autonomous robots. They roll around the Temple University campus in Philadelphia and talk to students. Do you tell them where to go – like with a remote-controlled car – or do they decide for themselves where they want to go?

The robots steer themselves autonomously, of course. I take them to campus and tell them to roam around to explore their world. They have to locate themselves in space. Each one has a built-in camera and a laser scanner to create a map of the area and to identify its location on the map at the same time. It explores the university campus completely on its own.
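Building a map of the area while locating yourself on it at the same time is the classic SLAM problem. One half of it – turning laser range readings into a map – can be sketched with a tiny occupancy grid. Everything here (grid size, cell scale, the simple ray-stepping model) is an illustrative assumption, not the actual software on Lakämper’s robots:

```python
import math

GRID = 20  # 20x20 cells; each cell is 0.5 m - an illustrative scale


def update_grid(grid, pose, angle, dist, max_range=5.0, cell=0.5):
    """Integrate one laser ray: cells along the beam are free, the hit cell is occupied."""
    x, y, heading = pose
    for i in range(int(dist / cell)):
        r = i * cell
        cx = int((x + r * math.cos(heading + angle)) / cell)
        cy = int((y + r * math.sin(heading + angle)) / cell)
        if 0 <= cx < GRID and 0 <= cy < GRID:
            grid[cy][cx] = 0          # free space the beam passed through
    if dist < max_range:              # the beam actually hit an obstacle
        hx = int((x + dist * math.cos(heading + angle)) / cell)
        hy = int((y + dist * math.sin(heading + angle)) / cell)
        if 0 <= hx < GRID and 0 <= hy < GRID:
            grid[hy][hx] = 1          # occupied


grid = [[-1] * GRID for _ in range(GRID)]     # -1 = unexplored
# Robot at (5 m, 5 m) facing along x; one ray straight ahead hits a wall 2 m away.
update_grid(grid, pose=(5.0, 5.0, 0.0), angle=0.0, dist=2.0)
```

A real system would do this for hundreds of rays per scan and, crucially, correct the pose estimate against the map at the same time – that coupling is what makes SLAM hard.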

After the robot has moved around the area for a while, could you ask it where the dining hall is?

Yes, I could. But to answer such a question, the robot needs to label the newly discovered places. That’s why, whenever it discovers a new place in the environment, it asks a student for its name – and now it knows where the dining hall is, for example. From then on, it can solve navigation tasks, for example finding the best way to get to the dining hall.
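Once places carry names, “finding the best way” reduces to shortest-path search on a graph of labeled places. A sketch using Dijkstra’s algorithm – the campus graph, place names, and distances are invented for illustration:

```python
import heapq

# Hypothetical labeled campus graph: place -> {neighbor: distance in meters}
campus = {
    "entrance":    {"library": 120, "quad": 80},
    "quad":        {"entrance": 80, "dining hall": 150, "library": 60},
    "library":     {"entrance": 120, "quad": 60, "dining hall": 200},
    "dining hall": {"quad": 150, "library": 200},
}


def best_way(graph, start, goal):
    """Dijkstra: return (total distance, list of places on the shortest path)."""
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        dist, place, path = heapq.heappop(queue)
        if place == goal:
            return dist, path
        if place in visited:
            continue
        visited.add(place)
        for neighbor, step in graph[place].items():
            if neighbor not in visited:
                heapq.heappush(queue, (dist + step, neighbor, path + [neighbor]))
    return float("inf"), []


print(best_way(campus, "entrance", "dining hall"))
# -> (230, ['entrance', 'quad', 'dining hall'])
```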

How does a robot talk to students?

It has a laptop with an integrated microphone and speaker that runs speech synthesis and speech recognition software. There is nothing particularly special about it; the software has been around for a while.

Seeing and speaking are no problem, but understanding and recognition are still tough.

What is currently the biggest challenge in robot development?

Seeing and speaking are no problem, but understanding and recognition are still tough. In other words: evaluating the content and recognizing connections is the challenge. Robots initially just see a mass of tiny pixels that they need to unify. Understanding an environment is quite another task. A step toward it is classification and inference – for example, when a robot classifies the data it sees as "street", it will also expect cars to be driving on it.
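The classification-then-inference step can be caricatured in a few lines: once a region of the image is classified, class-level knowledge turns into expectations about what else should be there. The labels, the toy "classifier", and the rule table are all invented for illustration:

```python
# Hypothetical class-level knowledge: what a robot expects once a region is classified
expectations = {
    "street":   ["cars", "lane markings"],
    "sidewalk": ["pedestrians"],
    "lawn":     [],
}


def classify(region):
    """Toy classifier on hand-made features - a stand-in for real visual understanding."""
    if region["dominant_color"] == "gray" and region["elongated"]:
        return "street"
    if region["dominant_color"] == "green":
        return "lawn"
    return "sidewalk"


def infer(region):
    """Classify a region, then attach the expectations that come with that class."""
    label = classify(region)
    return label, expectations[label]


label, expected = infer({"dominant_color": "gray", "elongated": True})
print(label, expected)  # a gray, elongated region is read as a street, so cars are expected
```

The hard part in practice is of course the classifier itself – here it is two if-statements, in reality it is where most of the research effort goes.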

How do you build your robots – do you have your own workshop?

I don’t build the robots, nor do I develop the speech recognition. Those are off the shelf. I just create the bit that goes on behind the eyes: the programming of algorithms for visual understanding. My robots do not look particularly impressive: they are more like a box on three wheels with technical-looking lasers and cameras on top. However, now that we have 3D printers, I also sometimes print smaller robots myself using MakerBots, the plastic printers you see everywhere. I print the plastic components and install tiny motors and computers.