Crowdsourcing can be a quick and effective way to teach a robot how to complete tasks, University of Washington computer scientists have shown.

Learning by imitating a human is a proven approach to teaching a robot to perform tasks, but it can take a lot of time. If the robot could instead learn a task’s basic steps and then ask the online community for additional input, it could collect more data on how to complete the task efficiently and correctly.

So the team designed a study that taps into the online crowdsourcing community to teach a robot a model-building task. To begin, each study participant built a simple model — a car, a tree, a turtle or a snake, among other designs — out of colored Lego blocks. Then, the participants asked the robot to build a similar object.

But based on the few examples provided by the participants, the robot was unable to build complete models. So to gather more input about building the objects, the robot turned to the crowd.

The researchers hired people on Amazon Mechanical Turk, a crowdsourcing site, to build similar models of a car, tree, turtle, snake and others. From more than 100 crowd-generated models of each shape, the robot searched for the best models to build, based on difficulty of construction, similarity to the original and the online community’s ratings of the models.
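The selection step described above can be pictured as a weighted ranking over candidate models. The sketch below is illustrative only: the field names, weights and scoring function are assumptions for the example, not the values used in the study.

```python
# Hypothetical sketch: rank crowd-built candidate models by ease of
# construction, similarity to the participant's original, and crowd
# rating. All weights and fields are illustrative assumptions.

def score(candidate, w_difficulty=0.4, w_similarity=0.4, w_rating=0.2):
    """Higher is better; construction difficulty is penalized."""
    return (-w_difficulty * candidate["difficulty"]
            + w_similarity * candidate["similarity"]
            + w_rating * candidate["rating"])

def best_model(candidates):
    """Pick the highest-scoring candidate model."""
    return max(candidates, key=score)

# Two hypothetical crowd-built "car" models (all values in [0, 1]).
candidates = [
    {"name": "car_A", "difficulty": 0.9, "similarity": 0.95, "rating": 0.8},
    {"name": "car_B", "difficulty": 0.3, "similarity": 0.85, "rating": 0.9},
]
```

With these made-up numbers, `best_model(candidates)` prefers the simpler `car_B` even though `car_A` looks slightly more like the original — the trade-off the article describes.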

The robot then built the best models of each participant’s shape.

“We’re trying to create a method for a robot to seek help from the whole world when it’s puzzled by something,” said Rajesh Rao, an associate professor of computer science and engineering and director of the Center for Sensorimotor Neural Engineering at the UW. “This is a way to go beyond just one-on-one interaction between a human and a robot by also learning from other humans around the world.”

Goal-based imitation

This type of learning is called “goal-based imitation.” It leverages the growing ability of robots to infer what their human operators want; the robot then comes up with the best possible way of achieving that goal, considering factors such as time and difficulty.

For example, a robot might “watch” a human building a turtle model, infer the important qualities to carry over, then build a model that resembles the original, but is perhaps simpler so it’s easier for the robot to construct.
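One way to picture this inference step is that the robot treats the qualities most crowd examples share as the goal, and builds the simplest model that keeps them. The toy sketch below makes that concrete; the set-of-parts representation and the majority threshold are assumptions for illustration, not the study’s actual algorithm.

```python
# Toy sketch of goal inference: treat the parts that appear in most
# crowd-built examples as the "important" qualities to carry over.
# The part-set representation and threshold are illustrative assumptions.
from collections import Counter

def important_parts(crowd_models, threshold=0.5):
    """Return parts present in at least `threshold` of the crowd models."""
    counts = Counter(part for model in crowd_models for part in set(model))
    n = len(crowd_models)
    return {part for part, c in counts.items() if c / n >= threshold}

# Three hypothetical crowd-built "turtle" models as sets of part labels.
crowd = [
    {"shell", "head", "legs", "tail"},
    {"shell", "head", "legs"},
    {"shell", "head", "eyes"},
]
goal = important_parts(crowd)  # the inferred essentials of a "turtle"
```

Here the inferred goal keeps the shell, head and legs but drops the tail and eyes, yielding a simpler model that still resembles the original — mirroring the behavior described above.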

Study participants generally preferred crowdsourced versions that looked the most like their original designs. In general, the robot’s final models were simpler than the starting designs — and it was able to successfully build these models, which wasn’t always the case when starting with the study participants’ initial designs.

The team applied the same idea to learning manipulation actions on a two-armed robot. This time, users physically demonstrated new actions to the robot.

Then, the robot imagined new scenarios in which it did not know how to perform those actions. Using abstract, interactive visualizations of the action, it asked the crowd to provide new ways of performing actions in those new scenarios.

More complex tasks

The UW team is now looking at using crowdsourcing and community-sourcing to teach robots more complex tasks such as finding and fetching items in a multi-floor building. The researchers envision a future in which our personal robots will engage increasingly with humans online, learning new skills and tasks to better assist us in everyday life.

“Service robots in the home or in workplaces will be faced with tremendous variability in the situations they need to operate in,” said Maya Cakmak, an assistant professor of computer science and engineering at the University of Washington, in an email to KurzweilAI. “Crowdsourcing can be a scalable solution for customizing these robots to properly function in each particular environment, based on the preferences of each particular user.

“Our work also proposes a new framework in which the crowdsourced task is seeded by local users, rather than being directly requested by researchers. This can lead to new theoretical work on the incentive models behind this collaboration between end-users and crowd workers.

“Our work is complementary to other work on using crowdsourcing to train robots, in that different research groups have looked into crowdsourcing different learning problems. We have looked into high-level task descriptions (ICRA paper) and two-armed manipulation actions (HCOMP paper); while other groups explored learning executions of natural language commands (Cornell) or learning object grasps (WPI).”

So when can we expect this innovation to be available commercially? “I would argue that robotics companies will initially provide remote robot training services themselves,” said Cakmak. “This is just around the corner, perhaps within two to three years. Crowdsourcing of such services might take more time and I think the main roadblocks are quality control and privacy of the end-users.”

Other research teams at Brown University, Worcester Polytechnic Institute and Cornell University are working on similar ideas for developing robots that have the ability to learn new capabilities through crowdsourcing.

The research team presented its results at the 2014 Institute of Electrical and Electronics Engineers (IEEE) International Conference on Robotics and Automation in Hong Kong in early June. This work will also be presented at the Conference on Human Computation and Crowdsourcing in November.

This research was funded by the U.S. Office of Naval Research and the National Science Foundation.




Abstract of Institute of Electrical and Electronics Engineers International Conference on Robotics and Automation presentation

Although imitation learning is a powerful technique for robot learning and knowledge acquisition from naïve human users, it often suffers from the need for expensive human demonstrations. In some cases the robot has an insufficient number of useful demonstrations, while in others its learning ability is limited by the number of users it directly interacts with. We propose an approach that overcomes these shortcomings by using crowdsourcing to collect a wider variety of examples from a large pool of human demonstrators online. We present a new goal-based imitation learning framework which utilizes crowdsourcing as a major source of human demonstration data. We demonstrate the effectiveness of our approach experimentally on a scenario where the robot learns to build 2D object models on a table from basic building blocks using knowledge gained from locals and online crowd workers. In addition, we show how the robot can use this knowledge to support human-robot collaboration tasks such as goal inference through object-part classification and missing-part prediction. We report results from a user study involving fourteen local demonstrators and hundreds of crowd workers on 16 different model building tasks.