Researchers at the University of British Columbia enlisted the help of a human-friendly robot named Charlie to study the simple task of handing an object to a person. Past research has shown that people have difficulty figuring out when to reach out and take an object from a robot because robots fail to provide appropriate nonverbal cues.

“We hand things to other people multiple times a day and we do it seamlessly,” says AJung Moon, a PhD student in the Department of Mechanical Engineering. “Getting this to work between a robot and a person is really important if we want robots to be helpful in fetching us things in our homes or at work.”

Moon and her colleagues studied what people do with their heads, necks and eyes when they hand water bottles to one another. They then tested three variations of this interaction with Charlie and the 102 study participants.

Programming the robot to use eye gaze as a nonverbal cue made the handover more fluid. Participants reached out to take the water bottle sooner when the robot turned its head to look at the handover location, or looked at the handover location and then up at the person to make eye contact.

“We want the robot to communicate using the cues that people already recognize,” says Moon. “This is key to interacting with a robot in a safe and friendly manner.”

The paper won the best paper award at the 2014 ACM/IEEE International Conference on Human-Robot Interaction (HRI).

Abstract of the paper, from the Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction

In this paper we provide empirical evidence that using humanlike gaze cues during human-robot handovers can improve the timing and perceived quality of the handover event. Handovers serve as the foundation of many human-robot tasks. Fluent, legible handover interactions require appropriate nonverbal cues to signal handover intent, location and timing. Inspired by observations of human-human handovers, we implemented gaze behaviors on a PR2 humanoid robot. The robot handed over water bottles to a total of 102 naïve subjects while varying its gaze behavior: no gaze, gaze designed to elicit shared attention at the handover location, and the shared attention gaze complemented with a turn-taking cue. We compared subject perception of and reaction time to the robot-initiated handovers across the three gaze conditions. Results indicate that subjects reach for the offered object significantly earlier when a robot provides a shared attention gaze cue during a handover. We also observed a statistical trend of subjects preferring handovers with turn-taking gaze cues over the other conditions. Our work demonstrates that gaze can play a key role in improving user experience of human-robot handovers, and help make handovers fast and fluent.