The robots are here. They’re scanning shelves at your local supermarket, delivering food, and even assisting nurses in the hospital. As robots begin to infiltrate human spaces, designers and engineers face a question: How do you convince people to view them as approachable and friendly, rather than ignoring or avoiding them?

Addressing the complexity of human-robot interaction was the goal of the designers behind a new robot at the central Oodi library in Helsinki, Finland. The library brought in the digital consultancy Futurice to help it transform some of its existing robots—which help move books between floors—into bots that could help librarians with other tasks. After interviewing several librarians, the Futurice team decided to reprogram the robots to perform a task that the humans universally hated: showing customers where the fiction section is, or pointing them toward the bathrooms. It’s a simple task, but a time-consuming one. Perfect to hand off to a robot.

But there was a problem.

“As we were testing [the robot], people weren’t relating to it as a social object. Some children were jumping on top of it, impeding it doing its job,” says Minja Axelsson, a roboticist at Futurice who designed, coded, and tested the robot. People weren’t necessarily put off by the robot, but they certainly didn’t connect with it. The bot’s form—basically, a box on wheels—was simply too abstract for people to make sense of what it was and how they were supposed to interact with it.

To help people see the robot as a friendly helper, Futurice came up with a simple interface: Googly eyes. Inspired by one of Disney’s 12 basic principles of animation—a list of rules published in 1981 that the company’s animators use to create the illusion of life in their illustrations, including the idea of “exaggeration”—Axelsson decided to use googly eyes combined with sound and movement to both show the robot’s intent and express its state of being. Most importantly, the eyes are programmed to indicate the robot’s direction to customers, so they’re not caught unaware when it’s moving around.

But Axelsson also created a matrix of behaviors to make the robot seem more dynamic, like spinning around and beeping, based on what happened to it. If it successfully led a person to the section they were searching for, its “emotions” would become more positive with high arousal, leading to the robot chirping happily. If it failed, its state would become more negative, and if it wasn’t being used, its emotional state would tend toward low arousal—which would result in it moving around and trying to get people’s attention using its eyes. For instance, if customers haven’t plugged a request into the robot’s tablet in a while, it starts to get “bored” while it sits in its position at the top of the entrance stairs. “If it hasn’t had a mission for a long time, it can start to move around a little bit, to indicate, ‘Hey! I’m here!'” Axelsson says.

If someone then comes over to interact with it and puts a location into the tablet, the robot’s state changes again toward happier beeping, with its googly eyes leading the customer in the direction they both want to go.
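The behavior matrix described above maps a two-dimensional emotional state—valence (positive or negative) and arousal (high or low)—onto expressive behaviors. A minimal sketch of that idea in Python follows; the class, method names, thresholds, and behavior strings are all hypothetical illustrations, not Futurice’s actual code:

```python
# Hypothetical sketch of a valence/arousal emotion model like the one the
# article describes. All names and numeric thresholds are invented here.
from dataclasses import dataclass


@dataclass
class EmotionState:
    valence: float = 0.0  # negative (sad) .. positive (happy), in [-1, 1]
    arousal: float = 0.5  # low (bored) .. high (excited), in [0, 1]

    def _clamp(self) -> None:
        self.valence = max(-1.0, min(1.0, self.valence))
        self.arousal = max(0.0, min(1.0, self.arousal))

    def on_mission_success(self) -> None:
        # Successfully led a customer: more positive, more aroused.
        self.valence += 0.5
        self.arousal += 0.5
        self._clamp()

    def on_mission_failure(self) -> None:
        # Failed to complete a guidance request: valence drops.
        self.valence -= 0.5
        self._clamp()

    def on_idle_tick(self) -> None:
        # No tablet requests for a while: arousal decays toward "bored".
        self.arousal = max(0.0, self.arousal - 0.1)

    def behavior(self) -> str:
        # Map the 2-D state onto one of the expressive behaviors.
        if self.arousal < 0.2:
            return "wander and catch attention with eyes"  # bored
        if self.valence > 0.3:
            return "chirp happily and spin"
        if self.valence < -0.3:
            return "beep sadly"
        return "neutral"
```

Under this sketch, a long idle stretch drives arousal down until the robot starts wandering to attract attention, and a successful guidance mission pushes it back into happy chirping—matching the transitions the article describes.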