A group of Carnegie Mellon researchers, in association with Disney Research of Pittsburgh, are bringing animations closer to reality by modeling accurate eye blinks.

Conventional systems that model eye blinks have always assumed them to be symmetric. In other words, during an eye blink, a person’s eyelids move down at the same rate that they move back up. While this may be a rational assumption to make, the researchers’ high-speed cameras tell a slightly different story: real human eyelids close quickly during an eye blink, followed by a more gradual opening back up.

While it may not be apparent what difference this minor detail makes, it turns out to be a matter of huge importance for animators striving for realism, especially in big-budget feature-length animated films. Laura Trutoiu, a Ph.D. student in Carnegie Mellon’s Robotics Institute involved in this research, shed some light on why this is the case.

“Because we see so many eye blinks daily, we’re pretty good at intrinsically ‘understanding’ a good eye blink,” Trutoiu explained. “So even though it’s very hard for a person to tell you what is a good eye blink, when you actually see something that’s wrong with one, that’s when you can tell.”

To further illustrate her point, Trutoiu pointed to a study her team conducted in which participants viewed over 300 types of blink animations and rated the “naturalness” of each one. In spite of the overall tediousness of the process, participants consistently rated the blinks resembling real data significantly higher than those generated by simple, symmetric algorithms.

“People might not be able to describe what’s different about them, but they do recognize them as different,” summarized Liz Carter, a research associate in the Robotics Institute also involved with the blinking studies.

There is a lot more to the team’s research than just the speed at which the eye blinks. “There are other interesting points, like how the lower eyelid moves, how the eyes close, and so on,” said Trutoiu.

Taking these and other factors into consideration, the research team used tracking software to capture real human eye motions, generating a data set of motion measurements that could be arranged into a matrix. Principal Component Analysis (PCA), a technique for extracting the most important features from arbitrary input data, was then applied to this data to generate new, realistic blinking motions.

“Using PCA to lower the dimensionality of the data can also bring out patterns in the original data that would otherwise be harder to detect,” Trutoiu noted.

“In the case of eye blinks, a 150-point time series can be represented with 3–5 principal components.” In other words, PCA takes the plethora of information about eye blink motions and reduces it to its most basic form.
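The idea can be sketched in a few lines of code. The blink curves below are synthetic stand-ins (not the researchers’ actual capture data): each “blink” is a hypothetical 150-sample eyelid-position time series with a fast close and a slow reopen, and PCA is computed via a singular value decomposition of the mean-centered data matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
n_blinks, n_samples = 40, 150
t = np.linspace(0.0, 1.0, n_samples)

# Synthetic asymmetric blink profiles: a sharp closing peak plus a
# broader, slower reopening tail, with small per-blink variation.
blinks = np.array([
    np.exp(-((t - 0.25 + 0.02 * rng.standard_normal()) ** 2)
           / (2 * (0.05 + 0.01 * rng.random()) ** 2))
    + 0.5 * np.exp(-((t - 0.55) ** 2) / (2 * 0.15 ** 2))
    for _ in range(n_blinks)
])

# PCA via SVD on the mean-centered data matrix (rows = blinks).
mean = blinks.mean(axis=0)
U, S, Vt = np.linalg.svd(blinks - mean, full_matrices=False)

# Reconstruct every 150-sample blink from only k principal components.
k = 3
reconstructed = mean + (U[:, :k] * S[:k]) @ Vt[:k]

# Fraction of the data's variance captured by those k components.
explained = (S[:k] ** 2).sum() / (S ** 2).sum()
```

Each of the 150-dimensional curves is now summarized by just three coefficients, and mixing those coefficients within the observed range yields new, plausible blink trajectories.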

Animating realistic eye blinks on three-dimensional models is a great way for animators to add more realism to their work, but they also have to be wary of falling into a hidden trap.

“So, basically, as robots or animated characters become more realistic, you like them up to a certain point,” Carter said. “[After this point], it becomes really creepy and you don’t like them at all.”

“It’s the uncanny valley hypothesis,” Trutoiu added, referring to the term coined by robotics researcher Masahiro Mori in 1970.

“It’s hard to say how important eye blinks are overall, but if you want realistic character animations, you have to get everything right. If that means raising the eyebrows or the eye blinks correctly, it’s going to make a huge difference. If you mess up one of those tiny, tiny things, you’ve just ruined the whole image.”

Readers who have seen the 2007 CGI film Beowulf may be familiar with this effect already. Some have noted that, despite its ultra-realistic ambitions, its animated faces lack the “true spark of life.” So if you’ve ever been unnerved by that “almost real but not quite” look in Beowulf or other ultra-realistic animations, the uncanny valley hypothesis might have had something to do with it.

The researchers published their study in the Association for Computing Machinery Transactions on Applied Perception, and gave a talk on their findings at the Symposium on Applied Perception in Graphics and Visualization in Toulouse, France, earlier this year.