Last November, in the “Clueless Gamer” segment of his late-night talk show, Conan O’Brien played a few levels of Call of Duty: Advanced Warfare. The video game stars Kevin Spacey as the scheming president of a private military firm; his performance is rendered in striking digital detail, save for one conspicuous flaw. “Can’t they fix his eyes?” O’Brien wondered. “They hired one of the greatest actors in the world and then they give him the eyes of a carp that’s been in the refrigerator for three days.”

Creating computer-generated eyes that, in O’Brien’s words, “actually have life in them” has been a perennial challenge for visual-effects artists and animators. Like long hair and loose clothing that don’t conform to the laws of physics, soulless eyes can send a movie or video game hurtling into the so-called Uncanny Valley, that nearly-but-not-quite realm where C.G.I. characters look just inhuman enough to be profoundly disconcerting. Though motion-capture technology has advanced considerably in recent years, allowing for the digital replication of an actor’s performance, the eye is often synthesized from scratch, a time-consuming and painstaking process that still frequently yields mediocre results. It is the equivalent of Leonardo da Vinci letting someone else slap a couple of peepers on the canvas while he goes home early.

Late last year, however, at a computer-graphics symposium in Shenzhen, China, a team from Disney Research Zurich outlined a solution in a paper called “High-Quality Capture of Eyes.” The key to a convincing digital eye, the group argued, was what one of its members called “perfect imperfections.” Animators commonly render eyes as generically geometrical, building them out of two spheres: a large one for the eyeball, and a portion of a smaller one for the protuberance of the cornea. The Disney Research team had discovered, however, that the eye, far from being spherical, is actually irregular. The group’s other assumptions—that the left and right eyes are identical, and that the surface of both is smooth—turned out to be equally false. The left and right eyes are, in fact, more or less mirror versions of each other, and the sclera, the white of the eye, is covered in small bumps. Furthermore, the blood vessels in the sclera, the mechanics of the multilayered iris, and the sheen of the liquid tear layer all vary considerably from person to person. In short, the eye is far more dynamic and idiosyncratic than the industry had so far accounted for.
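The generic model the researchers were pushing back against is simple enough to write down. The sketch below builds the conventional two-sphere eye the article describes: a large sphere for the eyeball with a smaller corneal sphere bulging from its front. The radii are typical textbook values assumed here for illustration; they are not figures from the Disney paper, and the perfectly smooth, symmetric surface the code produces is precisely what the team found to be unrealistic.

```python
import math

# The generic "two-sphere" eye model: a large sphere for the eyeball
# and part of a smaller one for the corneal bulge. Radii are typical
# textbook values, assumed for illustration only.
EYEBALL_RADIUS_MM = 12.0       # eyeball modeled as a 24-mm sphere
CORNEA_RADIUS_MM = 7.8         # cornea modeled as a smaller spherical cap
CORNEA_CENTER_OFFSET_MM = 5.6  # corneal center sits forward of the eyeball center

def surface_point(theta, phi):
    """Return a 3-D point (in mm) on the model eye's surface.

    Directions near the front axis (+z) land on the protruding corneal
    sphere; everywhere else lands on the eyeball sphere.
    """
    # Unit direction from the eyeball center.
    d = (math.sin(theta) * math.cos(phi),
         math.sin(theta) * math.sin(phi),
         math.cos(theta))
    # Distance t at which the ray t*d leaves the corneal sphere,
    # whose center is offset by c along +z: solve |t*d - (0,0,c)| = r.
    c, r = CORNEA_CENTER_OFFSET_MM, CORNEA_RADIUS_MM
    dz = d[2]
    disc = (c * dz) ** 2 - (c * c - r * r)
    if disc > 0:
        t = c * dz + math.sqrt(disc)
        if t > EYEBALL_RADIUS_MM:  # corneal bulge protrudes past the eyeball
            return tuple(t * x for x in d)
    return tuple(EYEBALL_RADIUS_MM * x for x in d)

# Straight ahead, the surface bulges forward onto the corneal cap...
front = surface_point(0.0, 0.0)
# ...while off to the side it is just the plain eyeball sphere.
side = surface_point(math.pi / 2, 0.0)
```

With these numbers the corneal apex sits at 13.4 mm, 1.4 mm proud of the eyeball sphere, and the cap spans roughly the width of a real cornea; what the model cannot produce is any of the asymmetry or scleral bumpiness the Disney team measured.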

On the basis of these realizations, the Disney team devised a new kind of examination that captures the eye’s intricacies in high resolution. A subject, lying face up with her head stabilized and her eyes opened as wide as possible, is asked to focus on a range of points at varying distances and to gaze in eleven different directions: straight ahead, up, down, left, right, up-left, down-right, and so on. Meanwhile, an array of six cameras positioned above her takes a series of photos, first to capture the surface variation of the sclera. Then colored L.E.D.s are used to highlight and map the shape of the mostly transparent cornea. Pupil dilation—the cinching and uncinching of the muscle around the iris—is manipulated with a white L.E.D., shone into the eye at differing degrees of brightness. The process, which takes about twenty minutes per eye, yields some hundred and forty images.
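One rough way to see how a session adds up to that many photographs is to lay the protocol out as a shot list. The eleven gaze directions, the six cameras, and the two L.E.D. passes come from the description above; how the shots divide among the passes is an assumption made here purely so the arithmetic is concrete, since the article says only that a session yields roughly a hundred and forty images per eye.

```python
from itertools import product

GAZES = range(11)   # straight ahead, up, down, left, right, and so on
CAMERAS = range(6)  # the array of six cameras above the subject

shots = []

# Pass 1: sclera surface -- every camera fires at every gaze direction.
for gaze, cam in product(GAZES, CAMERAS):
    shots.append(("sclera", gaze, cam, None))

# Pass 2: cornea shape -- colored L.E.D.s highlight the transparent
# cornea. Five lighting patterns at the straight-ahead gaze is a
# hypothetical count, not a figure from the paper.
for pattern, cam in product(range(5), CAMERAS):
    shots.append(("cornea", 0, cam, ("color_led", pattern)))

# Pass 3: pupil dilation -- a white L.E.D. at increasing brightness.
# Eight brightness levels is likewise a hypothetical count.
for level, cam in product(range(8), CAMERAS):
    shots.append(("pupil", 0, cam, ("white_led", level)))

total = len(shots)  # 66 + 30 + 48 = 144 shots under these assumptions
```

Under this invented breakdown the tally comes to a hundred and forty-four, on the order of the figure the researchers report.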

Using their data, the group was able to reconstruct nine eyes in striking digital detail. The renderings include a pair of noticeably larger eyeballs (those of a short-sighted subject), instances of pinguecula (a common and benign growth on the sclera), and wild variations in iris shape and behavior. Their conclusion, as they put it in their paper, was that the eyes, especially the microgeometry of the iris, are “as unique to every person as a fingerprint.”

Nathan Radcliffe, an ophthalmologist at New York University, told me that he was impressed with the Disney group’s results, especially the attention paid to the way that light refracts through the dome-shaped cornea, an effect similar to that of a straw appearing to bend in a glass of water. This, he noted, is an essential aspect of making eyes look realistic. He added that the technology might also have clinical applications. “Potentially, by modelling them in three dimensions, using this technique, you could get information on whether they’re changing or growing over time,” he said. “Change in the eye is often a sign of disease.”

Of course, as authentic as the disembodied digital eyes seem by themselves, the real test will be to see how they look in situ. The Disney team suggests that, to save animators even more time, 3-D-rendering software could be equipped with a sort of artificial eye intelligence, allowing a C.G.I. character’s pupils to expand and contract of their own accord as lighting changes in a scene. Even so, one of the team’s scientists, Derek Bradley, was quick to point out that ultimate creative control and license will remain with the artists. “Our results can provide a great reference or starting point,” he said. “The goal is not to replace the artistic method.” (This has been a cause of controversy in the industry since 2014, when Andy Serkis, the English actor best known for his motion-captured C.G.I. performances—as Gollum, King Kong, and Caesar the ape—equated animation work with the mere application of “digital makeup.”)
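The kind of automatic pupil behavior the team envisions could, in principle, be driven by any model that maps scene brightness to pupil size. The paper, as described here, does not specify one; the sketch below uses the classic Moon and Spencer formula of 1944, an empirical fit of pupil diameter to luminance, purely as an illustration of the idea (the units and constants follow the form in which that fit is commonly quoted).

```python
import math

def pupil_diameter_mm(luminance):
    """Moon-Spencer (1944) empirical fit: pupil diameter in millimetres
    as a function of scene luminance (commonly quoted with luminance in
    candelas per square metre). Illustrative only -- the Disney paper,
    as reported, does not prescribe a particular model.
    """
    return 4.9 - 3.0 * math.tanh(0.4 * math.log10(luminance))

# In a dim scene the digital pupil should dilate; in a bright one,
# constrict -- the behavior a renderer could apply automatically as
# the lighting in a scene changes.
dim = pupil_diameter_mm(0.01)      # roughly a moonlit room
bright = pupil_diameter_mm(1000.0) # roughly daylight
```

Because the hyperbolic tangent saturates, the model's diameters stay within the plausible range of about two to eight millimetres no matter how extreme the lighting, which is the sort of built-in bound an automated "eye intelligence" would want.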

Meanwhile, Bradley and his colleagues at Disney Research have acquired enough of an understanding of the eye to identify an area for further research: movement. “A lot of the dead-eye look can come from the animated motion of the eye rather than the static shape,” Bradley told me. “If the dynamics of the eyes are not a hundred per cent correct, then it’s something people pick up on.” As for when audiences can expect to see the results of this research, Bradley said that they may be onscreen “in the near future”—digital eyes that, finally, satisfy our real ones.