Though some aspects of the user interface are necessarily immersion-breaking, filmic touches like camera shake, depth-of-field photography and motion blur can make it hard to differentiate between Ryse and a motion-captured CG film, particularly at a mere glance. Of course, the uncanny valley is most squarely focused on human likeness — the term was coined in 1970 by robotics scientist Masahiro Mori to explain why we find technological facsimiles that look like us so disturbing — and breaking free from the confines of the uncanny is where Crytek has arguably labored the most, creating some of the most realistically rendered game characters to date.

When it comes to believability, this focus is to be expected.

“The biggest [factor] to get over the uncanny valley is definitely the facial animation,” says Crytek US engine business development manager Sean Tracy. “That’s the thing that breaks more often than anything, is the faces of the characters.”

A player’s sense of unease tends to jump from non-existent to extreme when a character’s face itself breaks: when, say, a glitch causes a character to clench their teeth in an unnaturally horrifying open-mouthed smile while they’re supposed to be speaking lines of dialogue. (This has actually happened to me in a big-budget game, though it wasn’t Ryse.)

These breakdowns can be the result of having too few bones from a model’s facial skeleton available to influence each vertex (the points that define the edges and shape of a rendered object). The fewer bone influences per vertex, the harder it is to accurately mold complex layered surfaces like skin. Four influences per vertex is the number animators typically use, but Crytek has doubled that, developing what it calls “eight weight skinning”.

“When you have a really dense face, it’s a little bit tricky to only have four bones influence a single vert because you can’t do the folds, you can’t get things around the nose or around the mouth deforming the way you would really expect it to deform,” Tracy says.
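The technique Tracy is describing is usually called linear blend skinning: each vertex’s final position is a weighted average of where its influencing bones would carry it. This is a minimal sketch of that idea, not Crytek’s implementation; the bone transforms and weights here are made-up illustration values.

```python
import numpy as np

def skin_vertex(rest_pos, bone_transforms, bone_indices, weights):
    """Linear blend skinning: the deformed position is the weighted sum
    of each influencing bone's transform applied to the rest position.
    Allowing 8 influences per vertex instead of 4 gives denser regions
    (lips, nose folds) more bones to average over, so they deform
    more smoothly."""
    pos = np.append(rest_pos, 1.0)  # homogeneous coordinate
    deformed = np.zeros(3)
    for idx, w in zip(bone_indices, weights):
        deformed += w * (bone_transforms[idx] @ pos)[:3]
    return deformed

# Two hypothetical bones: one stays put, one translates +1 along x.
identity = np.eye(4)
shift_x = np.eye(4)
shift_x[0, 3] = 1.0
transforms = [identity, shift_x]

# A vertex weighted half-and-half between them moves halfway.
v = skin_vertex(np.array([0.0, 0.0, 0.0]), transforms, [0, 1], [0.5, 0.5])
# v is [0.5, 0.0, 0.0]
```

With only a handful of influences available, a vertex caught between fast-moving jaw and cheek bones can land somewhere no real face would go, which is exactly the kind of breakage Tracy describes around the mouth.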

Crytek has been working towards perfecting realistically rendered faces for years in games like Ryse and Crysis 3 (right).

Eight-weight skinning is only part of Crytek’s realistic face equation. In addition to motion-capture, which the team did with the help of an outside effects house, the next step is using corrective blend targets. Think of these as a kind of composite crafted by the engine, which chooses from a library of facial models based on the current animation of a character’s face.

That library of “morph targets” is how animations were typically done before tech advancements changed the game.

“Basically you would have maybe 90 or 100 models of this face in different sort of shapes, so he might be saying ‘O’ or ‘Yea’ or whatever those different phonemes are that we want from the lips,” Tracy says. “So in the past you would actually just blend in different morph targets depending on what he’s saying.”

Now that more primitive process is coupled with the performance capture data.

“When we’re doing a certain bone animation — for example when [protagonist Marius Titus] is screaming, we’ll actually blend in a sort of screaming morph target during the bone animation. So what happens is you get a mix of the morph target, plus this bone animation,” Tracy says.
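Structurally, that mix is simple to state: run the bone-driven skinning first, then layer the corrective target’s offsets on top at some weight. A minimal sketch under that assumption, with invented vertex and offset values; it is not Crytek’s pipeline, just the arithmetic of the combination Tracy describes:

```python
import numpy as np

# Hypothetical mouth-corner vertices after bone animation alone,
# over-stretched by the skinning during a scream.
skinned = np.array([[0.40, 0.00], [0.60, 0.00]])

# A "screaming" corrective target, stored as offsets that pull the
# stretched corners back toward a sculpted extreme shape.
scream_offsets = np.array([[-0.05, 0.02], [0.05, 0.02]])

def apply_corrective(skinned, offsets, weight):
    # Final vertex = bone-skinned position + weight * corrective offset.
    # The weight is blended in and out alongside the bone animation.
    return skinned + weight * offsets

final = apply_corrective(skinned, scream_offsets, 1.0)
# final is [[0.35, 0.02], [0.65, 0.02]]
```

At weight 0 the bones act alone; ramping the weight toward 1 during the scream folds the sculpted shape into the motion-captured performance.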

That may all seem pretty technical, but the result is a face free of any unnatural mathematical tearing at its seams — sort of an animation equivalent of using Photoshop’s healing brush.

“That’s why these are corrective,” Tracy says. “[We’re] sort of fixing the mouth so it doesn’t get completely torn apart, because typically in games you do lose some control of the vertices, especially on the outer edges, so [Marius’] mouth might look way too wide or something.”

Even with all that technology, there’s no one-step solution for effectively combining bone animations and corrective blend targets.

“There’s not a lot of magic in terms of technology for the facial system. That’s a lot of really hard work by a lot of artists in Ryse.”