If you’re a filmmaker trying to bring fantastic characters to life with the most realistic, lifelike performances possible, you’re probably going to want to talk to Joe Letteri. The longtime visual effects supervisor has been working with computer-generated characters going all the way back to James Cameron’s The Abyss. As part of WETA Digital, he helped the director bring the Na’vi to life in 2009’s Avatar. Peruse Letteri’s filmography, and it’s hard to find a film he worked on that didn’t break significant new ground in one form or another.

Over the past six years, he’s been part of the Planet of the Apes trilogy, which hasn’t just used CG characters to surprise and awe audiences, but to carry the increasingly complex emotional weight of the films themselves. The latest installment, War for the Planet of the Apes, amps up the story (and the special effects) even further, with Andy Serkis’ Caesar heading out on a revenge mission that takes him through extreme snow and other conditions that would have been impossible to create just a few years ago.

I jumped on the phone to chat with the affable Letteri about working with director Matt Reeves, the challenges of 65mm film, the evolution and iteration of the performance-capture process, and how to create a CG forest that’s so realistic you can’t even tell that it’s an effect at all.

Making Caesar break

You worked with Matt Reeves on the last film, Dawn of the Planet of the Apes, but this film pushes the effects in impressive new directions. When you sat down to talk about War, what was his creative mandate?

Well, I think what Matt was after this time was more of that Exodus kind of feeling. This was really a story where he wanted to take Caesar, and all of us, beyond where we'd been comfortable before. Going all the way back to Rise, this is a story about a character who straddles two worlds. Caesar grew up in a human household, and pretty much thought he was human, until the world intruded and said, "Now you can no longer live with humans." They sent him off to the ape enclosure, and suddenly he had to find this whole other identity.

“He wanted to take Caesar beyond where we’d been comfortable before.”

But through the first two films, when humans started growing alarmed by the conflict — the fear of what could be happening because of the apes gaining intelligence — Caesar always tried to see both sides of it. He always just tried to bring peace. In the third film, Matt wanted to go beyond that. He wanted Caesar to finally break, to experience that rage he's been trying to keep under control the whole time.

So he goes to some real emotional depths in this one to find out who he is, and what he's got to do to keep his tribe alive. And that was really the heart of it. It was more about the performance, and that arc. And then everything we have to do technically comes out of that: the fact that Matt wanted to go even deeper, out away from civilization, shooting in the wilds of Canada, and having these big snowscapes and these big vistas; places where doing this kind of highly technical work is hard. We wanted to push beyond all that to tell this story.

Performance Capture 101

The core technique in building these creatures is recording the performance, and while everybody has seen pictures of actors in crazy suits, it’s easy to breeze by what’s actually going on. There’s a beautiful shot where Caesar and a few other apes are riding across the beach on horseback. Can you walk us through the creation of that shot?

For a shot like that, the actors are riding the horses. Normally when you're doing something really performance-driven, we do performance capture, which is putting up a lot of specialized cameras all around the set to record the motions from different angles. The actors are wearing a helmet with a head rig and a little camera mounted in front of their face. We use that information to construct the body motions of the apes and the motions of the apes' faces. And then we have to track the horses in place, because everything we do has to fit into the three-dimensional real world that's happening in front of the camera.

So we need to know exactly where those horses are, where every limb is for every frame, because we have to reconstruct that to be able to paint the actors out, and put [the apes] on top of the horses. Some of it is done with hand painting. Some of it is actually done with CGI horses that are either wholly or partly reconstructed to fit under the apes, because you've got different body types. Apes and humans are close, but not close enough. So there's a lot we have to reconstruct to make that believable.
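Knowing "where every limb is for every frame" comes down to triangulation: if two or more calibrated cameras see the same marker, its position is where their sight lines intersect. Here is a toy 2D illustration of that geometric idea (my own simplification, not Weta's actual pipeline, which works in 3D with many cameras and a least-squares solve):

```python
import math

def triangulate(cam_a, angle_a, cam_b, angle_b):
    """Intersect two sight lines in 2D: each camera sits at a known
    position and reports the bearing angle to the marker it sees."""
    (ax, ay), (bx, by) = cam_a, cam_b
    dax, day = math.cos(angle_a), math.sin(angle_a)
    dbx, dby = math.cos(angle_b), math.sin(angle_b)
    # Solve cam_a + t * dir_a = cam_b + s * dir_b for t (2x2 system)
    denom = dax * dby - day * dbx
    if abs(denom) < 1e-12:
        raise ValueError("sight lines are parallel")
    t = ((bx - ax) * dby - (by - ay) * dbx) / denom
    return (ax + t * dax, ay + t * day)

# Two cameras on the beach, both sighting a marker at (5, 5)
marker = triangulate((0.0, 0.0), math.atan2(5, 5),
                     (10.0, 0.0), math.atan2(5, -5))
```

This is also why wireless dropouts matter so much on location: the more camera views survive for a given frame, the better-constrained the reconstruction.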

And we have to compute all the lighting out there in the real world. Whether it's natural sunlight coming in through clouds or bouncing off the sand or off the water, or if there's any artificial light that's added. That sequence was shot with natural light. But we have to account for all of that just like you do in cinematography, match all the camera moves, and then we start this intense round of computation to run everything through the computer and make sure it all does the right thing. That the fur simulates the right way. That the light bouncing all around in that virtual world matches up to the light bouncing around in the real world, so we can composite all the elements together. And once you do all that, you should come out with your shot in the end.
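That final "composite all the elements together" step is, at its core, per-pixel alpha blending. A minimal sketch of the standard Porter-Duff "over" operator (a production compositing pipeline works on premultiplied, high-dynamic-range images with separate mattes per element, but the blend is the same idea):

```python
def over(fg, alpha, bg):
    """Porter-Duff 'over': blend a straight-alpha foreground pixel
    onto a background pixel, channel by channel."""
    return tuple(f * alpha + b * (1.0 - alpha) for f, b in zip(fg, bg))

# Half-transparent red over blue lands halfway between the two
pixel = over((1.0, 0.0, 0.0), 0.5, (0.0, 0.0, 1.0))
```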

And what is Matt working with on set? Is he looking at a previsualization of what it will look like with the apes composited in, or is he just working with the actors?

Just worried about the actors. The thing that's really good about these Apes films is that the chimps and humans are more or less the same size, so we can frame them as they need to be. You don't have to do a visualization to imagine, “Well, what is this going to look like?” Because it's not a 25-foot-tall character. Also, the camera operators can concentrate on the movement of the actors, because what they shoot is exactly what's going to be in the film for framing. It's also great for the editors, because they have exactly what the actors are doing to work from for their cut.

How has the performance capture process evolved as you’ve been working on these films?

The big breakthrough came on Rise of the Planet of the Apes when we figured out how to do this live, and have it coexist with the rest of the motion picture photography. Up until then, performance capture tended to be an after-the-fact process. Andy would go out and do his scenes on set — say, when we were doing Gollum — and whatever selects we'd like, he'd go back and do them in a separate volume [a motion capture stage]. So the ability to capture those performances simultaneously with all the other actors was something we really pushed for on Rise.

Once you have that freedom, you want to take it further and further. And that's where Matt really pushed this. On Dawn, out into the wilderness, out into the rain, a little farther from civilization. And then on War: really far out from civilization. Harsh, wet, cold conditions. But you're capturing subtle, nuanced performances, so the gear has to perform. There's a whole crew there to support gathering all that information.

So there are technical breakthroughs, but they're all behind the scenes. They have to do with getting better ways of connecting the systems and calibrating them, and putting wireless together to make sure the data all comes through without any dropouts. But the fundamental aspect of how it works is still basically the same. You're just trying to record every possible angle of what the actors are doing so you can reconstruct it later.

Harsh weather ahead

The fur on these characters isn’t put together by hand for every frame. You’re using intense computer simulations that determine how it moves. But you’re also putting Caesar in extreme conditions like snow and rain. How does that complicate the process?

“Fur is complicated.”

Fur is complicated. You've got millions and millions of little fibers that all have to react to gravity, and to themselves, and to all the lighting. There are lots of simulations going on. So when they get wet and water runs off the fur, that adds another layer of complexity. When they're rolling around in the snow, or snow's accumulating, it gets even more complicated, because snow packs on the fur. You think you understand the physics — now it gets compounded by adding these icy packs that are constantly evolving and flaking and falling off.

We have to run a whole extra level of simulation to make all of that work. Plus snow has a way of affecting the lighting on the fur, and that all has to get computed as well. So yes, we do a lot of physical simulations, and also light transport, to arrive at both the correct motion and the correct photography.
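Weta's fur solver isn't public, but the basic mechanics of a constrained fiber under gravity can be sketched with position-based dynamics: integrate each point forward, then repeatedly project segment lengths back to their rest length. Everything below is my own toy model; a production solver adds bending stiffness, fiber-fiber collision, and the extra wet and snow mass Letteri describes:

```python
import math

def simulate_strand(n=5, seg=1.0, steps=400, dt=0.05):
    """One hair fiber as a pinned chain of points: Verlet integration
    under gravity, then repeated projection back to the rest segment
    length (position-based dynamics)."""
    pts = [(i * seg, 0.0) for i in range(n)]    # strand starts horizontal
    prev = list(pts)
    for _ in range(steps):
        nxt = [pts[0]]                           # root point is pinned
        for i in range(1, n):
            x, y = pts[i]
            px, py = prev[i]
            vx, vy = (x - px) * 0.98, (y - py) * 0.98   # mild damping
            nxt.append((x + vx, y + vy - 9.8 * dt * dt))
        prev, pts = pts, nxt
        for _ in range(5):                       # relax segment lengths
            for i in range(1, n):
                (x0, y0), (x1, y1) = pts[i - 1], pts[i]
                dx, dy = x1 - x0, y1 - y0
                d = math.hypot(dx, dy) or 1e-9
                corr = (d - seg) / d
                if i == 1:                       # never move the pinned root
                    pts[1] = (x1 - dx * corr, y1 - dy * corr)
                else:
                    pts[i - 1] = (x0 + dx * 0.5 * corr, y0 + dy * 0.5 * corr)
                    pts[i] = (x1 - dx * 0.5 * corr, y1 - dy * 0.5 * corr)
    return pts

strand = simulate_strand()    # after settling, the strand hangs below the root
```

Now scale this from one strand to millions, make each step also account for wet clumping and packed snow, and it's clear why an "extra level of simulation" is a serious computational cost.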

Did that require a new round of software development?

A few years ago, we started writing our own renderer, which we call Manuka. It's the software that computes all the lighting in the scene, and all the surface characteristics, and bounces all the light around, and computes what gets to the sensor, and what the picture should look like. So it's the digital version of photographing your scene.
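Manuka itself is proprietary, but "bounces all the light around, and computes what gets to the sensor" describes the probabilistic core of any path tracer: light keeps bouncing until it's absorbed, and you average over many random paths. A toy sketch of that random walk (my own illustration): on a surface with albedo `a`, each hit reflects with probability `a`, so the expected number of bounces is `a / (1 - a)`.

```python
import random

def average_bounces(albedo, photons=100_000, rng_seed=1):
    """Average number of bounces a photon survives before absorption,
    when every surface hit reflects with probability `albedo`."""
    rng = random.Random(rng_seed)
    total = 0
    for _ in range(photons):
        while rng.random() < albedo:   # reflected: keep bouncing
            total += 1
    return total / photons

estimate = average_bounces(0.5)        # analytic answer: 0.5 / (1 - 0.5) = 1.0
```

A real renderer tracks directions, spectra, and geometry at every bounce; the hard part in a case like this is doing that efficiently in scenes filled with millions of fur fibers.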

We broke it out for the first time on Dawn, but because it was so new, we didn't throw the close-ups at it. We only used it for the background characters, because it was good at handling big scenes with large amounts of fur in them. But on this film, we've finally pushed it far enough, and gotten it robust enough, to do all the close-ups with it. So I think you'll see the difference when you look at the close-ups in this film, vs. what you saw earlier in Dawn.

There’s a shot with Maurice looking at a young girl (Amiah Miller) that was so realistic, it actually bumped me out of the movie for a moment, just to gut-check myself. But there’s also a lot of nuanced performance work for Caesar. How much of that is Andy, and how much of that falls to the animators?

From the beginning, going back to Rise, we knew the apes would be completely digital. We didn't look at doing anything with prosthetics, because the whole point of the story was, they had to look realistic, so you would believe it when they start to evolve. So the combination has always been, you've got the actor's performance, but what you see on the screen is fully digital. The emotional drama is there. It's given by Andy; it's given by the other actors.

“The emotional drama is there. It’s given by Andy.”

Where we would step in is, things that can't be captured, or that need to be adjusted because of differences in the apes. As much as the actors train to do the proper ape motion and ape behavior, there are still differences. Human legs are longer than ape legs, and arms are shorter. So there are always slight adjustments. You lower the hips a little so it works with the longer legs, which means the shoulders are in a different position, but you've got to get the head right back to where it was, so the eyeline works and the attitude works. So as animators, those are the kinds of things we're trying to do so you don't think about the differences between humans and chimps.

Working with 65mm

Matt Reeves has said he wanted this film to feel like an old epic, or a David Lean film. One of the ways he got that feel was shooting on 65mm film. How did that impact your work on the visual effects side?

It's like anything worth doing: there is always extra work involved. It's a great format. It's widescreen, but it's got shallow depth of field. Which is beautiful for the photography. It's great for really getting that separation between characters, and between foreground and background. It's got a tremendous filmic quality, but when you're dealing with having to paint actors out and replace them with digital characters, the more you have to work with, the better. Technology always wants you to have the simpler thing, because that makes life easier. But creativity wants you to have the more complex thing. And creativity always wins. It's like, we'll figure out the technology if that's what Matt wants the movie to look like, and that's definitely what he had in mind.

“Technology always wants you to have the simpler thing, because that makes life easier. But creativity wants you to have the more complex thing.”

So you’re saying shallow depth of field makes it harder to paint an actor out of a shot because they might be out of focus?

It does. You were saying before: actors on horses. So say you've got a whole bunch of actors on horses, and you've got to paint out the actors' legs to put the chimp legs on the horses. Now the actors' legs are longer, so there's always some of the horse that you've got to paint back in. So if this is in the background, and that little bit is out of focus — well, how do you know that you've got the track working for the digital horse body you're putting in? Because they have to match up hair to hair. So you can't see it by looking at the frame; you have to play it back in motion and just use your judgment.

You would think that it being slightly soft means it's more forgiving, but it's actually not. Because you can still perceive if anything is sliding, you just don't have anything to lock into.
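The shallow depth of field Letteri keeps fighting is a direct consequence of the longer focal lengths a large format needs for the same framing. A thin-lens circle-of-confusion sketch (the numbers are illustrative assumptions of mine; 80 mm vs. 40 mm roughly matches the field-of-view difference between 65 mm and Super 35):

```python
def blur_diameter(f_mm, f_stop, focus_mm, subject_mm):
    """Thin-lens circle of confusion on the sensor, in millimetres:
    c = f^2 / (N * (S1 - f)) * |S2 - S1| / S2."""
    return (f_mm * f_mm / (f_stop * (focus_mm - f_mm))
            * abs(subject_mm - focus_mm) / subject_mm)

# Same framing, same stop: focus at 3 m, a background rider at 10 m
c_65mm = blur_diameter(80.0, 2.8, 3000.0, 10000.0)   # 65 mm format, 80 mm lens
c_s35  = blur_diameter(40.0, 2.8, 3000.0, 10000.0)   # Super 35, 40 mm lens
```

Even after scaling for the larger frame, the 65 mm background is roughly twice as defocused, and that softer plate is exactly what the match-move artists then have to track "hair to hair."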

Simulating mother nature

What about the environments? Was anything there a particular challenge?

Yeah, one of the things we did on this film was, we grew the pine forest up behind the fortress [where the climax of the film occurs]. In the past, we've built lots of jungles, and trees have incredible variety. The modelers have to make each tree by hand, and it's a really time-consuming process. And then you art direct them into place to try to get the whole layout to look natural.

“We sprinkle a bunch of seeds, and run a simulation that lets the trees grow for a few hundred years.”

What happens with trees is, they don't grow isolated. They grow in groups, and the same species of tree will give you an infinite number of variations, depending on how they grow. So we've written a system we call Totara. Now, rather than growing trees one by one, we grow them in groups, so they compete for resources. The older ones overtake the smaller ones. Sunlight and shadow determine which side might get more branches, which side doesn't. And the end result is, we build the terrain, and then we sprinkle a bunch of seeds around, and then we run a simulation that lets the trees grow for a few hundred years. And when you look at that, you get something that immediately looks natural, as opposed to something that looks like you built a forest from a lot of individual trees. Hopefully that's something people don't notice, because it should feel completely natural, but it's a whole new way of approaching environments.
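That growth-competition loop can be caricatured in a few lines: scatter seeds, let each tree's growth rate fall with the number of taller neighbors shading it, and cull overtopped saplings. This is my own toy model, not Totara; the real system grows full branching structures in 3D, with light driving the shaping.

```python
import random

def grow_forest(n_seeds=60, width=100.0, years=300, radius=4.0, rng_seed=7):
    """Toy stand-growth simulation: scattered seeds compete for light,
    and heavily shaded saplings die off."""
    rng = random.Random(rng_seed)
    # Sprinkle seeds along a 1D strip with slightly varied starting heights
    trees = [{"x": rng.uniform(0.0, width), "h": rng.uniform(0.1, 0.5)}
             for _ in range(n_seeds)]

    def taller_neighbours(t, stand):
        return sum(1 for o in stand if o is not t
                   and abs(o["x"] - t["x"]) < radius and o["h"] > t["h"])

    for _ in range(years):
        shades = [taller_neighbours(t, trees) for t in trees]
        for t, s in zip(trees, shades):
            t["h"] += 0.3 / (1.0 + s)          # shaded trees grow more slowly
        # Saplings overtopped by three or more taller neighbours don't make it
        trees = [t for t in trees
                 if t["h"] >= 10.0 or taller_neighbours(t, trees) < 3]
    return trees

forest = grow_forest()   # fewer, unevenly sized survivors: a natural-looking stand
```

The point of the exercise is the one Letteri makes: variation falls out of the competition for free, instead of being art-directed in tree by tree.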