One of those reasons: crazy cute fur ball stand-ins.

The Pokémon in Rob Letterman’s live-action Detective Pikachu are all computer-generated. When I watched the film, I felt there was something about how those CG characters – which still retain their cartoony feel from the original source material – seemed integrated into the live-action plates in ways that surpassed what had been done in other films typically described as ‘live-action/CG hybrids’.

Why was this, exactly? Well, I had a chat with MPC visual effects supervisor Peter Dionne about the entire process – from filming with stuffies and stand-ins, to on-set data capture, to the immense rotoscoping challenges, and to replicating the lighting on the CG characters themselves.

Dionne, who worked under production VFX supe Eric Nordby (who also hails from MPC and oversaw CG Pokémon work by MPC, Framestore and Image Engine), lays out below what he sees as the reasons behind the successful integration of so many digital characters into the live-action film.

Reason #1: Puppeteers, stuffies and stand-ins

Peter Dionne (visual effects supervisor, MPC): We were very fortunate before we went to camera that we had almost all of our characters designed and signed off on, not just as 2D concepts, but also quite far ahead in surfacing and lookdev. So we were able to create quite a bit of on-set reference that represented the final product in the form of stuffies, whether they were just static stuffies for position reference and lighting reference, or puppeteered stuffies.

We had a couple of puppeteers on set performing live with the actors. Our typical routine was, for a typical Pikachu shot, we would do the blocking with the puppeteer and block out Pikachu in the space with the actors. We had pre-recorded all of Ryan Reynolds’ dialogue, which had been well worked out prior to set, and there was a performer on set who could deliver his lines in his cadence. Then once we blocked out the scene and it was time to roll, we would usually do the first one or two takes with the puppet in camera, performing live with the actors.

Once the actors were comfortable, John Mathieson was comfortable, Rob Letterman was comfortable, we’d pull the puppeteers and puppets out and shoot a few takes clean. Now, there were a lot of times when those initial performances with the puppets in there were pretty valuable. So, for every setup we also shot extensive reference photography that could be used in the event that we needed to remove the puppeteers from the plates on a selected take, which we ended up doing quite a bit of.

For other characters, say where there are dozens and dozens of them in the streets, the main concern for us was just making sure that everyone had an awareness of the space that these Pokémon would fill. We’d pre-assign which Pokémon would go where and talk it through with the background performers. Often when we had large Pokémon that we wanted to inject into a scene, such as walking down the sidewalk, we had balls on the end of a long stick that we would stick around their waists.

For Pikachu, we had several stuffies and stand-ins. Because we spent so much time on location in London in the winter, and in Scotland in the mud, we had different levels of puppets and stuffies that we were willing to get in the muck. We had a hero one, which had really nice fur and glass eyes. We had another with really detailed surfacing that we used quite a bit in clean environments, but that was just static.

For the actual puppet, which got banged around quite a bit over the production, that one was made out of foam, but in the volume of Pikachu. Then whenever we used those ones, we also had a texture ball with Pikachu’s fur that we could swing out as reference, and then wrap it back up so it wouldn’t get dirty.

Reason #2: Shooting decisions

I thought it was a brilliant move by Rob Letterman and the DOP John Mathieson to choose to shoot the whole thing on film, to go with this really gritty film noir lighting design, and to shoot as much as possible on location. We had surprisingly few greenscreen sets for a film of this nature.

Rob wanted to shoot as much as possible on location in Scotland and in London, and even for the CG environments we were adding, he really pushed to keep them not too glorious, but much more realistic, with natural lighting.

The grain of film itself was, surprisingly, a blessing. The natural grain, lens artefacts and light response that you get, which we were able to incorporate into the final shots – it just elevates the image instantly. So it was a gift.

Reason #3: Capturing and replicating the on-set lighting

With every location and every lighting setup, we would do quite an extensive Lidar scan. Even if we were not going to rebuild the environment in CG, we still captured it in Lidar so that we would have a really good sense of the space. We also, using HDRI, would capture and measure all of the lights on set, often from different positions as well. Then, using that photography and the measured light, we’d rebuild the lighting in that environment for each set. Plus we had the conventional chrome and grey spheres for lighting reference, and then those texture spheres for all the characters, which had the same sorts of materials as each character.

https://www.instagram.com/p/Bw6yLfDhd_j/

And then back at MPC, for Pikachu, because he was going into 1,000 shots, we made sure that he was really put through his paces when we were building his asset and balancing the lookdev. So even before we put him in a single shot, we had maybe six different lighting environments that ran the gamut – daytime, sunny, nighttime, overcast, you name it. We really balanced how he and all of his materials responded to light. We ended up with something where we could throw Pikachu into any lighting environment, and he would deliver back a very consistent product for us.

Also, what we had done was take several very clean digital still photographs of the same thing that we were shooting with the film camera. That meant we were able to bring them in together and analyze the two side by side, just to see how they responded differently – the character or the set. And then in NUKE, we were really able to layer in our recipe of how much chromatic aberration, how much distortion, how the light blooms, and just get the grain perfect. All these little nuances – we were able to come up with a set recipe that we used across the board for the show.
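The actual recipe was built in NUKE and isn’t public, but the idea of layering lens artefacts over pristine CG is easy to illustrate. The sketch below is a toy NumPy stand-in (the function name, parameters and the simplistic grain model are mine, not MPC’s): it nudges the red and blue channels apart to fake chromatic aberration, then adds film-style grain.

```python
import numpy as np

def apply_lens_recipe(img, ca_shift=1, grain_sigma=0.02, seed=0):
    """Toy comp 'recipe': chromatic aberration + film grain.

    img: float RGB image, shape (H, W, 3), values in [0, 1].
    ca_shift: horizontal pixel offset between red and blue channels
              (a real comp would scale channels radially from the
              lens centre, not translate them uniformly).
    grain_sigma: std-dev of the additive grain (real film grain is
                 spatially correlated and per-channel; this is a
                 deliberately crude placeholder).
    """
    out = img.copy()
    # Chromatic aberration: push red and blue in opposite directions.
    out[..., 0] = np.roll(img[..., 0], ca_shift, axis=1)
    out[..., 2] = np.roll(img[..., 2], -ca_shift, axis=1)
    # Grain: additive Gaussian noise, clipped back to legal range.
    rng = np.random.default_rng(seed)
    out = out + rng.normal(0.0, grain_sigma, out.shape)
    return np.clip(out, 0.0, 1.0)

# A single bright column makes the channel split easy to see.
plate = np.zeros((4, 8, 3), dtype=np.float64)
plate[:, 4, :] = 1.0
degraded = apply_lens_recipe(plate, ca_shift=1, grain_sigma=0.0)
```

With `grain_sigma=0.0` the only change is the channel split: red ends up in column 5, blue in column 3, green stays put. Dialing a fixed set of such parameters once, then reusing it show-wide, is the "set recipe" idea Dionne describes.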

Reason #4: Roto and paint artists

In pre-production, we were strategizing with Eric Nordby about how we’d approach the vast crowd scenes, where we’d need to add 200 Pokémon onto a street. Now, there are ways we could have shot in layers and clusters with greenscreens behind them to make the integration easier, but the more we got into it, the more we realized that splitting things into too many elements to comp together would just kill the spirit of what we were ultimately trying to achieve, which was as much realism as possible.

So we just decided to bite the bullet and prepare to roll up our sleeves when it came to roto. Trust me, the team did a lot of amazing roto on this show to get that integration there, and a lot of comp work, and it proved to be the best path forward.

The thing with those street scenes, also, was that when we first started dropping the CG characters in, to make them stand out we would really try to emphasize some of their vibrancy, emphasize some of the lighting, and position them in prominent sight lines.

But Rob was the one who really pushed us to strip all that out and just stick them in there as if they were just any normal crowd member on the street, rather than trying to emphasize them. That really added to the depth and complexity in those big wide city shots, but it definitely killed our compositors and roto artists!