Hi! I’m Matt Wilde, an old man from the North of England who has worked in visual effects, lighting, and rendering for games since the last century. Most recently, I worked on Variable State’s Virginia. Previously I was responsible for blood, magic, and urine in games as diverse as The Lord of the Rings: Aragorn’s Quest (magic/blood), The House of the Dead: Overkill (blood/urine), and Dancing with the Stars: The Official Game (all of the above). Now I’m contributing VFX and rendering to In the Valley of Gods at Campo Santo.

Putting together our announcement trailer provided plenty of challenges, but one thing I spent a fair chunk of time on didn’t actually make the final cut: a scene where Zora and Rashida wade through an ancient flooded passageway.

The starting point was this thumbnail sketch from art director Claire Hummel:

To bring the scene to life, we’d need nice-looking water, which wouldn’t be convincing if it didn’t react to the motion of the characters and surrounding geometry. A game that does this well is Resident Evil 7 (especially if you’re a fan of floating corpses, like I am).



To this end, graphics programmer Pete Demoreuille (who apparently does exist even though he doesn’t have a Twitter profile) created a GPU-based simulation using a “shallow-water” approximation. It’s a little more accurate than traditional video game techniques, as it accounts for the water’s depth and computes its horizontal velocity along with its height. For collision with the characters and the world, a “signed distance field” can be precomputed for the static environment, and characters are added per frame by attaching primitives (capsules, in this case) to bones in their rigs. Got it?
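To give a feel for what a shallow-water approximation involves, here’s a minimal sketch of one simulation step. This is not Pete’s code (his runs on the GPU); it’s an illustrative CPU version, with made-up constants, showing the two things the paragraph above mentions: the horizontal velocity is tracked alongside the height, and the volume moved each step depends on the water’s depth.

```python
import numpy as np

def shallow_water_step(h, vx, vy, dt=0.05, dx=1.0, g=9.8, damping=0.998):
    """One explicit update of a height-field shallow-water approximation.

    h      -- water height above the bed (2D array)
    vx, vy -- horizontal velocity components (2D arrays)
    Uses periodic boundaries (np.roll) purely for brevity.
    """
    # Accelerate velocity down the height gradient (pressure term).
    dhdx = (np.roll(h, -1, axis=1) - np.roll(h, 1, axis=1)) / (2 * dx)
    dhdy = (np.roll(h, -1, axis=0) - np.roll(h, 1, axis=0)) / (2 * dx)
    vx = (vx - g * dt * dhdx) * damping
    vy = (vy - g * dt * dhdy) * damping

    # Move volume by the divergence of (depth * velocity): deeper water
    # carries more volume, which is what distinguishes this from a plain
    # wave-equation heightfield.
    flux_x = h * vx
    flux_y = h * vy
    div = (np.roll(flux_x, -1, axis=1) - np.roll(flux_x, 1, axis=1)) / (2 * dx) \
        + (np.roll(flux_y, -1, axis=0) - np.roll(flux_y, 1, axis=0)) / (2 * dx)
    h = h - dt * div
    return h, vx, vy
```

A nice property of moving volume via a flux like this is that the total amount of water is conserved; a bump in the surface spreads outward as a wave instead of just fading away.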

The end result is a number of dynamic textures which are fed into the shader for the water surface’s height, normal, velocity, and distance from a blocking object. Timo Kellomäki’s work on water simulation in games is a great reference.
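The capsules used for the characters are handy because their signed distance has a simple closed form: distance to a line segment, minus the radius. As a sketch (again, not the project’s actual code, and pure Python rather than shader code):

```python
import math

def capsule_distance(p, a, b, radius):
    """Signed distance from point p to a capsule whose core segment runs
    from a to b, with the given radius. Negative inside, positive outside.
    Points are 3-tuples."""
    ab = tuple(b[i] - a[i] for i in range(3))
    ap = tuple(p[i] - a[i] for i in range(3))
    ab_len2 = sum(c * c for c in ab)
    # Parameter of the closest point on the segment, clamped to [0, 1].
    t = max(0.0, min(1.0, sum(ap[i] * ab[i] for i in range(3)) / ab_len2))
    closest = tuple(a[i] + t * ab[i] for i in range(3))
    dist = math.sqrt(sum((p[i] - closest[i]) ** 2 for i in range(3)))
    return dist - radius
```

Evaluating a handful of these per frame (one per relevant bone) and taking the minimum against the precomputed environment field gives the “distance from a blocking object” texture mentioned above.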

With these at my disposal, I set about making an actual shader, starting with a simple flow-mapped texture, the flow determined by the simulation. The output is brightened depending on factors like surface normal and velocity. The footage below was captured right out of the Unity editor and was immediately fun to play with. Imagine the capsule is a rubber duck, like I did. For about a week.
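For anyone unfamiliar with flow mapping: the trick is to scroll the texture coordinates along the (in this case, simulated) velocity, reset periodically before the distortion gets ugly, and hide the reset by cross-fading between two copies sampled half a cycle apart. A minimal sketch of the coordinate math, with an illustrative cycle length:

```python
def flow_uv(uv, velocity, time, cycle=2.0):
    """Two phase-offset UV sets for flow mapping, plus a blend weight.

    Each phase scrolls the coordinates along the velocity, then snaps
    back to the start of the cycle; blending the two texture samples
    with a triangle wave hides the snap.
    """
    phase0 = (time / cycle) % 1.0
    phase1 = (phase0 + 0.5) % 1.0
    uv0 = (uv[0] - velocity[0] * phase0,
           uv[1] - velocity[1] * phase0)
    uv1 = (uv[0] - velocity[0] * phase1 + 0.5,  # extra offset decorrelates the samples
           uv[1] - velocity[1] * phase1)
    # Weight of the second sample peaks exactly when the first one resets.
    blend = abs(2.0 * phase0 - 1.0)
    return uv0, uv1, blend
```

In a shader this runs per pixel with the velocity read from the simulation texture; here it’s plain Python just to show the bookkeeping.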



I gradually built this into a more watery-looking shader with the addition of normal mapping, depth-based transparency, caustic lighting effects, and probably some other things.

By this time the passage scene contained some first-pass environment modelling and character animation, so I could try the shader out in situ. But first, Claire produced this handy style guide broken down into layers.



Isolating each element of the material was really useful in getting the final combined effect to work as we hoped it would. This is how it looked with the breakdown recreated in the shader:



If you’ve worked with Unity shaders, you may appreciate that getting shadows to project onto a translucent surface is quite challenging. But I think it was worth the effort to enable the subtly visible geometry under the surface, fogged and blurred by depth.



With the characters and colliders added, the scene was as complete as it was ever going to get. The environment, character models, and animation would all be updated in time, and I had plans to add particle splashes, water dripping from the ceiling, and a way to make the characters appear to get dynamically wet. But then, the devastating news.

The shot had been cut from the trailer.

Not one to take this kind of thing badly, I quickly brushed it off, and no more than a few months and a Balinese yoga retreat later I was eagerly anticipating my next challenge. Dust motes? Oh no, that’s great. Bring it on. I love dust.

