Ok so, first the bad news. We’re not ready to show gameplay stuff yet. It’s taking longer than expected, and since it’s a key part of the upcoming crowdfunding campaign, we’re putting a lot of effort into it. But the good news is that we can distract you with pretty pictures.

Oh you weren’t distracted… bummer. Anyway, besides the gameplay, people have been asking us about our volumetric lighting and atmosphere. So we’re going to cover that topic right now.

Why did we choose to elaborately simulate the atmosphere instead of just using sky domes? Two reasons. First, the game has a day and night cycle, so some animation would have been required anyway. Second, we wanted to light the game world with the atmosphere and really make the atmosphere a part of it. We knew we couldn’t spend time lighting the game world manually, so this is essentially a procedural approach to creating variable moods and atmospheres.

We made a quick and hasty video showing some of the features of the tech, located at the end of the article. But before you scroll to it, I’ll let Mikko get serious with the technical stuff.

– Hey Mikko.

– What?

– Take it away.

– What?

– You know, the article…

– What article?

– The volumetric stuff. You know… the volumetric stuff!

– Aaaaaaa, well why didn’t you say so.

The atmosphere in Reset is divided into three distinct parts. Outermost is the clear sky that is free of any weather phenomena. Only the direction of the sun affects how it looks. The colors in the sky are determined by the scattering of light by the particles in the atmosphere.

Rayleigh scattering accounts for particles that are smaller than the wavelength of light, such as gas molecules. Mie scattering takes into account bigger particles such as water vapor and dust. As mentioned in our post In Praxis: Lighting, this is implemented using precomputed lookup tables [1], making it extremely inexpensive.
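To give a feel for the math behind those lookup tables, here is a small Python sketch of the standard phase functions involved. These are the textbook Rayleigh phase function and the Henyey-Greenstein approximation that is commonly used in place of full Mie scattering; the actual constants and shader code in Reset are assumptions on our readers’ part, not shown here.

```python
import math

def rayleigh_phase(cos_theta):
    # Rayleigh phase function: angular distribution of light scattered
    # by particles much smaller than the wavelength (air molecules).
    return 3.0 / (16.0 * math.pi) * (1.0 + cos_theta * cos_theta)

def hg_phase(cos_theta, g=0.76):
    # Henyey-Greenstein phase function, a common stand-in for Mie
    # scattering by larger particles (water vapor, dust). The parameter
    # g controls how strongly light is scattered forward; g=0.76 is an
    # illustrative value, not a constant from our engine.
    denom = 1.0 + g * g - 2.0 * g * cos_theta
    return (1.0 - g * g) / (4.0 * math.pi * denom ** 1.5)
```

Both functions integrate to one over the sphere of directions, which is what lets a precomputed table of them conserve energy.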

Below that is a layer of clouds. The shapes of the clouds are determined by a procedural function that combines 12 octaves of Perlin noise with thresholding and other math.

We use a tiling 3D texture which has 6 octaves of noise baked into it and sample it at two different scales to get a total of 12. The sky shader marches through that volume along the view rays. The clouds are lit by the sun and the sky above them.
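The octave bookkeeping behind that two-scale trick can be sketched in a few lines. The sketch below uses a tiny tiling 1D value-noise lookup instead of a real 3D Perlin texture, and the 64x scale factor, the 1/64 amplitude and the threshold value are illustrative assumptions; the point is only how two samples of a 6-octave texture combine into 12 octaves with amplitudes that keep halving.

```python
import random

random.seed(42)

# A tiny stand-in for the tiling noise texture: 1D value noise with
# linear filtering, just to illustrate the octave math.
TEX_SIZE = 256
texture = [random.random() for _ in range(TEX_SIZE)]

def sample(x):
    # Tiling lookup with linear filtering, like sampling a 3D texture.
    x = x % TEX_SIZE
    i = int(x)
    f = x - i
    return texture[i] * (1.0 - f) + texture[(i + 1) % TEX_SIZE] * f

def fbm6(x):
    # Six octaves baked into one lookup: each octave doubles the
    # frequency and halves the amplitude (octaves 1..6).
    total, amp, freq = 0.0, 0.5, 1.0
    for _ in range(6):
        total += amp * sample(x * freq)
        amp *= 0.5
        freq *= 2.0
    return total

def cloud_density(x, threshold=0.55):
    # A second sample of the same texture at 64x the frequency, scaled
    # down by 1/64, contributes octaves 7..12 so amplitudes keep
    # halving across all 12 octaves.
    n = fbm6(x) + fbm6(x * 64.0) / 64.0
    # Thresholding carves distinct cloud shapes out of the smooth noise.
    return max(0.0, n - threshold)
```

The same arithmetic carries over to the real 3D case: octave k has frequency 2^(k-1) and amplitude 1/2^k, so sampling the baked texture at 64x frequency and 1/64 amplitude lines octaves 7 through 12 up exactly behind octaves 1 through 6.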

Fully accurate illumination would require taking into account how much light reaches each point inside a cloud from every possible direction, but that is still a bit too expensive for real-time graphics, so we approximate. We take the average radiance of the sky straight above and propagate it down through the cloud as if it were a directional light. In addition to that we have the direct light from the sun, which is naturally handled as a directional light as well. Both lights take into account multiple forward scattering, similar to [2]. Sunlight uses something similar to Opacity Shadow Maps [3], while skylight approximates the amount of cloud between the point being shaded and the light source using a dynamically updated height map of the cloud layer. We warp both maps in creative ways to get them to cover the entire sky all the way to the horizon with decent resolution.

Below the clouds is where all weather effects take place. Pure air and variable amounts of fog (Rayleigh and Mie scattering respectively) receive light from the sun through holes in the cloud cover and also from all over the environment in the form of dynamic directional ambient lighting, as described in In Praxis: Lighting.

A 3D texture is warped to fill the view frustum and dynamically updated with the density of air and fog at each texel. Each texel of the resulting volume is illuminated independently into a second 3D texture. Finally the illumination is accumulated into a third 3D texture so that each texel contains the amount of light scattered towards the camera along that direction and up to that distance.

This is equivalent to ray marching, but due to the texture being warped to fit the view frustum, the implementation is as simple as iteratively summing each Z-slice of the 3D texture with the previous slice. A fourth 3D texture with the same mapping contains the amount of light reaching the camera after removing absorbed and out-scattered light. That’s a lot of 3D textures, but they have extremely low resolution. The scattering gets applied on top of the full resolution geometry by sampling the appropriate 3D textures at the screen position and depth of the pixel.
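The slice-by-slice accumulation described above can be sketched like this. The sketch treats one ray through the frustum-warped volume as two Python lists (per-slice in-scattered light and per-slice extinction); the step size and coefficients are illustrative assumptions, and on the GPU this runs once per texel across whole Z-slices rather than in a loop per ray.

```python
import math

def accumulate_scattering(scattered, extinction, step):
    # Front-to-back accumulation over the Z-slices of a frustum-warped
    # volume. Because the slices already follow the view rays, "ray
    # marching" reduces to summing each slice with the previous one.
    in_scatter = []   # light scattered toward the camera up to slice z
    transmit = []     # fraction of light surviving from slice z to camera
    total, T = 0.0, 1.0
    for L, sigma in zip(scattered, extinction):
        total += T * L * step          # this slice's contribution, attenuated
        T *= math.exp(-sigma * step)   # light absorbed/out-scattered in slice
        in_scatter.append(total)
        transmit.append(T)
    return in_scatter, transmit
```

The two output lists correspond to the third and fourth 3D textures: a full-resolution pixel just samples them at its screen position and depth to composite the scattering over the geometry.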

And here is the video. The tech is still work-in-progress with known issues; for example, the volume shadow resolution (aliasing) needs some work. The tech is designed to work well close to sea level, but in the video the camera is lifted almost to the cloud level to show you how the volume works. The tech begins to break down the higher the camera is lifted.

[1] Bruneton, E. and Neyret, F., Precomputed Atmospheric Scattering, Computer Graphics Forum, Volume 27, Issue 4, pages 1079–1086, June 2008.

[2] Harris, M. and Lastra, A., Real-Time Cloud Rendering, Computer Graphics Forum (Eurographics 2001 Proceedings), 20(3):76–84, September 2001.

[3] Kim, T.-Y. and Neumann, U., Opacity Shadow Maps, Proceedings of the 12th Eurographics Workshop on Rendering Techniques, pages 177–182, 2001.