Hello LEGO Worlds fans! I’m Emilio, Rendering Programmer at TT. Last time I wrote about the procedural sky, and this time I’m going to talk about the lighting and occlusion in LEGO Worlds.



Global Lighting Summary

In the diary about the procedural sky, we covered the sky, fog and global illumination. In short, Worlds has two main sources of light: the sun during the day and the moon at night, while additional light coming from the sky dome is encoded in two structures: spherical harmonics and cubemaps.

Spherical harmonics encode time of day lighting
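To give a feel for the first structure: a handful of spherical harmonic coefficients per color channel is enough to capture very low-frequency sky lighting for a given time of day. The sketch below evaluates a 4-coefficient (band 0 and 1) SH vector in the direction of a surface normal; the function name and coefficient layout are hypothetical, not the engine's.

```python
# Standard band-0/band-1 SH basis constants.
SH_BASIS_L0 = 0.282095   # Y_0^0
SH_BASIS_L1 = 0.488603   # shared by Y_1^-1, Y_1^0, Y_1^1

def eval_sh_irradiance(coeffs, normal):
    """Evaluate a 4-coefficient SH vector in the direction of `normal`.

    `coeffs` would come from projecting the sky dome for the current
    time of day; `normal` is a unit vector (x, y, z).
    """
    x, y, z = normal
    return (coeffs[0] * SH_BASIS_L0
            + coeffs[1] * SH_BASIS_L1 * y
            + coeffs[2] * SH_BASIS_L1 * z
            + coeffs[3] * SH_BASIS_L1 * x)
```

Because there are only a few coefficients per channel, the whole sky's diffuse contribution is cheap to evaluate per pixel, which is why it pairs well with cubemaps for the sharper detail.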

Once we’ve lit the world, we need to make sure light doesn’t reach some surfaces (i.e. shadows), which is the opposite problem. For directional lights we have standard real-time shadows, which work quite well. The image below shows an evolution of what I’ve just described: in (1) we have an image lit with spherical harmonics, onto which we add cubemaps (2), then a directional light (3), and finally real-time shadows (4).

The Darkness

However, global lighting coming from the sky leaks into places where we don’t expect it. This is especially true inside caves, but the problem extends to overhangs, soft occlusion under trees, etc. Since we didn’t have a shadowing solution for it, we started working on one.

The initial prototype worked reasonably well. At chunk generation time (the chunk being the building block of Worlds) on the CPU, each brick queries its neighboring bricks and checks whether it is being occluded by them (by checking distances, etc.). This information is then stored in the terrain geometry. However, this approach had several big drawbacks.

a) Occlusion is computed on the CPU. We’re already quite pressed for CPU time (especially during chunk generation) and we didn’t want to add any more work.

b) We had to store this information in the mesh. Because each brick computes its own occlusion on the CPU, the result needs to be stored somewhere. In a scene of 20-30 million vertices (typical for Worlds), this adds roughly 20-30 MB. Not the end of the Worlds, but not ideal.

c) Occlusion doesn’t affect animated objects. We’ve calculated it for the bricks, but we need another mechanism to get the occlusion for the rest of the objects (characters, props, vehicles, etc.)

d) Checking neighboring bricks can become quite expensive; we need a way to check fewer of them.
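For illustration, the prototype amounted to something like the sketch below. All names and the radius are hypothetical, and the real check involved distances and other criteria; this is only the shape of the per-brick query.

```python
def brick_occlusion(bricks, pos, radius=2):
    """Fraction of nearby cells above `pos` that contain an occluding brick.

    `bricks` is assumed to be a set of occupied integer (x, y, z) positions.
    """
    x, y, z = pos
    total = hits = 0
    for dx in range(-radius, radius + 1):
        for dy in range(-radius, radius + 1):
            for dz in range(1, radius + 1):  # only cells above can occlude
                total += 1
                if (x + dx, y + dy, z + dz) in bricks:
                    hits += 1
    return hits / total  # this value was stored in the terrain geometry
```

Even in this toy form you can see the cost: every brick scans a cube of neighbors, and the result has to live somewhere in the mesh.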

The initial prototype (archive image)

The second iteration of this technology proved to be much more flexible. Since we are essentially trying to occlude light coming from all directions at a very large scale, a rough approximation using a top-down height map can be enough for our purposes. The reasoning is: given enough bricks, sampling their heights within a radius should make it possible to estimate a rough “percentage” of occlusion. Taking into account how much higher each brick is, or how far away it is from the given brick, can create a softer look.

Heightmap image (tweaked for viewing purposes)

Occlusion Illustration

In the image we’re trying to see how occluded the red brick is. There are 5 white bricks that occlude the red brick, one blue brick that is below it and doesn’t occlude it, and there are no bricks directly above the red one. The green bricks don’t participate because they’re not the topmost bricks, and the purple one is too far away (we’ve set a brick radius of 3). So a rough occlusion value would be 5/7 ≈ 0.714. This makes sense if you think about it: it means that 71.4% of the global light doesn’t reach the object, which matches the intuition you get from looking at the image. It’s an approximation and there are edge cases where it doesn’t work as well, but because of its large-scale nature the artifacts are generally hidden. This calculation is done as a post-processing pass after the global sky lighting has been applied. You can see the result in the images below (look closely at the windows in the pirate ship and the cave), plus the darkening buffer we create. The results are very coarse, as we use a tiny 128x128 heightmap texture for a very large area, but they are effective.
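The counting rule above fits in a couple of lines. This sketch uses a 1D cross-section to mirror the illustration (the game samples a 2D 128x128 heightmap); the function and parameter names are illustrative.

```python
def sky_occlusion(heights, cx, radius=3):
    """Fraction of columns within `radius` of column `cx` whose topmost
    brick sits higher than column `cx` itself."""
    h0 = heights[cx]
    lo, hi = max(0, cx - radius), min(len(heights) - 1, cx + radius)
    occluders = sum(1 for x in range(lo, hi + 1) if heights[x] > h0)
    return occluders / (hi - lo + 1)

# The illustration's numbers: 5 white columns higher than the red brick's
# column, one blue column lower, nothing directly above.
# sky_occlusion([5, 5, 5, 2, 1, 5, 5], 3)  ->  5/7 ≈ 0.714
```

Because the heightmap only stores the topmost brick per column, the green bricks in the illustration never enter the calculation, and the radius cap excludes the purple one.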

Let there be light again

Now that we’ve occluded the skylight, caves have become pitch black. In our quest for realism we seem to have sacrificed gameplay. Luckily, there are still avenues to explore, and one of them is directly related to the caves. If you’ve played the game enough, you’ll have noticed that throughout the world there are lava bricks and other types of emissive bricks. You’ll also have noticed that although they glow, they don’t cast light on the environment! Most of these bricks are inside caves, so it made sense for us to build technology that would allow us to associate lights with bricks. There’s a big problem with this though: there can be thousands of light-emitting bricks.

Regular dynamic lights are relatively expensive: they do many calculations per light, and if lights overlap it gets worse. We need a cheaper way of processing a large number of lights. Other games have used 3D textures (volumes) to store light, and we thought this was well suited to our use case.

At chunk generation time, we identify the light-emitting bricks and create a virtual point light for each of them. At this point we have hundreds of virtual point lights, a number that we need to reduce. We know many of them are going to have roughly the same color, and they’re probably going to be clumped together and pointing in the same direction as well, so we can take advantage of that to “merge” small lights into bigger lights that have roughly the same cumulative intensity and radius as the individual lights would. In image (1) you can see glowing bricks that don’t light the environment. Image (2) shows the final result, and image (4) shows the merged set of lights. You can see that many bricks of the same color have collapsed into a single point light, yet the result is still very reasonable. I’ve taken the screenshot outside at nighttime, but try filling the caves with these!
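A minimal sketch of that merging step, assuming each virtual light is a (position, color, intensity) tuple: the same-color / within-merge-distance grouping rule and every name here are illustrative, not the shipped clustering.

```python
import math

def dist(a, b):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

def merge_lights(lights, merge_dist=4.0):
    """Greedily fold lights of the same color that sit close together
    into one light with the summed intensity."""
    merged = []
    for pos, color, intensity in lights:
        for m in merged:
            if m["color"] == color and dist(m["pos"], pos) <= merge_dist:
                total = m["intensity"] + intensity
                # Intensity-weighted centroid keeps the merged light centered
                # on the brightest part of the clump.
                m["pos"] = tuple((m["intensity"] * a + intensity * b) / total
                                 for a, b in zip(m["pos"], pos))
                m["intensity"] = total
                break
        else:
            merged.append({"pos": pos, "color": color, "intensity": intensity})
    return merged
```

Summing intensities into the weighted centroid is what preserves the clump's overall brightness even though dozens of bricks collapse into one point light.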

All this information goes into what we call an irradiance texture. All objects can then look up their lighting inside this texture. If you look at image (3), you can see a 3D grid encompassing the world; that’s the extent of what the 3D texture covers. Outside of it we won’t have any information. Below you can see a slice of the irradiance 3D texture for the scene above. You can see a semicircle of light, which represents how the light from that point light propagates through space.
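Baking the merged lights into the volume can be sketched as below. The grid is a single channel for brevity, and the cell size, linear falloff and names are all hypothetical; in the game the result lives in a 3D texture that shaders sample directly.

```python
import math

def bake_irradiance(lights, size, cell=1.0):
    """Splat point lights into a dense size x size x size grid.

    Each light is ((lx, ly, lz), intensity, radius); cells inside a
    light's radius receive a linearly attenuated contribution.
    """
    grid = [[[0.0] * size for _ in range(size)] for _ in range(size)]
    for (lx, ly, lz), intensity, radius in lights:
        for z in range(size):
            for y in range(size):
                for x in range(size):
                    d = math.sqrt((x * cell - lx) ** 2 +
                                  (y * cell - ly) ** 2 +
                                  (z * cell - lz) ** 2)
                    if d < radius:
                        # Simple linear falloff toward the light's radius.
                        grid[z][y][x] += intensity * (1.0 - d / radius)
    return grid
```

A slice through such a grid at a light's height is exactly the circular pool of light visible in the texture slice above: full intensity at the light's cell, fading to zero at its radius.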

Wrapping Up

I hope you liked this diary and now understand a bit better how we’ve built some of the technology in Worlds. Please enjoy LEGO Worlds!