Real-time graphics deploys a variety of approximations to deal with the computational expense of simulating indirect lighting, trading off between runtime performance and lighting fidelity. This is an area of active research, with new techniques appearing every year.

Ambient lighting

At the very simplest end of the range, you can use ambient lighting: a global, omnidirectional light source that applies to every object in the scene, without regard to actual light sources or local visibility. This is not at all accurate, but is extremely cheap, easy for an artist to tweak, and can look okay depending on the scene and the desired visual style.
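As a concrete (if toy) illustration, constant ambient boils down to adding one artist-tuned color to the direct lighting before applying the surface albedo. A minimal sketch in Python, with all names and the ambient color purely illustrative:

```python
# Constant ambient term: the same color is added everywhere, regardless of
# light sources or visibility. RGB values are plain (r, g, b) tuples.
def shade(albedo, direct_light, ambient=(0.1, 0.1, 0.12)):
    return tuple(a * (d + amb) for a, d, amb in zip(albedo, direct_light, ambient))
```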

Common extensions to basic ambient lighting include:

Make the ambient color vary directionally, e.g. using spherical harmonics (SH) or a small cubemap, and looking up the color in a shader based on each vertex's or pixel's normal vector. This allows some visual differentiation between surfaces of different orientations, even where no direct light reaches them.
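The SH lookup can be sketched like this (Python rather than shader code; first-order SH only, a single scalar channel where a real shader would store one coefficient set per color channel; names are illustrative):

```python
# Directional ambient from 4-coefficient (first-order) spherical harmonics.
# The constants are the standard real SH basis values for the l=0 and l=1 bands.
Y00 = 0.282095   # l=0 band (constant term)
Y1  = 0.488603   # l=1 band (linear in the normal)

def eval_sh_ambient(sh, normal):
    nx, ny, nz = normal
    return (sh[0] * Y00
            + sh[1] * Y1 * ny
            + sh[2] * Y1 * nz
            + sh[3] * Y1 * nx)
```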

Apply ambient occlusion (AO) techniques, including pre-computed vertex AO, AO texture maps, AO fields, and screen-space AO (SSAO). These all work by detecting areas, such as holes and crevices, that indirect light is less likely to bounce into, and darkening the ambient light there.
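The core idea behind SSAO specifically can be sketched on a plain depth buffer: samples around a pixel that are significantly closer to the camera suggest nearby occluding geometry. This is a toy kernel, not a production one; the radius, bias, and sampling pattern are illustrative, and the pixel is assumed to be at least `radius` pixels from the border:

```python
def ssao(depth, x, y, radius=1, bias=0.02):
    # depth[y][x] holds view depth; smaller = closer to the camera.
    center = depth[y][x]
    occluded, total = 0, 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dx == 0 and dy == 0:
                continue
            total += 1
            if depth[y + dy][x + dx] < center - bias:
                occluded += 1
    return 1.0 - occluded / total   # 1 = fully open, 0 = fully occluded
```

The result is simply multiplied into the ambient term.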

Add an environment cubemap to provide ambient specular reflection. A cubemap with a decent resolution (128² or 256² per face) can be quite convincing for specular on curved, shiny surfaces.
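The cubemap lookup itself just maps a direction vector to a face index and 2D coordinates; on the GPU the texture hardware does this for you, but a sketch makes the mechanics clear (Python, following the usual +X, -X, +Y, -Y, +Z, -Z face ordering):

```python
def cubemap_face_uv(d):
    # Pick the face by the direction's major axis, then project the other
    # two components onto that face and remap from [-1, 1] to [0, 1].
    x, y, z = d
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        face, ma, sc, tc = (0 if x > 0 else 1), ax, (-z if x > 0 else z), -y
    elif ay >= az:
        face, ma, sc, tc = (2 if y > 0 else 3), ay, x, (z if y > 0 else -z)
    else:
        face, ma, sc, tc = (4 if z > 0 else 5), az, (x if z > 0 else -x), -y
    return face, 0.5 * (sc / ma + 1.0), 0.5 * (tc / ma + 1.0)
```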

Baked indirect lighting

The next "level", so to speak, of techniques involves baking (pre-computing offline) some representation of the indirect lighting in a scene. The advantage of baking is that you can get quite high-quality results for little real-time computational expense, since all the hard work is done in the bake. The trade-offs are that the time needed for the bake process slows level designers' iteration; more memory and disk space are required to store the precomputed data; the ability to change the lighting in real-time is very limited; and the bake process can only use information from static level geometry, so indirect lighting effects from dynamic objects such as characters will be missed. Still, baked lighting is very widely used in AAA games today.

The bake step can use any desired rendering algorithm including path tracing, radiosity, or using the game engine itself to render out cubemaps (or hemicubes).

The results can be stored in textures (lightmaps) applied to static geometry in the level, and/or they can also be converted to SH and stored in volumetric data structures, such as irradiance volumes (volume textures where each texel stores an SH probe) or tetrahedral meshes. You can then use shaders to look up and interpolate colors from that data structure and apply them to your rendered geometry. The volumetric approach allows baked lighting to be applied to dynamic objects as well as static geometry.
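The volumetric lookup amounts to trilinearly blending the SH coefficients of the eight probes surrounding the shaded point, then evaluating the blended coefficients with the surface normal. A sketch, assuming a dense grid with one coefficient list per probe (grid layout and names are illustrative):

```python
import math

def sample_irradiance_volume(grid, p):
    # grid[z][y][x] -> list of SH coefficients; p is in grid units.
    x0, y0, z0 = (int(math.floor(c)) for c in p)
    fx, fy, fz = p[0] - x0, p[1] - y0, p[2] - z0
    n = len(grid[0][0][0])
    out = [0.0] * n
    # Accumulate the eight neighbouring probes with trilinear weights.
    for dz, wz in ((0, 1 - fz), (1, fz)):
        for dy, wy in ((0, 1 - fy), (1, fy)):
            for dx, wx in ((0, 1 - fx), (1, fx)):
                w = wx * wy * wz
                probe = grid[z0 + dz][y0 + dy][x0 + dx]
                for i in range(n):
                    out[i] += w * probe[i]
    return out
```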

The spatial resolution of the lightmaps etc. will be limited by memory and other practical constraints, so you might supplement the baked lighting with some AO techniques to add high-frequency detail that the baked lighting can't provide, and to respond to dynamic objects (such as darkening the indirect light under a moving character or vehicle).

There's also a technique called precomputed radiance transfer (PRT), which extends baking to handle more dynamic lighting conditions. In PRT, instead of baking the indirect lighting itself, you bake the transfer function from some source of light—usually the sky—to the resultant indirect lighting in the scene. The transfer function is represented as a matrix that transforms from source to destination SH coefficients at each bake sample point. This allows the lighting environment to be changed, and the indirect lighting in the scene will respond plausibly. Far Cry 3 and 4 used this technique to allow a continuous day-night cycle, with indirect lighting varying based on the sky colors at each time of day.
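At runtime, PRT reduces to a small matrix-vector multiply per baked sample point: the sky's current SH coefficients go in, the scene's indirect-lighting SH coefficients come out. A sketch, with the matrix layout purely illustrative:

```python
def apply_transfer(transfer, sky_sh):
    # transfer: baked matrix (rows = destination SH coefficients,
    # columns = source/sky SH coefficients) for one sample point.
    return [sum(row[j] * sky_sh[j] for j in range(len(sky_sh)))
            for row in transfer]
```

Because only the input vector changes per frame, a full day-night cycle costs no more than re-evaluating this multiply with the new sky coefficients.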

One other point about baking: it may be useful to have separate baked data for diffuse and specular indirect lighting. Cubemaps work much better than SH for specular (since cubemaps can have a lot more angular detail), but they also take up a lot more memory, so you can't afford to place them as densely as SH samples. Parallax correction can be used to somewhat make up for that, by heuristically warping the cubemap to make its reflections feel more grounded to the geometry around it.
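Parallax correction with a simple box proxy can be sketched as: intersect the reflection ray with an axis-aligned box roughly enclosing the surrounding geometry, then aim the cubemap lookup from the capture position toward the hit point instead of using the raw reflection vector. This assumes the shaded point lies inside the box; all names are illustrative:

```python
def parallax_corrected_dir(pos, refl, box_min, box_max, probe_pos):
    # Distance along `refl` to the box exit (pos assumed inside the box).
    t = float("inf")
    for i in range(3):
        if refl[i] > 0:
            t = min(t, (box_max[i] - pos[i]) / refl[i])
        elif refl[i] < 0:
            t = min(t, (box_min[i] - pos[i]) / refl[i])
    hit = [pos[i] + t * refl[i] for i in range(3)]
    # Lookup direction from the cubemap's capture position to the hit point.
    return [hit[i] - probe_pos[i] for i in range(3)]
```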

Fully real-time techniques

Finally, it's possible to compute fully dynamic indirect lighting on the GPU. It can respond in real-time to arbitrary changes of lighting or geometry. However, again there is a tradeoff between runtime performance, lighting fidelity, and scene size. Some of these techniques need a beefy GPU to work at all, and may only be feasible for limited scene sizes. They also typically support only a single bounce of indirect light.

A dynamic environment cubemap, where the faces of the cubemap are re-rendered each frame using six cameras clustered around a chosen point, can provide decent ambient reflections for a single object. This is often used for the player car in racing games, for instance.

Screen-space global illumination is an extension of SSAO: a post-processing pass gathers bounce lighting (rather than just occlusion) from nearby pixels on the screen.

Screen-space raytraced reflection works by ray-marching through the depth buffer in a post-pass. It can provide quite high-quality reflections as long as the reflected objects are on-screen.
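The core marching loop can be sketched in one screen dimension plus depth (a real implementation marches in full screen space with smarter, often hierarchical, stepping; everything here is illustrative):

```python
def ssr_march(depth, origin, direction, steps=64, step_len=1.0):
    # depth: per-column depth buffer; origin/direction: (x, z) in the same units.
    x, z = origin
    dx, dz = direction
    for _ in range(steps):
        x += dx * step_len
        z += dz * step_len
        xi = int(x)
        if not (0 <= xi < len(depth)):
            return None           # ray left the screen: fall back to a cubemap
        if z >= depth[xi]:
            return xi             # ray passed behind the visible surface: hit
    return None
```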

Instant radiosity works by tracing rays into the scene using the CPU, and placing a point light at each ray hit point, which approximately represents the outgoing reflected light in all directions from that ray. These many lights, known as virtual point lights (VPLs), are then rendered by the GPU in the usual way.
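Once the VPLs are placed, the gather step is just summing point lights. A sketch with a simple inverse-square falloff and no visibility test (names illustrative; a real renderer would also weight by the surface BRDF and clamp the falloff to avoid VPL "fireflies"):

```python
def gather_vpls(point, vpls):
    # vpls: list of (position, intensity) pairs from the ray-tracing pass.
    total = 0.0
    for vpl_pos, vpl_intensity in vpls:
        d2 = sum((p - q) ** 2 for p, q in zip(point, vpl_pos))
        total += vpl_intensity / max(d2, 1e-4)   # clamp avoids singularities
    return total
```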

Reflective shadow maps (RSMs) are similar to instant radiosity, but the VPLs are generated by rendering the scene from the light's point of view (like a shadow map) and placing a VPL at each pixel of this map.

Light propagation volumes consist of 3D grids of SH probes placed throughout the scene. RSMs are rendered and used to "inject" bounce light into the SH probes nearest the reflecting surfaces. Then a flood-fill-like process propagates light from each SH probe to surrounding points in the grid, and the result of this is used to apply lighting to the scene. This technique has been extended to volumetric light scattering as well.
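The propagation step can be sketched on a scalar grid; a real LPV stores low-order SH per cell and weights the transfer by direction, so the uniform `spread` factor and the whole setup here are illustrative:

```python
def propagate(grid, iterations=1, spread=1/6):
    # grid[z][y][x] -> injected light; each pass spreads a fraction of
    # every cell's light into its six face neighbours.
    nz, ny, nx = len(grid), len(grid[0]), len(grid[0][0])
    for _ in range(iterations):
        nxt = [[[grid[z][y][x] for x in range(nx)] for y in range(ny)] for z in range(nz)]
        for z in range(nz):
            for y in range(ny):
                for x in range(nx):
                    for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
                        zz, yy, xx = z + dz, y + dy, x + dx
                        if 0 <= zz < nz and 0 <= yy < ny and 0 <= xx < nx:
                            nxt[zz][yy][xx] += spread * grid[z][y][x]
        grid = nxt
    return grid
```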

Voxel cone tracing works by voxelizing the scene geometry (likely using varying voxel resolutions, finer near the camera and coarser far away), then injecting light from RSMs into the voxel grid. When rendering the main scene, the pixel shader performs a "cone trace"—a ray-march with gradually increasing radius—through the voxel grid to gather incoming light for either diffuse or specular shading.
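A single cone through the voxel mip chain can be sketched like this, assuming a `sample(pos, level)` callback that returns pre-filtered (radiance, opacity) from the voxel grid; the step sizes, mip mapping, and all parameters are illustrative:

```python
import math

def cone_trace(sample, origin, direction, half_angle, max_dist):
    # March front-to-back, growing the cone radius (and thus the voxel mip
    # level and step size) with distance, until fully occluded or out of range.
    color, occlusion, t = 0.0, 0.0, 0.1
    while t < max_dist and occlusion < 1.0:
        radius = t * math.tan(half_angle)
        level = max(0.0, math.log2(max(radius, 1e-3)))    # coarser mips farther out
        pos = tuple(o + t * d for o, d in zip(origin, direction))
        radiance, opacity = sample(pos, level)
        color += (1.0 - occlusion) * opacity * radiance   # front-to-back blending
        occlusion += (1.0 - occlusion) * opacity
        t += max(radius, 0.1)                             # step grows with the radius
    return color, occlusion
```

Diffuse shading gathers several wide cones over the hemisphere; specular uses one narrow cone along the reflection vector.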

Most of these techniques are not widely used in games today due to problems scaling up to realistic scene sizes, or other limitations. The exception is screen-space reflection, which is very popular (though it's usually used with cubemaps as a fallback, for regions where the screen-space part fails).

As you can see, real-time indirect lighting is a huge topic and even this (rather long!) answer can only provide a 10,000-foot overview and context for further reading. Which approach is best for you will depend greatly on the details of your particular application, what constraints you're willing to accept, and how much time you have to put into it.