To explain the difference between pre-rendered and real-time CGI, let me quickly take you back to the early days of hand-drawn animation.

Many of the cherished classics from this age, like Pinocchio (1940), were created by having an artist draw a series of pictures, which were then played back in quick succession to create the illusion of continuous motion. Because the time it takes to draw each frame (let’s say 5 minutes) is far longer than the time each frame is displayed (1/30th of a second, in the case of 30 FPS playback), interactivity is impractical: you might ask for a character to make a movement in the scene that requires 1,000 frames, and it would take more than three days just to draw the frames for that movement. However, if the artist can draw simply enough (like a stick-figure flip-book animation), they may be able to produce your desired result in a matter of minutes.
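The arithmetic behind that three-day figure is easy to check. A quick sketch, using the article’s own illustrative numbers (5 minutes per frame, 1,000 frames, 30 FPS playback):

```python
# Illustrative numbers from the text, not measurements.
minutes_per_frame = 5
frames = 1000
fps = 30

drawing_minutes = frames * minutes_per_frame       # 5000 minutes of drawing
drawing_days = drawing_minutes / (60 * 24)         # convert to days

playback_seconds = frames / fps                    # time on screen when played back

print(f"{drawing_days:.2f} days to draw, {playback_seconds:.1f} s to watch")
# → 3.47 days to draw, 33.3 s to watch
```

Over three days of labor for half a minute of footage is why no one waits for hand-drawn (or slowly rendered) frames interactively.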

This is very much like the difference between pre-rendered and real-time CGI. Imagine now that instead of an artist drawing each frame, a computer is doing the drawing. A very detailed frame may take the computer five minutes to draw, depending upon how powerful it is. At a drawing (or ‘rendering’) rate of one frame per five minutes, interactivity is still impractical because any change you ask for takes several minutes to appear. It makes more sense to plan what you want the computer to draw ahead of time, gather up all of the frames once they have been drawn, and then play them back-to-back to create a sense of continuous motion. This is pre-rendered CGI: the frames are drawn (‘rendered’) before (‘pre’) viewing.
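In code terms, the pre-rendered pipeline is two strictly separate phases: render everything first, then play it back. A minimal sketch (the `render_frame` stand-in for a slow, high-quality renderer is hypothetical):

```python
def render_frame(n):
    """Stand-in for a slow, high-quality renderer (minutes per frame)."""
    return f"frame {n}"  # a real renderer would return pixel data

# Phase 1: render every frame ahead of time (hours or days of work).
movie = [render_frame(n) for n in range(1000)]

# Phase 2: play the finished frames back-to-back at 30 FPS.
# No input is consulted here: the action was fixed before rendering began.
for frame in movie:
    pass  # stand-in for displaying the frame for 1/30th of a second
```

The key point is that phase 2 never calls the renderer; by playback time, the action is already locked in.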

For this reason, pre-rendered CGI is the preferred method for creating very high fidelity imagery, like that seen from major animation studios like DreamWorks and Pixar. The graphics in films from these studios far surpass what your computer or Xbox can render, because the studios can afford to spend many seconds, minutes, or even hours on each frame, so long as the frames will all be viewed strung together at some point in the future. But that means no interactivity, because the viewer is not there during the rendering process to dictate the action (and even if they were, it would take too long to see the results).

See Also: DreamWorks Reveals Glimpse of 360 Degree ‘Super Cinema’ Rendering for VR Films

But if you make the frames simple enough (like the stick-figure flip-book animation) such that the computer can draw many per second, you reach a level of practical interactivity where you can ask the computer for a change and see the resulting frames nearly instantly. This is real-time CGI: the frames are rendered as fast as they are being displayed (‘real-time’). Real-time CGI opens the door to interactivity, like being able to pick up a virtual object or press a button to open a hatch; depending upon the user’s input, new frames can be drawn quickly enough to show the result of an action instantly, rather than minutes, hours, or days later.
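This render-as-fast-as-displayed idea is the heart of every game loop: each frame, the program polls input, draws, and keeps within a fixed time budget. A minimal sketch, assuming a 30 FPS target (the `handle_input` and `render` callbacks are hypothetical stand-ins, not any particular engine’s API):

```python
import time

TARGET_FPS = 30
FRAME_BUDGET = 1.0 / TARGET_FPS  # ~33 ms to react to input AND draw a frame

def run_frames(num_frames, handle_input, render):
    """Hypothetical real-time loop: each frame, poll input, draw, then
    sleep off any leftover budget so playback holds TARGET_FPS."""
    for _ in range(num_frames):
        start = time.perf_counter()
        action = handle_input()   # e.g. 'press a button to open a hatch'
        render(action)            # must finish well inside FRAME_BUDGET
        elapsed = time.perf_counter() - start
        if elapsed < FRAME_BUDGET:
            time.sleep(FRAME_BUDGET - elapsed)

# Toy usage: input and rendering are stand-ins that cost almost nothing.
run_frames(3, handle_input=lambda: "open_hatch", render=lambda a: None)
```

Unlike the pre-rendered pipeline, input is consulted every frame, so the next 1/30th of a second of imagery can reflect what the user just did.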

For this reason, real-time CGI is the preferred method for creating games. Interactivity is a crucial element of gaming, and so it makes sense to bring the graphics down to a point where the computer can render frames in real-time, letting the user press a button to dictate the action in the scene.

So simply put, pre-rendered CGI excels in visuals while real-time CGI excels in interactivity. The two are not fundamentally different; the distinction is simply how long it takes to draw each frame relative to how long it is displayed.