Fans have been floored by the beautiful restoration of Star Trek: The Next Generation – and the fantastic clarity the new 2K scans of the original 35mm photography reveal. The remastering effort has also brought to light a wealth of information about how TNG was originally filmed. A number of readers have asked about certain shots where the picture quality seems to drop on rare occasions, typically moving shots that contain visual effects.

Preview images of “The Child” generated some concern from fans relating to the softer image quality seen in pre-release screencaps.

The answer to this is – not surprisingly – the particular way some of these visual effects shots were originally photographed and completed in post-production. Putting visual effects into a live-action plate is hard enough when it’s stationary, or “locked off,” but when the director needs the camera to move in order to reveal something to the audience at a particular moment – or to follow a character’s movement on screen – the level of difficulty increases dramatically.

Visual Effects Supervisor Rob Legato was presented with exactly this kind of difficult live-action shot early on in Season One with the episode “Code of Honor”. In the episode, Tasha Yar had to be seen entering the holodeck, crossing the room from right to left and grabbing an aikido uniform that materializes on the wall in front of her. While it could have been filmed in a single, static wide master, the details would have been hard to see, especially on the old cathode ray tube television sets viewers were watching the show on in 1987.

Back in the late 1980s, there was no tracking software that could automatically analyze a series of moving images and derive x, y, or z-depth spatial information from them in order to composite an element or place a 3D object in a scene. Instead, filmmakers had to use expensive, large, and quite loud motion control camera systems, which could perfectly repeat the same camera movement again and again via computer for each element needed in the final composite. This is, in fact, exactly how the miniatures for TNG were filmed: on a motion control stage with motion control camera rigs.

For whatever reason – most likely lack of money and/or time – the “Code of Honor” shot could not be easily filmed this way. Legato’s solution was simple and sort of genius: film the shot in widescreen!

The widescreen footage was adjusted in post-production to fit standard television screens.

In 1987, widescreen movies filmed with anamorphic “scope” lenses in the 2.39:1 aspect ratio (most of the Star Trek films have been shot in this format) were customarily transferred to video in a process called Pan & Scan. Because televisions back then were made in an aspect ratio of 1.33:1, the only way to fill the screen without distorting the image was to reposition the frame shot by shot and to “pan and scan” from side to side if two or more important objects or actors happened to be on either end of the same widescreen composition.
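The geometry of Pan & Scan also explains why the trick works as a stand-in for a camera move. A quick back-of-the-envelope calculation (ours, not from the production) shows how much of a 2.39:1 frame a television-shaped window actually covers, and how much horizontal room is left over for the simulated pan:

```python
# Back-of-the-envelope pan & scan geometry: how much of a 2.39:1
# "scope" frame survives inside a 1.33:1 television crop.

SCOPE = 2.39   # widescreen aspect ratio (width / height)
TV = 4 / 3     # 1.33:1 standard-definition television

# A pan & scan transfer keeps the full image height and slides a
# TV-shaped window horizontally across the widescreen frame.
width_used = TV / SCOPE   # fraction of the scope width inside the window
slack = 1 - width_used    # horizontal travel available for "panning"

print(f"fraction of widescreen width shown on TV: {width_used:.0%}")
print(f"horizontal travel available for the pan:  {slack:.0%}")
```

Only about 56% of the widescreen frame fits on the TV screen at any moment, which leaves roughly 44% of the frame's width for the window to travel across – plenty of room to simulate a right-to-left camera move without the camera ever having moved.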

By filming with this process in mind, Legato knew he could capture the beginning and ending points of the intended camera move in two static, “locked off” plates — one with the actors and the aikido uniform on the wall and one clean plate without actors or uniform. And because the camera would never actually move, the split-screen and animated matte/dissolve that would reveal the uniform on the wall would be much easier to achieve. And so it was.

The cloudy dream sequences in “The Battle” are made even hazier.

The success of this shot led to further opportunities for Legato to use the process on other Season One episodes like “The Last Outpost” and “The Battle”, as well as the Season Two episodes “The Child” and “Samaritan Snare” — all using various types of effects and levels of complexity. The process would continue to be used for several more seasons.

The only drawback is that anamorphic “scope” lenses are not ideal optics for shooting visual effects. Their greater number of glass elements makes them “slow” (needing more light); they tend to flare; they produce cylindrical distortion at wide focal lengths; and out-of-focus objects blur more vertically than horizontally, thanks to the cylindrically-shaped lens element that squeezes the image onto the film. This is why spherical prime lenses are preferred for VFX work.

This sequence from “Samaritan Snare” shows just the slightest amount of “fish-eye” lens curvature near the edges of the frame.

These issues, combined with the need to dramatically zoom into the widescreen image in order to perform the TV Pan & Scan within the 2.39:1 frame (the anamorphic image must first be either compressed vertically or stretched horizontally by a factor of 2 to restore its proper proportions), unfortunately make the footage appear soft and grainy. In addition, the simulated camera move adds a good deal of motion blur, which makes these types of shots even softer.
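To put a rough number on that softness, here is an illustrative sketch (the scan width is our assumption for the sake of the example, not a production or remastering spec). Because the Pan & Scan window covers only part of the scope frame, only that share of the scanned detail contributes to the final shot, and everything inside the window is then enlarged to fill the screen:

```python
# Illustrative numbers only: how much of a full-width scan ends up
# inside the pan & scan window. We assume the 2.39:1 frame is scanned
# 2048 pixels wide (a nominal "2K" figure chosen for illustration).

SCAN_WIDTH = 2048   # assumed horizontal resolution of the full scope frame
SCOPE = 2.39        # widescreen "scope" aspect ratio
TV = 4 / 3          # 1.33:1 television aspect ratio

# The TV-shaped window covers this fraction of the scope frame's width,
# so only that share of the scanned pixels lands in the final shot:
window_pixels = SCAN_WIDTH * (TV / SCOPE)

# The window is then blown up to fill the screen, magnifying film grain
# by the reciprocal of the fraction used:
blowup = SCOPE / TV

print(f"pixels inside the pan & scan window: {window_pixels:.0f}")
print(f"extra enlargement vs. a full-frame shot: {blowup:.2f}x")
```

Under these assumed numbers, the shot effectively uses a little over half of the negative’s width and then enlarges it nearly 1.8×, which is why the grain and softness become noticeable on these shots – especially once the remastered scans make everything else in the episode so sharp.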

Even so, it was a fairly clever solution at the time. Rob Legato would later switch to motion control rigs when the camera moves were more elaborate or the money allowed it, such as in the Deep Space Nine pilot “Emissary”, but he would again employ this widescreen process for his DS9 episode “If Wishes Were Horses”.

Two Daxes, one frame: a return to pan-and-scan in “If Wishes Were Horses”.

Stay tuned to TrekCore as we bring you more in our continuing series looking back at the production of visual effects on The Next Generation. What do you think about the use of widescreen production footage on TNG, and its effect on Blu-ray picture quality? Let us know in the comments below!