For any visual effects artist, recreating a real-life event is always a particularly challenging assignment. But when that event is as well documented and as deeply emotional as the 9/11 terrorist attacks, it can be a daunting prospect. When dealing with such an iconic moment in history, you had better get it just right. In World Trade Center (released by Paramount on Aug. 9), director Oliver Stone recreates the 9/11 tragedy from the perspective of two Port Authority Police Department officers who were trapped beneath the collapsed towers. Since the script is based on a true story, authenticity and accuracy were of paramount importance for the filmmakers. To recreate what happened outside and inside the World Trade Center, overall visual effects supervisor John Scheele split the effects shots among four vendors: Double Negative, Giant Killer Robots, Animal Logic and CIS Hollywood.

The visual effects encompassed three different categories of shots. The first involved recreating the whole Lower Manhattan area, including the Twin Towers, in its pre-collapse state, and then producing the raging fires on the Towers. In the second, visual effects artists had to reproduce the collapse of the World Trade Center as seen from the interior of the buildings. The final category entailed recreating the post-collapse environment. Since the collapse itself had been seen hundreds of times on television, Stone felt that it was not necessary, and probably not appropriate, to recreate it digitally. Instead, he preferred to focus on the point of view of a few individuals and to show the events unfolding from their intimate, but truncated, perspective.

Half A Million Reference Stills

Lead vendor Double Negative had the unique task of creating effects for scenes before and after the collapses, eventually producing 85 shots. Visual effects supervisor Michael Ellis and vfx producer Andy Taylor oversaw the project, with CG supervisors Peter Bebb and Ryan Cook handling the tricky 3D builds and smoke research and development. Double Negative started working with the production in May 2005 and provided a previsualization that was used for the New York shoot in October. A team was sent to New York to gather photographic material for reference. "The crew was never allowed to get close to the Ground Zero site to shoot plates," Ellis recalls. "All the street scenes were actually shot in downtown Los Angeles. Greenscreens were used to block the perspectives, but we still had a lot of rotoscoping to do to be able to replace the environment with our CG Lower Manhattan. In fact, we not only had to recreate the Towers and the whole WTC complex, but also all the surrounding streets, which encompassed several blocks."

"Since we couldn't shoot any plates, we decided to use the system that we developed last year for Batman Begins, in which Gotham City was basically built out from thousands of architectural still photographs. For WTC, we went out and shot over half a million stills of the area surrounding the Ground Zero site. We started by taking overall panoramic views of each street, and then moved on to photograph each building, using a 50-foot scissor lift to get a clear point of view. We shot 8.0-megapixel stills with bracketed exposures that captured 3-foot-square sections of a façade. Back at Double Negative, we used our proprietary software STIG to tile all the images together and remove the distortion. That allowed us to reconstruct each façade as a high dynamic range image that we could then project onto corresponding 3D geometry built in Maya. In terms of modeling, we really focused our efforts on six or seven main buildings that were going to be seen up close. All the CG elements were then rendered in RenderMan using Double Negative's in-house render management software dnRex."
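The bracketed-exposure step can be sketched in miniature. The snippet below is a hypothetical simplification, not STIG itself: it merges several exposures of one façade tile into a single high dynamic range estimate by dividing each exposure by its shutter time and blending the results with weights that favor well-exposed pixels, in the spirit of the classic Debevec-Malik method.

```python
import numpy as np

def merge_bracketed(exposures, times):
    """Merge bracketed LDR exposures (floats in 0..1) into one HDR estimate.

    Each exposure over shutter time t approximates radiance as img / t;
    a hat-shaped weight peaking at mid-grey downweights clipped or
    underexposed pixels before the estimates are averaged.
    """
    num = np.zeros_like(exposures[0], dtype=np.float64)
    den = np.zeros_like(exposures[0], dtype=np.float64)
    for img, t in zip(exposures, times):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # weight peaks at mid-grey
        num += w * (img / t)
        den += w
    return num / np.maximum(den, 1e-8)

# Three synthetic brackets of the same "facade" pixels at different shutters
truth = np.array([0.05, 0.4, 3.0])          # ground-truth scene radiance
times = [0.25, 1.0, 4.0]
brackets = [np.clip(truth * t, 0.0, 1.0) for t in times]
hdr = merge_bracketed(brackets, times)      # recovers truth, clipping and all
```

Because the dark pixel is only well exposed in the long bracket and the bright one only in the short bracket, the weighting recovers the full radiance range that no single exposure holds.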

The Twin Towers Challenge

The challenge was completely different for the Twin Towers as, for obvious reasons, no texture maps could be gathered. The structures had to be entirely recreated with geometry and shaders. The CG team faced an unexpected problem with the look of the buildings. "The towers were basically two unexciting big grey blocks," Ellis notes. "It turned out to be quite difficult to make them look real! CG artist and shader developer Kath Roberts spent a lot of time adding in extra detail and building up the specific way the panels reflected light. We also created building interiors that were put behind every window. They were only wide-angle photographs of some New York interiors that we had shot, but they added another layer of reality to the buildings. Also, the linear design of the façades created moiré patterns at the rendering stage. These linear features had a tendency to jump as they crossed from one pixel to the next, which resulted in a bizarre strobe effect that was very hard to get rid of. Eventually, we had to render the towers at 4K resolution, and even at 8K in some cases, to reduce the problem. For the same reason, we also had to set the shaders' sampling rates at very high levels. It took a lot of time to render those towers!"
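The strobing Ellis describes is classic aliasing: a pattern finer than the pixel grid, point-sampled once per pixel, shimmers as it moves. A small hypothetical numpy sketch (not Double Negative's renderer) shows why brute-force supersampling, the one-dimensional analogue of rendering at 4K or 8K and filtering down, tames the frame-to-frame flicker:

```python
import numpy as np

def render_stripes(phase, width=64, samples_per_pixel=1, stripe_freq=37.0):
    """Point-sample a high-frequency stripe pattern into `width` pixels.

    With one sample per pixel the stripes alias badly and 'jump' as the
    phase shifts frame to frame. Averaging many jittered sub-samples per
    pixel converges toward the stable filtered value.
    """
    rng = np.random.default_rng(0)          # fixed jitter for repeatability
    xs = (np.arange(width)[:, None] +
          rng.random((width, samples_per_pixel))) / width
    signal = 0.5 + 0.5 * np.sin(2 * np.pi * stripe_freq * xs + phase)
    return signal.mean(axis=1)

# Frame-to-frame flicker: mean change between two slightly shifted frames
lo = np.abs(render_stripes(0.0, samples_per_pixel=1) -
            render_stripes(0.3, samples_per_pixel=1)).mean()
hi = np.abs(render_stripes(0.0, samples_per_pixel=64) -
            render_stripes(0.3, samples_per_pixel=64)).mean()
# hi comes out well below lo: supersampling attenuates the strobing
```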

Ellis' team tackled another tricky challenge with the recreation of the billowing smoke coming out of the burning towers. The crew knew that the shots had to look exactly like the news footage that the whole world had seen. When the first smoke simulations required a two-day run to produce realistic results, Cook decided to look at other options. One of them was to start building a smoke library very early on. "We knew exactly what the smoke had to look like. It meant that we could start running smoke simulations right away, and not wait for any approval or artwork. By the time we got to actually create the shots, we had a nice library of simulations that we could use at will. We found that the best results were obtained when we combined many different smoke simulations, and rendered them together. We actually developed a system in which we took these simulations and created a wireframe of them. It allowed us to easily reposition each simulation without Maya having to deal with the whole simulation."
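The library-plus-wireframe idea is essentially a proxy pattern: the scene carries only a transform and a cheap stand-in, while the voxel-heavy cache stays on disk until render time. A hypothetical Python sketch of that bookkeeping (all names and file paths invented, not Double Negative's pipeline):

```python
from dataclasses import dataclass

@dataclass
class SimCache:
    """A cached smoke simulation on disk (path and bounds are invented)."""
    path: str                       # e.g. 'smoke_lib/plume_00.vdb'
    bbox: tuple                     # (width, height, depth) of the volume

@dataclass
class ProxyInstance:
    """A lightweight placement of a cached sim: the scene only carries
    this transform plus a wireframe box, never the voxel data."""
    cache: SimCache
    translate: tuple = (0.0, 0.0, 0.0)
    scale: float = 1.0

def build_shot(instances):
    """At render time, hand each (cache path, transform) pair to the
    volume renderer; until then the heavy data stays on disk."""
    return [(i.cache.path, i.translate, i.scale) for i in instances]

lib = [SimCache('smoke_lib/plume_%02d.vdb' % n, (10, 40, 10)) for n in range(3)]
shot = build_shot([ProxyInstance(lib[0], (0, 80, 0)),
                   ProxyInstance(lib[2], (5, 95, 2), scale=1.5)])
```

Artists can reposition, scale and duplicate proxies freely, because editing a transform never touches the simulation data itself.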

The simulations were rendered in DNB, a proprietary voxel renderer that allowed the team to work in an economical way. Had they been rendered in the regular pipeline, the simulations would have clogged up the company's entire render farm.
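A voxel renderer in this vein marches rays through the density grid and composites transmittance directly, never converting the smoke to geometry. The toy ray-march below is a generic sketch of the core loop, not DNB:

```python
import numpy as np

def march_density(density, step=1.0, extinction=0.08):
    """Accumulate opacity front-to-back along one axis of a voxel grid.

    Each ray steps through the grid and multiplies in Beer-Lambert
    transmittance, so memory and render cost scale with the grid rather
    than with tessellated geometry.
    """
    transmittance = np.ones(density.shape[1:])
    for slab in density:                       # march along axis 0
        transmittance *= np.exp(-extinction * slab * step)
    return 1.0 - transmittance                 # alpha seen by the camera

grid = np.zeros((16, 4, 4))
grid[4:12, 1:3, 1:3] = 1.0                     # a dense puff in the middle
alpha = march_density(grid)                    # opaque core, clear edges
```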

Of Rubble and Debris

Arguably the most demanding shot was a huge pull-back that starts in the midst of the Ground Zero rubble and rises through a fully CG post-collapse environment, past the remains of the towers and up into the sky. The shot was broken up into three different parts (an underground set, a Ground Zero set and a CG environment) that were combined in one seamless camera move. "The final aerial view of the environment was exceptionally complex," Ellis says. "We had thousands of pieces of debris on the ground, city blocks all around, and a huge amount of dust and particles in the air. Using photographic reference, we managed to build up the correct topography of Ground Zero, as well as the key structures. We first positioned large pieces of geometry, which we then dressed and covered with hundreds of thousands of individual smaller chunks of debris. It gave us the overall impression of mayhem without having to build every single piece of debris. Since the shot comprised some 150 layers, we had to get the camera move signed off pretty early, as each modification meant a two-week turnaround!"

To create the thousands of pieces of debris and paper floating all over the area, Double Negative developed Dynamite, a rigid body simulator that plugs into Maya. Each type of debris was treated and rendered as a separate element, and then combined with the other elements in Shake. Dynamite was also used to add floating debris to many live-action street shots.
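At its core, a rigid body plug-in of this kind runs an integrate-and-collide loop over thousands of chunks at once. The sketch below is a deliberately minimal, hypothetical stand-in for a tool like Dynamite: explicit Euler gravity, a ground plane, and a damped bounce, with positions kept per frame so each debris element can be rendered and composited separately.

```python
import numpy as np

def simulate_debris(n=1000, frames=48, dt=1/24, restitution=0.3, seed=1):
    """Drop n debris chunks under gravity and bounce them off y = 0.

    Returns a list of per-frame position arrays, one (n, 3) array per
    frame, ready to drive instanced render elements.
    """
    rng = np.random.default_rng(seed)
    pos = rng.uniform([-50, 20, -50], [50, 120, 50], (n, 3))
    vel = rng.normal(0.0, 2.0, (n, 3))          # small random toss
    frames_out = []
    for _ in range(frames):
        vel[:, 1] -= 9.81 * dt                  # gravity
        pos += vel * dt                         # explicit Euler step
        below = pos[:, 1] < 0.0                 # ground-plane collision
        pos[below, 1] = 0.0
        vel[below, 1] *= -restitution           # damped bounce
        frames_out.append(pos.copy())
    return frames_out

anim = simulate_debris()                        # 48 frames of falling debris
```

Keeping the whole state in numpy arrays is what makes thousands of chunks cheap: every frame is a handful of vectorized operations rather than a per-object loop.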

The Interior Perspective

While Double Negative tackled the exterior shots, Giant Killer Robots and Animal Logic focused on interior shots of the towers collapsing. The concourse disintegration was produced by Animal Logic, with Andrew Brown acting as visual effects supervisor. The lobby and concourse environment models were based on a LIDAR scan of the set done by Robert Gardiner, a freelance scanning expert. The geometries were then textured with photographs of the real set. The crew also used locked camera projections for the shots where the floor splits open. All the elements were produced in Maya, rendered with RenderMan through the Mayaman interface, and then composited in Shake. Most 3D elements were pre-composited to a base beauty pass stage in Digital Fusion before being sent to 2D. All 3D elements were produced as floating point OpenEXR image sequences to give the Shake artists enough range to work with.

"To animate the disintegration, we used two main methods," 3D lead Will Reichelt explains. "The first was a sleight-of-hand trick where we would strategically break and animate the large areas of geometry (like a concrete pillar) by hand, and then fill in the gaps with animated particles instanced with smaller pieces of debris, to give the illusion of one complete structure breaking down. This worked well for areas that weren't seen particularly clearly. For areas of the set that needed to break properly, we used Rubble, a collection of proprietary tools designed to break up solid volume meshes into solid fragments with useful texture coordinates. The Rubble interface simplified the process of simulating fragments using a combination of Maya rigid bodies and the Novodex physics engine, which we could then bake into keyframes for later tweaking."
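Baking the simulation into keyframes is the key workflow point: simulate once, then hand animators editable per-frame transforms. A minimal, hypothetical sketch of that bake step (in production the stepping was done by Maya rigid bodies or the physics engine; here `sim_step` is any callable):

```python
def bake_fragments(sim_step, fragments, frames):
    """Advance each fragment's state once per frame and record it.

    `fragments` maps a fragment id to its initial state; the returned
    dict maps each id to a list of per-frame states that an animator
    can tweak directly, without ever re-running the simulation.
    """
    baked = {f_id: [] for f_id in fragments}
    state = dict(fragments)
    for _ in range(frames):
        state = {f_id: sim_step(s) for f_id, s in state.items()}
        for f_id, s in state.items():
            baked[f_id].append(s)               # one keyframe per frame
    return baked

# Toy step: each frame a fragment falls 0.5 units and spins 3 degrees
step = lambda s: (s[0] - 0.5, s[1] + 3.0)
keys = bake_fragments(step, {'pillar_a': (10.0, 0.0)}, frames=24)
```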

The large amounts of geometry making up the debris quickly made the scenes quite large. So, before any image could be rendered, the more complex meshes were stored as delayed read archives. The photoreal look of the concourse was achieved by using an indirect illumination pass in conjunction with the standard diffuse, specular, reflection, ambient and incandescence layers.
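Delayed read archives are a lazy-loading idiom: the scene holds only a file path and bounding box, and the renderer pulls the heavy geometry in on first demand, so scene files stay small no matter how much debris they reference. A hypothetical Python analogue of the idea (the path is invented):

```python
class DelayedArchive:
    """Stand-in for a RenderMan-style delayed read archive: the scene
    carries only the path and bounds; geometry loads on first use."""

    def __init__(self, path, bbox):
        self.path, self.bbox = path, bbox
        self._mesh = None
        self.loads = 0              # track how often the disk is hit

    def mesh(self):
        if self._mesh is None:      # first demand: read the heavy archive
            self.loads += 1
            self._mesh = f"<geometry from {self.path}>"  # placeholder read
        return self._mesh

arch = DelayedArchive('debris/chunk_0419.rib', bbox=((0, 0, 0), (1, 1, 1)))
scene = [arch]                      # building the scene costs nothing
arch.mesh(); arch.mesh()            # renderer asks twice, disk read once
```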

"Many additional live-action elements, including fire, dust, smoke and ash, were shot specifically to enhance the digital effects or as visual reference for what would finally become CG," 2D lead Krista Jordan says. "The type of live-action element dictated whether it was shot on blue, black or green. In the end, the larger shots had well over 200 layers each. These layers were constantly changing and being updated as the shots developed."

To develop the ash cloud, the team employed a fusion of volumetric and geometric elements. "We used a combination of Maya fluids, particle simulations and custom tools designed to control localized areas of the fluid and enhance the interaction between the fluid and particle simulations. Our proprietary volumetric rendering system Steam, originally developed for Stealth, was extended and used to provide procedural detail and lighting. This was supplemented with instanced shrapnel and debris particle simulations to provide internal hard detail, boiling around the leading edge of the cloud."

For their part, Giant Killer Robots and visual effects supervisor Richard McBride combined multiple 2D and 3D elements to create the scene in which firemen run away from the blast wave of dust and debris from the collapsing Tower Two.

A Very Special Project

When one works on a project like WTC, the feeling is clearly different from creating effects shots for regular superhero fare. "There was indeed a certain degree of emotion, and the desire to get everything absolutely just right. We didn't want to take any liberty with the truth of what was actually happening," Double Negative's Ellis concludes. "The whole production team was very sensitive to that. With WTC, we were basically asked to recreate a key moment in history. That made it all very special to us."

Alain Bielik is the founder and editor of renowned effects magazine S.F.X, published in France since 1991. He also contributes to various French publications and occasionally to Cinéfex. Last year, he organized a major special effects exhibition at the Musée International de la Miniature in Lyon, France.