Once I have selected a dataset to work with, I download the highest-resolution DTM and orthoimagery available. The orthoimagery is formatted as a JPEG2000 file with project-specific extensions, which means it won’t open in GDAL or ImageMagick, and only rarely in Photoshop without plugins. The easiest way to work with it is to download HiView, the HiRISE team’s custom JPEG2000 viewing tool, and use it to convert the image into a TIFF file. If the image resolution is too high, HiView won’t export at full size. That’s fine, since I need a resolution I can easily divide into chunks of less than 8192 pixels per side (a limitation of my graphics card when rendering) that won’t require too much memory to render.

I do the chunking of the HiView TIFF with ImageMagick, using its convert command with the -crop and +repage options. The DTM comes in an IMG format that GDAL can handle, so I run gdalinfo to get the data’s min/max, then convert it into an unsigned 16-bit GeoTIFF, scaling that min/max range to 0–65,535. I run the same ImageMagick convert command on the result to split it up, adding a -depth 16 option to preserve the 16-bit values. The 16-bit TIFF is important: if you use the default 8 bits, you’re binning the elevations into only 256 possible values, which produces a very terraced, fake-looking model. I’ve noticed this is a common mistake among people modeling terrain.
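Here’s a minimal sketch of that conversion and tiling step, using GDAL’s Python bindings in place of the gdalinfo/gdal_translate and ImageMagick commands I actually run (the options map one-to-one). The filenames and the 4096-pixel tile size are placeholders, and the same tiling loop applies to the orthoimagery TIFF:

```python
# Sketch of the DTM prep step; filenames and tile size are placeholders.
from osgeo import gdal

TILE = 4096  # must stay under the 8192 px per-side renderer limit

src = gdal.Open("DTEEC_example.IMG")
band = src.GetRasterBand(1)
# What gdalinfo reports; assumes the IMG's nodata value is tagged,
# otherwise mask it first or the min will be the nodata fill.
lo, hi = band.ComputeRasterMinMax(False)

# Scale the elevation range to the full unsigned 16-bit range,
# equivalent to: gdal_translate -ot UInt16 -scale lo hi 0 65535
gdal.Translate("dtm_16bit.tif", src,
               outputType=gdal.GDT_UInt16,
               scaleParams=[[lo, hi, 0, 65535]])

# Cut the scaled GeoTIFF into tiles; srcWin crops a window, playing
# the role of ImageMagick's convert -crop ... +repage -depth 16.
dtm = gdal.Open("dtm_16bit.tif")
for row, y in enumerate(range(0, dtm.RasterYSize, TILE)):
    for col, x in enumerate(range(0, dtm.RasterXSize, TILE)):
        w = min(TILE, dtm.RasterXSize - x)
        h = min(TILE, dtm.RasterYSize - y)
        gdal.Translate(f"dtm_tile_r{row}_c{col}.tif", dtm,
                       srcWin=[x, y, w, h])
```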

Also, the number of tiles made from the DTM data needs to match the number made from the orthoimagery; this is another reason I split the input data into chunks smaller than 8192 pixels. Again, that’s the limit of what my renderer and graphics card will allow. This would be quite limiting if it were not for UV tiling in Maya.
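A quick sanity check along these lines (assuming the placeholder tile names from the sketch above) catches mismatched grids or oversized tiles before I get into Maya:

```python
# Verify the ortho and DTM tile grids match and every tile fits the
# 8192 px per-side limit; tile names follow the earlier sketch.
import glob
from osgeo import gdal

ortho_tiles = sorted(glob.glob("ortho_tile_*.tif"))
dtm_tiles = sorted(glob.glob("dtm_tile_*.tif"))
assert len(ortho_tiles) == len(dtm_tiles), "tile counts differ"

for path in ortho_tiles + dtm_tiles:
    ds = gdal.Open(path)
    assert max(ds.RasterXSize, ds.RasterYSize) < 8192, path
```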

When I create the scene in Maya, all I’m really making is one long polygon plane mesh with dimensions that match those of the data. I generally use 128x128 polygon subdivisions. Then it’s just a matter of setting up a material for the mesh. I render with Mental Ray in Maya, so I use mia_material_x_passes with the Matte Finish preset. I bind the orthoimagery to the diffuse input and the DTM data to the displacement input, and set up the UV mapping. I run a test render to make sure everything works. If it does, I tweak the displacement scale, which I tend to exaggerate to bring out detail.
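In maya.cmds terms, the setup looks roughly like the sketch below for a single tile. Node names, file paths, plane dimensions, and the displacement scale are placeholders, the Matte Finish preset still has to be applied by hand, and the full scene repeats the file nodes with UV tiling as described above:

```python
# Rough maya.cmds version of the scene setup; names, paths, and the
# displacement scale are placeholders.
import maya.cmds as cmds

cmds.loadPlugin("Mayatomr", quiet=True)  # mental ray plug-in

# One long plane matching the data's aspect ratio, 128x128 subdivisions.
plane, _ = cmds.polyPlane(name="terrain", width=40, height=10,
                          subdivisionsX=128, subdivisionsY=128)

# mia_material_x_passes wired into a shading group for mental ray.
mia = cmds.shadingNode("mia_material_x_passes", asShader=True,
                       name="terrainMtl")
sg = cmds.sets(renderable=True, noSurfaceShader=True, empty=True,
               name="terrainSG")
cmds.connectAttr(mia + ".message", sg + ".miMaterialShader")

# Orthoimagery drives the diffuse color.
ortho = cmds.shadingNode("file", asTexture=True, name="orthoTex")
cmds.setAttr(ortho + ".fileTextureName", "ortho_tile_r0_c0.tif",
             type="string")
cmds.connectAttr(ortho + ".outColor", mia + ".diffuse")

# The 16-bit DTM drives displacement via a displacementShader node;
# alphaIsLuminance makes the grayscale values feed outAlpha.
dtm = cmds.shadingNode("file", asTexture=True, name="dtmTex")
cmds.setAttr(dtm + ".fileTextureName", "dtm_tile_r0_c0.tif", type="string")
cmds.setAttr(dtm + ".alphaIsLuminance", 1)
disp = cmds.shadingNode("displacementShader", asShader=True, name="dtmDisp")
cmds.connectAttr(dtm + ".outAlpha", disp + ".displacement")
cmds.connectAttr(disp + ".displacement", sg + ".displacementShader")
cmds.setAttr(disp + ".scale", 2.0)  # exaggerated to bring out detail

cmds.sets(plane, edit=True, forceElement=sg)
```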

Once a good camera angle is found, I’ll boost the mesh’s render subdivisions to 4, initial sample rate to 32, and extra sample rate to 24.