For the past 20 years, the Cassini spacecraft has been exploring the Saturn system, beaming incredible data and imagery back to Earth. To commemorate the end of its journey—its planned destruction in Saturn’s atmosphere—we created an interactive retrospective of Cassini’s years of active exploration. We wanted to visualize Cassini’s journey with 3D graphics as it made its many orbits around the Saturn system, with stops along the way corresponding to the locations where Cassini captured specific images. At each stop, we would show the image it captured and explain its place in Cassini’s contribution to our scientific understanding. This was to be a harmonious marriage of graphics, photography, and text across desktop and mobile browsers.

As the graphics editor on this project, I needed to figure out how to acquire and process the data that’s fundamental to an accurate 3D visualization of the objects relative to Saturn. The data includes:

The orbits of a selection of Saturn’s many moons

Cassini’s position over 13 years

The exact moments when Cassini was capturing our selected imagery

The positions of the objects in the imagery captured at those moments

That’s what this article is all about: how I got this NASA data and the steps I took to make it the basis of a web-based, WebGL-via-three.js 3D graphics presentation. I’ll be your captain on this journey. Step aboard the Starship Internet.

Modeling Moon Orbits

In my final 3D graphics, the orbits of Saturn’s moons are represented by what appear to be white-ish ellipses, but they’re not true curves. It’s actually more a game of connect the dots—they’re hundreds of data points in space connected by lines (meshlines). They look like a smooth curve because I downloaded enough data for them to appear smooth at the scales I’m visualizing. That’s also the case for Cassini’s hairball (more on that later).

Showing these orbits with actual data was not my first approach, though it should have been. I suspected I would have to force the user to download a lot of data to draw a dozen or so moon orbits, and that scared me off before I actually ran any visual tests. So I initially tried to model the ellipses mathematically as true curves using each moon’s orbital parameters, that is, all the variables that describe an ellipse in 3D space, feeding them into an ellipse generator. I looked to really helpful projects like sol-sys and jsOrrery (which powers this site) to learn how to do that. I’m no rocket scientist, so these projects were a huge step forward.
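To illustrate what such an ellipse generator does, here’s a hypothetical Python sketch (not the project’s actual code; the Titan-like numbers are approximate): sample points around the ellipse in its orbital plane, then rotate that plane into 3D using the argument of periapsis, inclination, and longitude of the ascending node.

```python
from math import cos, sin, pi, sqrt

def rot_z(p, a):
    """Rotate point p about the z-axis by angle a (radians)."""
    x, y, z = p
    return (x * cos(a) - y * sin(a), x * sin(a) + y * cos(a), z)

def rot_x(p, a):
    """Rotate point p about the x-axis by angle a (radians)."""
    x, y, z = p
    return (x, y * cos(a) - z * sin(a), y * sin(a) + z * cos(a))

def ellipse_points(a, e, inc, node, peri, n=150):
    """Sample n points along an orbit described by Keplerian elements:
    semi-major axis a, eccentricity e, inclination inc, longitude of
    ascending node `node`, argument of periapsis `peri` (radians)."""
    b = a * sqrt(1.0 - e * e)  # semi-minor axis
    pts = []
    for k in range(n):
        E = 2 * pi * k / n  # eccentric anomaly, evenly spaced
        # Point in the orbital (perifocal) plane, focus at the origin
        p = (a * (cos(E) - e), b * sin(E), 0.0)
        # Rotate the plane into 3D: periapsis, then tilt, then swivel the node
        p = rot_z(rot_x(rot_z(p, peri), inc), node)
        pts.append(p)
    return pts

# Roughly Titan-shaped: a ≈ 1,221,870 km, e ≈ 0.0288, slight inclination
orbit = ellipse_points(a=1_221_870, e=0.0288, inc=0.0061, node=0.0, peri=0.0)
```

Every one of those rotations is a place where a sign convention or reference-frame mix-up can silently misalign the result against real data.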

But when I combined the modeled ellipses with a data point in time from NASA (an X/Y/Z position in 3D space) representing a moon’s position when it was imaged, things didn’t line up. With this mismatch, the ellipses were useless for visualization. There are a lot of reasons why things could have been misaligned, and for many hours (days?) I tried to figure it out. Maybe the ellipses were rotated in 3D space, as if they started orbiting Saturn at a different point in eternity? Perhaps the modeling was fine and I was downloading data with the wrong parameters from NASA for Cassini’s orbit? Maybe the modeling had a larger margin of error than I had anticipated. I ultimately threw in the towel and moved on to Plan B, and to the real meat of this project: real data everywhere.

Connect the Dots

My fears about the amount of data needed to smoothly plot a moon’s orbit turned out to be unfounded. Moving on to some visual tests, it turned out I really only needed about 150 dots to plot a smoothly curved ellipse, given my visual scale. 150 points, multiplied by the 12 moons visualized, yielded a final binary-compressed JSON of 115KB. That’s like the size of an image. I’ll take it.

So what is this data? It’s data freely available from the NASA JPL HORIZONS Web Interface. Take it away JPL:

The JPL HORIZONS on-line solar system data and ephemeris computation service provides access to key solar system data and flexible production of highly accurate ephemerides for solar system objects (739282 asteroids, 3480 comets, 178 planetary satellites, 8 planets, the Sun, L1, L2, select spacecraft, and system barycenters). HORIZONS is provided by the Solar System Dynamics Group of the Jet Propulsion Laboratory.

Okay whoa. But there’s a few terms to define here: “ephemerides” is the plural form of “ephemeris”, and an ephemeris describes the positions and velocity of an orbiting body over time. A “barycenter” is the center of mass of a system of bodies, or, a point that other objects orbit in space.

To recap, the HORIZONS system contains precise data on the position of hundreds of thousands of objects in our solar system, positions in the past, present and future. If desired, please take a moment to release a bellowing nerd-scream of joy into the cosmos.

The HORIZONS system can be accessed using any of the following methods:

telnet

email

web-interface

Did they just say telnet? I remember the days of telnet. I used to watch over my older brother’s shoulder in the early ’90s as he connected to systems via telnet to do who-knows-what; it’s how I learned to use the command line. It’s all very romantic, but I did not go the telnet route for this project! I experimented my way through HORIZONS’ web interface (HTML forms and buttons), using the tutorial docs and full documentation to help get the data that was going to work for my 3D graphics.

Here are the settings I used to get the orbit data for Saturn’s hazy moon, Titan; I repeated this for each of the moons I wanted to show.

Ephemeris Type: VECTORS
Target Body: Titan (SVI) [606]
Coordinate Origin: Saturn (body center) [500@699]
Time Span: Start=JD2453005, Stop=JD2453020.945421, Intervals=150
Table Settings: quantities code=1; labels=NO; CSV format=YES; object page=NO
Display/Output: download/save (plain text file)

To break that down, the vectors setting is what gives X, Y, Z coordinates—positions that describe locations in our 3D reality, things that are placed left or right, up or down, near or far. But relative to what?

The vectors describe the location of the target body chosen, relative to the coordinate origin. In this case it’s Saturn’s body center, the geometric center of the planet.*

I now have my space coordinates of choice, but I also need to consider time. How frequently, and over what time span, do I want the data points? As mentioned earlier, after some experimentation, 150 locations per moon would form a nice curve. But for each moon, I want those 150 points to cover just a single orbit, one moon “year”, with no overlap. That’s a specific time span to specify for each moon, as all moons move at different speeds. Luckily HORIZONS is pretty flexible with date formats.

One of the orbital parameters of Saturn’s moons is “orbital period”, a number in Earth days for the time it takes a moon to complete an orbit. I took the orbital parameters, put them in a spreadsheet, and created two new columns, “date start” and “date end”. The HORIZONS system accepts Julian dates, the number of days since November 24, 4714 BC. It’s kind of like Unix time but more astro. I set an equal start date for all the orbits, JD2453005 (Dec 31, 2003), a mildly arbitrary start point that occurred during Cassini’s dance with Saturn. For the date end, I just add the orbital period to that number. A few more columns to build up: some JavaScript-ish code I can copy and paste into my browser console to more quickly fill out the HORIZONS form with the right dates as I repeat the process for each moon. Not quite automation, more of a spreadsheet hack.
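The same start/stop arithmetic can be sketched in Python (an illustrative stand-in for the spreadsheet, not my actual pipeline; the Titan period here matches the Titan time span in the HORIZONS settings above, and the Mimas period matches the spreadsheet result below):

```python
# Start every moon's time span at the same Julian date, and end it exactly one
# orbital period later, so HORIZONS returns a single non-overlapping orbit.
# Periods are in Earth days.
START_JD = 2453005  # Dec 31, 2003, during Cassini's dance with Saturn

ORBITAL_PERIODS = {
    "Mimas": 0.9424218,
    "Titan": 15.945421,
}

def time_span(moon):
    """Return (start, stop) strings in the JD format HORIZONS accepts."""
    stop = START_JD + ORBITAL_PERIODS[moon]
    return "JD{}".format(START_JD), "JD{}".format(stop)

start, stop = time_span("Titan")
# stop lands exactly one Titan "year" after the shared start date
```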

="$(""form input[name='start_time']"").setAttribute('value',""" & I5 & """);" & "$(""form input[name='stop_time']"").setAttribute('value',""" & J5 & """);"

$("form input[name='start_time']").setAttribute('value',"JD2453005");$("form input[name='stop_time']").setAttribute('value',"JD2453005.9424218");

Here’s a spreadsheet formula for moon Mimas:And the result:

So I set the dates in the form, and specify a step size: 150, “equal intervals (unitless)”. I’m telling HORIZONS to give me 150 locations between my start and end dates. They’re awkward increments of time, but I really only care about how many steps, not human-usable rounded increments.

Lastly, I needed to specify some table settings. There’s an option in the settings called “reference plane.” It’s worth reading the documentation for this and wrapping your head around it: the default reference plane is relative to the Earth’s equator rather than the body chosen, and given that all planets have a different axial tilt, I needed to account for this if I wanted Saturn and the surrounding data points to sit parallel to the browser viewport rather than tilted with respect to it.

I chose the “CSV format” option in the table settings, and I tried to keep metadata to a minimum. This does get me CSV data, but it’s in a text file with a bunch of other non-CSV data. Classic NASA. That means I’m going to need to take a few more steps to extract what I need out of those files to convert them into something readily parsable.

Here’s a Python script I wrote to extract only the CSV data, clean up the date format, and optionally dump it out to JSON. Why am I writing Python in a big JavaScript graphics project? Basically, I’m forcing myself to get better at Python by learning through projects like this. It’s working!
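The core of that extraction can be sketched like this (a simplified, hypothetical version: it relies on the $$SOE / $$EOE markers HORIZONS uses to bracket the data rows, and assumes the vector-CSV column order of Julian date, calendar date, then X, Y, Z):

```python
import csv

def extract_csv(horizons_text):
    """Pull only the data rows out of a HORIZONS plain-text dump.
    HORIZONS brackets the ephemeris table with $$SOE / $$EOE marker lines;
    everything outside them is metadata we discard."""
    inside = False
    rows = []
    for line in horizons_text.splitlines():
        stripped = line.strip()
        if stripped == "$$SOE":
            inside = True
        elif stripped == "$$EOE":
            break
        elif inside and stripped:
            rows.append(line)
    # With "CSV format=YES" and quantities code=1, each row starts with:
    # Julian date, calendar date, X, Y, Z (a trailing comma may add a field)
    points = []
    for row in csv.reader(rows):
        points.append({
            "jd": float(row[0]),
            "date": row[1].strip(),
            "x": float(row[2]),
            "y": float(row[3]),
            "z": float(row[4]),
        })
    return points
```

From here, dumping `points` to JSON is a one-liner with `json.dumps`.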

* Sidenote: For my “coordinate origin,” it’s important to note that I could have chosen Saturn’s barycenter (planet code @6), which is the point in space that all of Saturn’s satellites orbit. In many cases this would be the right choice for visualization, but for this project, using the body center reduced the number of things I needed to model in my rendering code, for simplicity, browser performance, and meeting deadlines. But most critically, after some tests I was able to see that using the body center made no perceptible difference to the locations of the moons and Cassini’s orbit: Saturn’s center and its barycenter are simply too close together to distinguish at the graphic’s chosen scale. There’s a good intro to this concept here, showing how all the bodies are pulling on each other.

Cassini’s Hairball

Unlike the moons’ orbits, I never attempted to model Cassini’s movements mathematically. Maybe it’s possible with some combination of parabolic and hyperbolic trajectories, but I’m way out of my depth there. Also, we’re talking about up to 20 years of trajectories. Best to download the precomputed data from the trusty HORIZONS system.

The idea with visualizing Cassini’s movements was to show a curve of cumulative travel as it made its way through the Saturn system, fading out each “chapter” of the curve as we proceed toward each highlighted image it captured. This was to reduce the mess of a hairball that would result from drawing each curve on top of the rest as the mission progressed. For a continuous curve like this, I would need to connect the dots like I did for the moons’ orbits.

I knew what time span I needed according to the story we were telling: from Cassini’s entry into Saturn’s orbit in 2004 to Cassini’s demise on September 15, 2017, about 13 years of data. Figuring out what time step I needed, that is, how closely the points along its path should be spaced, took a lot of experimentation. To get a reasonably smooth curve when my 3D camera was close to Saturn, I ended up needing points every 10 minutes to prevent an angular look, or lines intersecting the planet, when Cassini was moving at its highest speeds due to its proximity to Saturn and its intense gravity.

For maximum flexibility, I tried to download 10-minute-increment data for the entire 13 years I needed with the following settings (plus a few years extra at the beginning in case I needed it):

Ephemeris Type: VECTORS
Target Body: Cassini (spacecraft) [-82]
Coordinate Origin: Saturn (body center) [500@699]
Time Span: Start=2001-Sep-16, Stop=2017-Sep-15 15:59:00.0000, Step=10 m
Table Settings: quantities code=1; labels=NO; CSV format=YES; object page=NO
Display/Output: download/save (plain text file)

This led to:

*** Horizons ERROR/Unexpected Results ***
Projected output length (~841536) exceeds 90024 line max – change step-size

Apparently HORIZONS has a limit on the amount of data you can download at once. I took a deep breath, wishing I’d written a script at this point, and repeated the process 16 times, setting my time spans to each consecutive year I was interested in. This yielded about 100MB of data and 841,536 data points across 16 files. That was way too much to expect someone to download, especially on a phone’s data plan. Rendering 841,536 data points was also going to be pretty computationally taxing on browsers, even with the might of WebGL, unless I was being really smart about everything, which I was definitely not.
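As a sanity check, that projected output length is exactly what you’d expect from sixteen years of ten-minute steps:

```python
# Sanity-checking the projected output length from the HORIZONS error:
# roughly 16 years of 10-minute steps (2001-Sep-16 through 2017-Sep-15).
days = 16 * 365.25              # 5844.0 days, leap days included
steps_per_day = 24 * 60 // 10   # 144 ten-minute steps per day
print(int(days * steps_per_day))  # → 841536
```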

The answer to this data glut was recognizing that I only needed 10-minute intervals for my Saturn closeups on big screens. For non-closeups I could reduce my data density. So I wrote a Python script to do a few things.

Take my 16 text files and extract the CSV from each, like I did for the moon orbits

Read a JSON of settings that spells out the dates corresponding to the 12 images in my interactive. This establishes time spans to split up my data. Along with the time spans I also have desired intervals. For example, when I’m showing Cassini leading up to the storm on Saturn, I want points every 80 minutes on desktop and every 120 minutes on mobile, e.g.:

{ "id": "storm", "to": "2011-02-25 08:36:00", "interval": 80, "intervalMobile": 120 },

Take those settings, split my data into groups, sample each group according to the interval rate, and output a JSON. I also convert this JSON to msgpack, which compresses it into a binary that I can decompress in the browser.

Run this script (shown below) twice to output data for desktop and mobile, mobile needing less-dense data because it will be more zoomed out.
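The heart of that script, splitting by chapter and downsampling to each chapter’s interval, can be sketched like so (structure and field names beyond the settings JSON above are hypothetical, and the msgpack conversion is left out):

```python
from datetime import datetime, timedelta

FMT = "%Y-%m-%d %H:%M:%S"

def split_and_sample(points, chapters, mobile=False):
    """Split a chronological list of {"date", "x", "y", "z"} points into
    chapters (each bounded by its "to" timestamp) and keep only one point
    per desired interval, mirroring the settings JSON format above."""
    out = {}
    i = 0
    for ch in chapters:
        end = datetime.strptime(ch["to"], FMT)
        step = timedelta(minutes=ch["intervalMobile"] if mobile else ch["interval"])
        kept = []
        next_keep = None
        while i < len(points):
            t = datetime.strptime(points[i]["date"], FMT)
            if t > end:
                break  # this point belongs to the next chapter
            if next_keep is None or t >= next_keep:
                kept.append(points[i])
                next_keep = t + step
            i += 1
        out[ch["id"]] = kept
    # In the real pipeline the result is then serialized (JSON, then msgpack)
    return out
```

Running it once with `mobile=False` and once with `mobile=True` produces the two density tiers.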

This processing took my 100MB of text files and outputted a 10MB JSON (6.5MB msgpack) for desktop, 6.3MB JSON (4.1MB msgpack) for mobile. This was a size I could live with and was a very necessary optimization to keep things loading fast and rendering smoothly.

Annotating Moons & Cassini’s Positions

All of the images you’ll see in the Grand Tour also have an official page on NASA’s site with their official release information, describing the image and what day it was taken. For example, here’s Titan’s image, described as captured on Nov. 26, 2009. Using that date, I extracted additional information from HORIZONS—positions of Titan and Tethys at the moment of that image—to add more context to the visualization in the form of annotations. But when I initially plotted these positions, there was something of a misalignment: I expected the moons’ locations with respect to Cassini to very closely match the positions of objects in the images themselves.

I chalked up this misalignment to only having single-day granularity (Nov. 26, 2009, no time) given that the objects in space move around quite a bit in the course of a day. If I could figure out when the images were captured down to the minute then things should snap into place.

It took a bit of digging around to find these exact moments. NASA has a site with the raw imagery from Cassini’s entire mission, and each image has a “Taken” and a “Received” field with a date/time down to the minute. I clicked around the images in the course of a day to find the image that matched the official NASA-processed one. One key issue: I needed to be sure if, and how, these times represented time zones and daylight savings time. I dug a little deeper and found a clue. According to the Chrome Developer Tools’ network tab, there’s a request that gets made when paginating through image results. The result of that request is a JSON array containing the metadata of the raw images, and each image has an “observe_date” and an “earth date”. Both of these are standard ISO-formatted timestamps with timezone offset information (2009-11-26T12:17:00.000-08:00). With a timezone offset, I could be sure that I was requesting the correct date from the HORIZONS system.
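With Python’s standard library, for example, a timestamp like that can be normalized to UTC unambiguously (using the Titan capture time above as the example):

```python
from datetime import datetime, timezone

# The raw-image metadata carries an explicit UTC offset, so converting to UTC
# removes any time-zone/DST guesswork before asking HORIZONS about that moment.
taken = datetime.fromisoformat("2009-11-26T12:17:00.000-08:00")
utc = taken.astimezone(timezone.utc)
print(utc.isoformat())  # → 2009-11-26T20:17:00+00:00
```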

For “vector”-type data, the HORIZONS system expects and outputs “Barycentric Dynamical Time” (TDB), which was new to me. The short of it is that it’s similar to UTC, but offset by about 32 seconds for astronomical reasons beyond my comprehension. This was an offset I could account for, but it’s also a margin of error that would be nearly imperceptible at my scale of visualization.

So I downloaded this time-aligned data for each non-Saturn object in each photo we highlighted, and visualized the positions with spheres and annotations according to page scroll. I also instructed the Cassini hairball to stop at the precise moments of capture. Sure enough, the contents of the images then perfectly matched the alignment of the annotated objects. Seeing everything align was a huge triumph.

Viz Paths Not Taken

There’s a lot more to talk about in this project than just the data considerations, but I wanted to offer a sense of the combination of data types and data-processing techniques involved, and the level of effort required to bring this data to the browser in a way that works well across devices.

We could have made this project with pre-rendered video by taking the data into a 3D graphics package like Blender or Cinema4D. It would have overcome certain WebGL rendering quirks and reduced the complexity of the data processing effort because the download size of input data wouldn’t have been a constraint. It also would have likely reduced production time by not needing custom code for camera control, or for responsive 3D graphics rescaling to name a few areas. There certainly would have been a decent way to pull this off with videos.

I went the browser-native 3D graphics route because the web is my medium, and I’ve spent a lot more time with JavaScript than with 3D animation packages, though I would love to learn them better. I also believe that working with the data and code myself, experimenting with possibilities, is more productive and a better investment in my skills than hiring out a contractor to produce a 3D video. Having now done three 3D browser-based projects in a row, I can see how much more comfortable I am figuring out 3D problems, and I’ve been able to use each project’s code as a stepping stone to the next.

Most fundamentally, the best way to ensure that the graphics were a first-class citizen to the presentation—that text was maximally legible, and that 3D renders filled screens across devices—was to do a browser-native rendering. So that’s what I did.