On stage at the company's GPU Technology Conference (GTC) today, CEO Jensen Huang described self-driving as "probably the hardest computing technology we've ever encountered." But, after the Uber accident, he says he was reminded just how important this work is. "We have to solve it step by step by step," Huang said. "We're dedicating ourselves to this problem. The grandest of computer problems."

On one end of the Constellation system is a server running NVIDIA's Drive Sim software. As you'd probably guess, it simulates all of the technology you'd find on a self-driving car, including its sensors: cameras, radar and lidar (light detection and ranging). It's powered by the company's GPUs, each of which creates its own stream of sensor data. The simulation server can also render "photoreal data streams" to reflect all sorts of driving conditions, like a fierce blizzard or glare during a sunset.

Another server contains the company's Drive Pegasus AI car computer, which runs NVIDIA's complete autonomous vehicle software stack and processes the incoming sensor data. The Pegasus server sends its responses back to the simulation machine for validation. That feedback loop occurs 30 times a second, according to NVIDIA.
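To make that loop concrete, here's a minimal sketch of a 30 Hz hardware-in-the-loop cycle in Python. Every function name and data field below is a hypothetical stand-in; NVIDIA hasn't published the actual Drive Sim or Pegasus interfaces.

```python
import math

SIM_RATE_HZ = 30          # NVIDIA says the loop runs 30 times a second
DT = 1.0 / SIM_RATE_HZ    # simulated time step per tick

def simulate_sensors(t):
    """Stand-in for the Drive Sim server: emit one synthetic
    sensor frame per tick (fields are made up for illustration)."""
    return {"time": t, "lane_offset_m": 0.5 * math.sin(t)}

def drive_stack(frame):
    """Stand-in for the Pegasus side: turn sensor data into a
    driving command (a toy proportional steering correction)."""
    return {"steer": -frame["lane_offset_m"]}

def validate(frame, command):
    """Stand-in for the simulator's validation step: check the
    command against the simulated ground truth."""
    return abs(frame["lane_offset_m"] + command["steer"]) < 1e-9

def run_loop(seconds=1):
    results = []
    for i in range(int(seconds * SIM_RATE_HZ)):
        frame = simulate_sensors(i * DT)          # server 1: render sensors
        command = drive_stack(frame)              # server 2: process, respond
        results.append(validate(frame, command))  # server 1: validate
    return results

results = run_loop(seconds=1)
print(len(results), all(results))  # 30 ticks per simulated second
```

The point of the structure is that the simulator both produces the inputs and grades the outputs, so the whole test runs without a physical car.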

Using this dual-server setup, automakers will be able to construct all sorts of extreme scenarios and see how their self-driving algorithms react. The obvious drawback is that it's tough to simulate every potential issue, but it's still better than relying entirely on real-world testing, especially since simulated tests can be run millions of times per day.
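The way those scenario counts add up can be sketched as a parameter sweep. The axes below are hypothetical examples, not NVIDIA's actual scenario catalog:

```python
from itertools import product

# Hypothetical scenario axes; the real Drive Sim parameters are not public.
WEATHER = ["clear", "blizzard", "heavy_rain"]
TIME_OF_DAY = ["noon", "sunset_glare", "night"]
TRAFFIC = ["light", "dense", "stalled_vehicle"]

def run_scenario(weather, time_of_day, traffic):
    """Stand-in for one full simulated run under these conditions."""
    return {"weather": weather, "time_of_day": time_of_day,
            "traffic": traffic, "passed": True}

# Every combination of conditions becomes its own test run.
runs = [run_scenario(w, t, tr)
        for w, t, tr in product(WEATHER, TIME_OF_DAY, TRAFFIC)]
print(len(runs))  # 27 runs from just three 3-value axes
```

Each added axis multiplies the count, which is how a scenario library scales into millions of daily runs.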

Like any machine-learning system, NVIDIA's self-driving technology should improve with every simulated mile of training data. At CES, the company unveiled its Xavier system-on-a-chip, which is designed to let other companies build their own autonomous vehicles more quickly. NVIDIA says early access partners will receive the Drive Constellation platform in the third quarter of this year.