NVIDIA currently owns a dominant position in the deep learning and AI training market. Indeed, earlier today NVIDIA showed its training prowess in NVIDIA MLPerf v0.5 Results Showing DGX-2H Power. Beyond training models, the real value is in deploying trained models to make machines and devices intelligent. Today, the NVIDIA Jetson AGX Xavier module is launching alongside an upgraded Jetson TX2 to help companies build smart robots and devices to serve a number of emerging roles.

NVIDIA Jetson AGX Xavier Module

The big news of the day is that the new NVIDIA Jetson AGX Xavier module is out. You may have seen NVIDIA Isaac Xavier Launched to Spur Robotics Revolution, which included the Xavier module along with software packaged in a solution for the robotics industry. The Jetson AGX Xavier platform was launched at GTC Japan in September 2018 specifically for the robotics industry. The next stage is making the module itself available so that robotics companies can integrate it into their own robot designs.

For a more complete overview of the SoC, see our NVIDIA Xavier SoC overview. The module is available for $1,099 in 1,000-unit quantities. While previous announcements were for developer kits, today's announcement is for the modules that can go into production machines.

NVIDIA Jetson Growth with Xavier

At today’s event, the company also talked about the growth of NVIDIA Jetson. As you can see, starting from a small number allows growth rates to look great. NVIDIA’s message is clear: the ecosystem is growing around Xavier as companies look to deploy the AI models they have trained.

Much of this is because NVIDIA knows the next frontier is putting trained AI models into every device. NVIDIA has been steadily making progress on the software side for years while also significantly upgrading its modules.

You can read our NVIDIA Jetson TX2 Development Kit: Seven Tips for Getting Started guide, although a lot has changed because of the updates NVIDIA has shipped over the past few quarters.

Here is the quick overview slide for the Xavier SoC. Again, check out our NVIDIA Xavier SoC overview for a bit more on the SoC.

One of the biggest changes from the previous Jetson TX2 generation to the Xavier generation is that Xavier has vastly higher AI inferencing performance. When the original Jetson TX1 and TX2 chips were built, they were designed for everything from tablets to robots. With the Xavier generation, NVIDIA has the processing power to enable inferencing across a number of video and sensor arrays.

From an impact perspective, this is an indicative chart. Key to the chart is that Xavier changes the efficiency story, making a standard PC with a standard PCIe GPU less attractive for inferencing in an autonomous robot. In the Jetson TX2 era, one could still get better performance and efficiency by using what is essentially a desktop PC. With Xavier, there are enough onboard accelerators to flip that story.

For those who are looking for the new NVIDIA Jetson AGX Xavier module, here is the product overview:

These are now in volume production, and the parts can be configured to run in 10W, 15W, or 30W modes, so you can tune the modules to conserve power for battery applications or use more power for higher performance.
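On the software side, these power modes are selected with NVIDIA's nvpmodel utility that ships with the JetPack/L4T image on the module. A minimal sketch of switching modes follows; note that the numeric mode ID shown is an assumption, since the ID-to-wattage mapping varies by board and software release, so verify against the query output and /etc/nvpmodel.conf on your own image.

```shell
# Query the currently active power mode on the module
sudo nvpmodel -q

# Switch to a power budget (e.g., a 15W profile). The mode ID "2" is
# an assumption for illustration -- check /etc/nvpmodel.conf for the
# actual IDs defined on your JetPack/L4T release.
sudo nvpmodel -m 2

# Optionally pin clocks to the maximum allowed within the active mode
sudo jetson_clocks
```

A lower-wattage mode caps clocks and active cores to hit the power budget, which is why battery-powered robots would pick 10W or 15W while wall-powered designs can run the full 30W profile.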

The 2018 Jetson Family Overview

Ending the year, the NVIDIA Jetson family grew by two models. Although the main thrust of today’s announcement was the NVIDIA Jetson AGX Xavier, another model was added as well. The NVIDIA Jetson TX2 4GB is a lower-memory-footprint module for those who need a lower-cost product.

Final Words

This is where the industry is going. NVIDIA is pushing its edge inferencing solutions, while Intel showed off its new VNNI inferencing instructions to help deploy models across its chip portfolio. Moving Xavier from a developer kit platform to a volume-production module is a big deal since it means companies can start shipping products with the newest chips.