No wow moments, no bells, and no whistles. Jensen Huang has delivered some groundbreaking keynote speeches in his years at the helm of NVIDIA, but today's was not among them. When the US chip giant's co-founder and CEO took to the stage at San Jose State University to kick off the company's annual GPU Technology Conference (GTC), he did not announce a new graphics card, nor did he unveil a rumoured (and long-awaited) new 7nm GPU architecture.

Huang did, however, have something up his sleeve for AI developers and data scientists: CUDA-X AI, an end-to-end platform that combines all NVIDIA libraries into one bundle to streamline and accelerate data science workflows by as much as 50 times.

Since its founding in 1993, NVIDIA has built up various libraries and tools to help data scientists more quickly train and deploy AI models using GPUs. For example, cuDNN is a GPU-accelerated library of primitives for deep neural networks, and TensorRT is a GPU-accelerated neural network inference library for building deep learning applications. There are also countless tools outside NVIDIA's infrastructure that researchers can use to speed up AI workflows, such as the TensorFlow machine learning library and Amazon Web Services' SageMaker model deployment tool.



CUDA-X AI is designed to pack dozens of NVIDIA GPU-acceleration libraries, ranging from data processing to model implementation, into a one-stop shop. The idea is to reduce friction between different steps in the workflow and maintain consistency throughout the evolving AI development process.

Huang even coined a term for this innovation: Programmable Acceleration of multiple Domains with one Architecture, or PRADA.



“Wherever in the stack you want to code, that’s great; you want to use domain-specific libraries, or AI frameworks and software packages, it’s all good for us,” VP and General Manager of NVIDIA Accelerated Computing Ian Buck told Synced.



A key component of CUDA-X AI is RAPIDS, a GPU-acceleration platform for data science and machine learning that enables end-to-end data science and analytics pipelines to run entirely on GPUs. Incubated by NVIDIA for years, RAPIDS features low-level compute optimization, GPU parallelism and high-bandwidth memory speed.
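To make "pipelines running entirely on GPUs" concrete: RAPIDS exposes its dataframe library, cuDF, through a pandas-style API, so an existing CPU pipeline can often move to the GPU with little more than an import change. The sketch below uses pandas itself (with hypothetical toy data) so it runs without a GPU; on a machine with RAPIDS installed, swapping the import for `import cudf as pd` would run the same calls on the GPU, since cuDF mirrors a large subset of the pandas API.

```python
# Sketch of a RAPIDS-style dataframe pipeline (toy data is illustrative only).
# On a RAPIDS install, replacing this line with `import cudf as pd`
# would execute the same pandas-style calls on the GPU.
import pandas as pd

# Toy sales table standing in for a real analytics workload
df = pd.DataFrame({
    "store": ["A", "A", "B", "B", "B"],
    "units": [10, 15, 7, 3, 5],
})

# A typical ETL/analytics step: aggregate units sold per store
totals = df.groupby("store")["units"].sum()
print(totals.to_dict())  # {'A': 25, 'B': 15}
```

This API mirroring is the point of the "end-to-end" claim: data loading, transformation and aggregation can all stay on the GPU rather than bouncing between CPU and GPU memory at each step.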



Microsoft has shown a keen interest in RAPIDS. Also announced today was Microsoft Azure Cloud Service’s adoption of NVIDIA RAPIDS. The advantage is obvious, as Microsoft claims an impressive 20x speed-up in model training using four NVIDIA GPUs and RAPIDS compared to traditional CPU solutions. Another early adopter is Walmart, which uses RAPIDS to improve the accuracy of its forecasts.



CUDA-X AI supports major deep learning frameworks such as TensorFlow, PyTorch and MXNet, and will be integrated into all the data science workstations and NVIDIA T4 servers announced at GTC today.



NVIDIA GTC 2019 runs through Thursday, March 21, in Silicon Valley.