What’s New: Intel’s nGraph Compiler, a framework-neutral deep neural network (DNN) model compiler, is now open source. Data scientists working with TensorFlow on Intel® Xeon® Scalable processors can use Intel’s newly released simplified bridge code to deliver up to 10x performance improvements over previous TensorFlow integrations.

“Finding the right technology for AI solutions can be daunting for companies, and it’s our goal to make it as easy as possible. With the nGraph Compiler, data scientists can create deep learning models without having to think about how that model needs to be adjusted across different frameworks, and its open source nature means getting access to the tools they need, quickly and easily.” – Arjun Bansal, VP, AI Software, Intel

Why It’s Important: The nGraph Compiler supports multiple deep learning frameworks while optimizing models for multiple hardware targets. It is the latest addition to Intel’s artificial intelligence (AI) portfolio, a lineup of the technologies AI demands to move from theory to real-world success. The nGraph Compiler gives data scientists freedom of choice in both frameworks and hardware. It lets framework owners add unique features with far less work, allows cloud service providers to address a larger market more easily, and helps enterprises maintain a consistent experience across frameworks and back ends, all without losing performance.

Currently, the nGraph Compiler supports three deep learning compute devices and six deep learning frameworks: TensorFlow, MXNet, neon, PyTorch, CNTK, and Caffe2. Intel will continue to add frameworks and devices in the coming months.

Learn More: Visit with Intel at Intel AI DevCon in San Francisco on May 23 and 24.

More Context: nGraph: A New Open Source Compiler for Deep Learning Systems (Blog) | High-performance TensorFlow on Intel Xeon using nGraph (Blog)