At Microsoft our commitment is to make AI more accessible and valuable for everyone. We offer a variety of platforms and tools to facilitate this, including our Cognitive Toolkit, an open source framework for building deep neural networks. We also work with other organizations that share our views to help the AI community.

Today we are excited to announce the Open Neural Network Exchange (ONNX) format in conjunction with Facebook. ONNX provides a shared model representation for interoperability and innovation in the AI framework ecosystem. Cognitive Toolkit, Caffe2, and PyTorch will all be supporting ONNX. Microsoft and Facebook co-developed ONNX as an open source project, and we hope the community will help us evolve it.

What is the ONNX representation?

Cognitive Toolkit and other frameworks provide interfaces that make it easier for developers to construct and run computation graphs that represent neural networks. Though they provide similar capabilities, each framework today has its own format for representing these graphs. The ONNX representation provides the following key benefits:

Framework interoperability

Developers can more easily move between frameworks and use the best tool for the task at hand. Each framework is optimized for specific characteristics such as fast training, support for flexible network architectures, or inferencing on mobile devices. Often, the characteristic that matters most during research and development differs from the one that matters most for shipping to production. This leads to inefficiencies from not using the right framework, or to significant delays as developers convert models between frameworks. Frameworks that use the ONNX representation simplify this and enable developers to be more agile.

Shared optimization

Hardware vendors and others with optimizations for improving the performance of neural networks can impact multiple frameworks at once by targeting the ONNX representation. Frequently, optimizations must be integrated separately into each framework, which can be a time-consuming process. The ONNX representation makes it easier for optimizations to reach more developers.

Technical summary

ONNX provides a definition of an extensible computation graph model, as well as definitions of built-in operators and standard data types. Initially we focus on the capabilities needed for inferencing (evaluation).

Each computation dataflow graph is structured as a list of nodes that form an acyclic graph. Nodes have one or more inputs and one or more outputs. Each node is a call to an operator. The graph also has metadata to help document its purpose, author, etc.
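To make the structure above concrete, here is a minimal sketch in plain Python. It is purely illustrative and not the actual ONNX protobuf schema: the class and field names (`Node`, `Graph`, `graph_inputs`, `metadata`) are our own for this example. It models a graph as an ordered list of operator-call nodes and checks that the ordering is topologically valid, i.e. every node input is either a graph input or the output of an earlier node.

```python
# Illustrative sketch only -- not the ONNX wire format or API.
from dataclasses import dataclass, field

@dataclass
class Node:
    op_type: str      # which operator this node calls
    inputs: list      # names of the values it consumes
    outputs: list     # names of the values it produces

@dataclass
class Graph:
    nodes: list
    graph_inputs: list                               # externally supplied values
    metadata: dict = field(default_factory=dict)     # purpose, author, etc.

    def check(self):
        """Verify each node's inputs are graph inputs or earlier outputs."""
        known = set(self.graph_inputs)
        for node in self.nodes:
            for name in node.inputs:
                if name not in known:
                    raise ValueError(f"{name!r} used before it is produced")
            known.update(node.outputs)

g = Graph(
    nodes=[
        Node("Conv", ["x", "w"], ["c"]),
        Node("Relu", ["c"], ["y"]),
    ],
    graph_inputs=["x", "w"],
    metadata={"doc_string": "tiny conv + relu example"},
)
g.check()  # passes: the node list is in a valid topological order
```

Because the node list itself carries the dataflow, a runtime can execute it in order without any separate scheduling step, and the same serialized list can be consumed by any framework that understands the operator set.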

Operators are implemented externally to the graph, but the set of built-in operators is portable across frameworks. Every framework supporting ONNX will provide implementations of these operators on the applicable data types.
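The separation between the graph and the operator implementations can be sketched as a registry keyed by operator name. This is a hypothetical illustration, not ONNX code: each framework would supply its own registry (here, toy scalar functions stand in), and any runtime that implements the shared operator set can execute the same node list.

```python
# Hypothetical sketch: a framework's operator registry. Real implementations
# would operate on tensors; scalars keep the example self-contained.
registry = {
    "Add": lambda a, b: a + b,
    "Mul": lambda a, b: a * b,
    "Relu": lambda a: max(a, 0.0),
}

def run(nodes, values):
    """Execute a topologically ordered node list against the registry.

    nodes  -- list of (op_type, input_names, output_names) triples
    values -- dict mapping value names to graph inputs; filled in as we go
    """
    for op_type, input_names, output_names in nodes:
        result = registry[op_type](*(values[n] for n in input_names))
        values[output_names[0]] = result
    return values

out = run(
    [("Mul", ["x", "w"], ["h"]), ("Relu", ["h"], ["y"])],
    {"x": -2.0, "w": 3.0},
)
# out["y"] is 0.0: -2.0 * 3.0 = -6.0, then Relu clips it to zero
```

Swapping in a different registry (say, GPU kernels) changes nothing about the graph itself, which is the portability property the shared operator set is meant to provide.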

Availability

The initial version of the ONNX code and documentation is available now as open source on GitHub, as a starting point for the community to get involved right away. We will be actively working on ONNX, and an upcoming release of Cognitive Toolkit will include support for it. In conjunction with Facebook, we also plan to contribute reference implementations, examples, tools, and a model zoo.

The ONNX representation forms the basis of an open ecosystem that makes AI more accessible and valuable. Developers can choose the right framework for their task, framework authors can focus on innovative enhancements, and hardware vendors can streamline optimizations. We hope the community will support ONNX to realize this exciting vision.