The team working on Keras 2 with the MXNet backend recently announced their latest release, along with tutorials and benchmarks. Most notably, if you’re already using Keras to train convolutional neural networks, you can get a 2x or better improvement in training speed. You can also scale training across multiple GPUs with ease.

Trying out the new backend takes only a minute. First, install keras-mxnet:

pip install keras-mxnet

If you’re using GPUs, install MXNet with CUDA 9 support:

pip install mxnet-cu90

If you’re running on CPU only, install the basic MXNet package:

pip install mxnet

If you’re already using Keras, then change Keras’s backend setting to mxnet. Otherwise, you’re good to go after the pip installations.
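The backend lives in Keras’s `keras.json` config file (typically `~/.keras/keras.json`). After switching, the relevant fields should look something like this — the `image_data_format` value is the channels-first layout the keras-mxnet docs recommend for best performance, so check your own file against the version you install:

```json
{
    "backend": "mxnet",
    "image_data_format": "channels_first"
}
```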

Then train your models with the MXNet backend and witness the speed increase! The Keras examples work out-of-the-box. To test out training at scale, run the CIFAR-10 multi-GPU script. Usage of this script is covered in the AWS blog post’s CNN tutorial. The script expects four GPUs, but can be updated to match the number of GPUs you’re running.
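The multi-GPU API is the standard Keras one. A minimal sketch of what the CIFAR-10 script does — the toy model here is illustrative, and it assumes keras-mxnet plus `mxnet-cu90` are installed on a four-GPU machine:

```python
# Sketch: data-parallel training across 4 GPUs with keras-mxnet.
# Assumes keras-mxnet and mxnet-cu90 are installed and 4 GPUs are available;
# the tiny model below is a placeholder, not the script's actual network.
from keras.models import Sequential
from keras.layers import Conv2D, Flatten, Dense
from keras.utils import multi_gpu_model

# Channels-first input shape (3, 32, 32) matches the recommended
# "image_data_format": "channels_first" setting for the MXNet backend.
model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(3, 32, 32)),
    Flatten(),
    Dense(10, activation='softmax'),
])

# Replicate the model on each GPU; every batch is split across replicas.
parallel_model = multi_gpu_model(model, gpus=4)
parallel_model.compile(loss='categorical_crossentropy', optimizer='adam',
                       metrics=['accuracy'])
# parallel_model.fit(x_train, y_train, batch_size=256, epochs=10)
```

A larger `batch_size` than single-GPU training is typical here, since the batch is divided evenly among the replicas.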

The announcement also talks about flexibility:

“You can design in Keras, train with Keras-MXNet, and run inference in production, at-scale with MXNet.”

An example of how to do this is in the project repo, where you can prototype a neural network using Keras, then export it to benefit from MXNet’s scalable inference performance.
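The rough shape of that workflow is sketched below. The `save_mxnet_model` helper is a keras-mxnet addition, so verify its exact signature against the export tutorial in the repo for the version you install:

```python
# Sketch: train with Keras-MXNet, then export for native MXNet inference.
# save_mxnet_model is specific to keras-mxnet; check the repo's export
# tutorial for the exact signature in your installed version.
from keras.models import save_mxnet_model
import mxnet as mx

# ... build and train `model` with the MXNet backend ...
data_names, data_shapes = save_mxnet_model(model=model, prefix='my_model')

# This writes my_model-symbol.json and my_model-0000.params, which
# native MXNet loads for inference without any Keras dependency:
sym, arg_params, aux_params = mx.model.load_checkpoint('my_model', 0)
mod = mx.mod.Module(symbol=sym, data_names=data_names, context=mx.cpu())
```

The exported symbol/params pair is the same checkpoint format MXNet uses everywhere, so the model can be served at scale independently of the training environment.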

If you’re a Keras user and you like where this is going, join the project, provide feedback, or pitch in on a feature you want to see. As an open source project, these great features are free to use, and are influenced and improved by the open source community’s involvement. There are calls for contribution to enhance RNN support, which is currently experimental. Also, make sure you follow Apache MXNet to stay posted on new features, like details on how you can use MXNet Model Server to serve your Keras-MXNet models!