Werner Vogels, CTO of Amazon, announced yesterday that the MXNet deep learning framework would become "[Amazon's] deep learning framework of choice."

Choosing MXNet might come as a surprise to some, given the number of other frameworks -- TensorFlow, Theano, Torch, or Caffe, to name a few -- with far broader name recognition. Amazon claims it chose MXNet because it scales and runs better than almost anything else out there, but other motives may be at work, too.

Travels light, works hard, scales well

MXNet caught InfoWorld's attention earlier this year as one of the Open Source Rookies of the Year 2016. Among its notable attributes are its compact size and cross-platform portability, both of which Vogels praised: "The core library (with all dependencies) fits into a single C++ source file and can be compiled for both Android and iOS." Developers can also use a wide variety of languages with the framework -- "Python, C++, R, Scala, Julia, Matlab, and JavaScript," as cited by Vogels.

But Amazon was likely most attracted to MXNet's scalability. Vogels published benchmarks of MXNet's training throughput on the Inception v3 image analysis algorithm and claimed a near-linear speedup when running it across multiple GPUs. On 128 GPUs, MXNet ran 109 times faster than on a single GPU.
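As a back-of-envelope check, the claimed 109x speedup on 128 GPUs works out to roughly 85 percent parallel efficiency (measured speedup divided by the ideal, perfectly linear speedup). A quick sketch of that arithmetic, using only the figures from Vogels' post:

```python
# Figures from Vogels' benchmark post: 109x speedup on 128 GPUs
# versus a single GPU.
gpus = 128
measured_speedup = 109

# Parallel efficiency = measured speedup / ideal (linear) speedup.
efficiency = measured_speedup / gpus
print(f"Scaling efficiency across {gpus} GPUs: {efficiency:.1%}")
# prints "Scaling efficiency across 128 GPUs: 85.2%"
```

An efficiency in the mid-80s at 128 GPUs is what supports the "highly linear" characterization; perfectly linear scaling would be 100 percent.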

MXNaaS?

It's safe to assume Amazon's long-term plans for MXNet include monetizing it as a cloud service. That doesn't have to happen through Amazon's existing machine learning service; it could come via an officially supported machine image like the Deep Learning AMI Amazon already sells. The former would suit those who want an easily consumed product; the latter, those who want total hands-on control.

Amazon also wants to become a major sponsor of MXNet's development. Vogels stated that Amazon will "contribute code and improved documentation as well as invest in the ecosystem around MXNet," and "partner with other organizations to further advance MXNet."

This plan includes another possibility: creating custom hardware specifically designed to run MXNet at scale, providing a service not found anywhere else. In theory this could be done without significant changes to MXNet, although Amazon could build an in-house version with enhancements tightly coupled to its own hardware.

It's not as if the publicly available open source version would magically lose its value if this happened. But cloud vendors recognize the importance of offering an at-scale option beyond a regular user's practical reach. As it is, MXNet fits elegantly into the machine-learning-as-a-service offerings Amazon has already brought to the table.