Microsoft Releases NNI v1.3 for AutoML Algorithms and Training
Synced · Jan 7

Applying traditional machine learning methods to real world problems can be extremely time consuming. Automated machine learning (AutoML) aims to change that — making it easier to build and use ML models by running systematic processes on raw data and selecting models that pull the most relevant information from the data.

To help users design and tune machine learning models, neural network architectures, or complex system parameters in an efficient and automated way, Microsoft Research began developing its Neural Network Intelligence (NNI) AutoML toolkit in 2017, open-sourcing v1.0 in 2018.

NNI is a “lightweight but powerful” toolkit that can dispatch and run trial jobs generated by tuning algorithms to search for the best neural architecture and hyperparameters in environments such as local machines, remote servers, OpenPAI, Kubeflow, FrameworkController on K8S (AKS, etc.), and other cloud options.

Microsoft recently released NNI v1.3 as well as a Chinese NNI version. The update provides more comprehensive support for the whole machine learning life cycle by applying AutoML algorithms to steps such as feature engineering, neural network architecture search (NAS), hyperparameter tuning, and model compression.

Microsoft recommends NNI for anyone who wants to try different AutoML algorithms on their training code or models, or to run AutoML trial jobs in different environments to speed up search. The toolkit will also be appreciated by researchers and data scientists who want to easily implement and experiment with new AutoML algorithms, as well as ML platform owners who want to support AutoML on their platforms.

NNI’s GitHub page outlines the properties that make the toolkit so useful:

Easy-to-use: NNI can be easily installed through Python pip, and only a few lines need to be added to existing code in order to use NNI’s power. Users can work with their experiments through both the command line tool and the WebUI.

Scalability: Tuning hyperparameters or neural architectures often demands a large amount of computation resources, and NNI is designed to fully leverage different computation resources, such as remote machines and training platforms. Hundreds of trials can run in parallel, depending on the capacity of the configured training platforms.

Flexibility: Besides rich built-in algorithms, NNI allows users to customize hyperparameter tuning algorithms, neural architecture search algorithms, early stopping algorithms, etc. Users can also extend NNI with more training platforms, such as virtual machines or a Kubernetes service in the cloud. Moreover, NNI can connect to external environments to tune special applications and models on them.

Efficiency: The NNI team is constantly working on more efficient model tuning at both the system level and the algorithm level, for example by leveraging early feedback to speed up the tuning procedure.

High-level NNI architecture

A basic NNI experiment starts when the tuner receives the search space and generates configurations. These configurations are submitted to training platforms, and their performance is reported back to the tuner so that new configurations can be generated and submitted. For each experiment, users follow an easy three-step process: define the search space, update the model code, and define the experiment.
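The tuner loop described above can be sketched in plain Python. This is a self-contained mock, not the NNI API: the dictionary-based search space mirrors the spirit of NNI’s JSON search-space format, but `sample` and `trial` are hypothetical stand-ins for NNI’s tuner and the user’s trial code.

```python
import math
import random

# Search space in the spirit of NNI's JSON format (illustrative only)
search_space = {
    "lr": {"_type": "loguniform", "_value": [1e-4, 1e-1]},
    "batch_size": {"_type": "choice", "_value": [16, 32, 64]},
}

def sample(space):
    """Mock tuner: draw one configuration from the search space."""
    config = {}
    for name, spec in space.items():
        if spec["_type"] == "choice":
            config[name] = random.choice(spec["_value"])
        elif spec["_type"] == "loguniform":
            lo, hi = spec["_value"]
            config[name] = 10 ** random.uniform(math.log10(lo), math.log10(hi))
    return config

def trial(config):
    """Mock trial: stand-in for the user's model training; returns a score."""
    return 1.0 / (1.0 + abs(config["lr"] - 0.01))

best = None
for _ in range(10):
    cfg = sample(search_space)           # tuner generates a configuration
    score = trial(cfg)                   # training platform runs the trial
    if best is None or score > best[1]:  # result is reported back to the tuner
        best = (cfg, score)
```

In a real NNI experiment, the sampling and reporting are handled by calls such as `nni.get_next_parameter()` and `nni.report_final_result()` inside the trial code, while the tuner itself runs as part of the NNI framework.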

In terms of capabilities, NNI provides both a command line tool and a user-friendly WebUI for managing training experiments. With the extensible API, users can customize their own AutoML algorithms and training services.
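As a rough illustration of the third step (defining the experiment), a minimal configuration in the style of NNI v1.x’s YAML schema might look like the sketch below; `main.py` and the search-space filename are placeholders, and exact field names should be checked against the NNI documentation for your version.

```yaml
# config.yml (sketch; field names follow NNI v1.x conventions)
experimentName: example_experiment
trialConcurrency: 2          # trials to run in parallel
maxTrialNum: 10
trainingServicePlatform: local
searchSpacePath: search_space.json
useAnnotation: false
tuner:
  builtinTunerName: TPE      # one of NNI's built-in tuners
  classArgs:
    optimize_mode: maximize
trial:
  command: python3 main.py   # the user's trial script
  codeDir: .
  gpuNum: 0
```

The experiment is then launched from the command line tool, e.g. `nnictl create --config config.yml`, and monitored through the WebUI.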

NNI also provides a set of built-in SOTA AutoML algorithms and out-of-the-box support for popular training platforms. The team is still adding new capabilities and welcomes outside contributions.

Current NNI capabilities

NNI v1.3 is compatible with the latest versions of Linux, macOS, and Windows. It also natively supports hyperparameter tuning and neural architecture search for AI frameworks including PyTorch, Keras, TensorFlow, MXNet, and Caffe2, as well as libraries such as scikit-learn, XGBoost, and LightGBM.

The open-sourced Neural Network Intelligence v1.3 is available for download on GitHub.