We’re not far away from a new release of Windows 10, and with it plenty of new APIs for your applications. One big change is support for running trained machine learning models as part of Windows applications, taking advantage of local GPUs to accelerate machine learning applications.

Building a machine learning application can be a complex process. Training a model can require a lot of data, and a considerable amount of processing power. That’s fine if you’ve got access to a cloud platform and lots of bandwidth, but what if you want to take an existing model from GitHub and run it on a PC?

Trained machine learning models are an ideal tool for bringing the benefits of neural networks and deep learning to your applications. All you should need to do is hook up the appropriate interfaces, and they should run as part of your code. But with so many machine learning frameworks and platforms in use, there's a need for a common runtime that can host any of those models. That's where the new Windows machine learning tools come into play, offering Windows developers a platform for running existing machine learning models in their applications, taking advantage of a developing open standard for exchanging machine learning models.

Introducing ONNX for Windows

Microsoft is supporting ONNX, the Open Neural Network Exchange format, an open standard for sharing deep learning models between platforms and services. With ONNX you can build and train neural networks on cloud platforms or on dedicated machine learning systems, using well-known algorithms. Once trained, models can run on another framework, or with Windows machine learning as part of an application on a PC or even an IoT device.

ONNX currently supports a range of machine learning frameworks, including Facebook's popular Caffe2, the Python-based PyTorch, and Microsoft's own Cognitive Toolkit (formerly named CNTK). While there's no direct support for Google's TensorFlow, you can find unofficial converters that let you export models as ONNX, with an official import/export tool currently under development. ONNX also offers its own runtimes and libraries, so your models can run on your hardware and take advantage of any accelerators you have.

Converting machine learning models with ONNX

If you’re using Microsoft Cognitive Toolkit on Azure to build and run machine learning models, you should already have access to ONNX, because it’s built into every version after 2.3.1, and it’ll soon be part of the Visual Studio Tools for AI. All you need to do to export a model is choose ONNX as your export format, something you can do with a single line of Python or C# code. You don’t need to write the export statement into your training application; it can instead be a call from an Azure Notebook or a statement in an Azure machine learning pipeline.

Microsoft also provides its own tools to convert models from some other formats, like Apple’s Core ML, to ONNX. Microsoft’s WinMLTools is a Python package, so you can use it with existing machine learning tools, like Anaconda. By integrating with familiar machine learning development environments, it lets teams mix cross-platform data science skills with Windows development, using the same machine learning models that now run on popular mobile platforms as well as on Windows.

It’s not all plain sailing: There can be issues when working with Core ML models that deliver image outputs, because ONNX doesn’t directly support image types. However, Microsoft’s WinMLTools documentation walks you through the process of mapping an ONNX output tensor to a collection of image files.
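For reference, the conversion itself follows the pattern in Microsoft's WinMLTools documentation. In this sketch the model file names are placeholders, and it assumes the winmltools and coremltools packages are installed; you'd run it against a real Core ML model file.

```python
# Sketch of a Core ML-to-ONNX conversion with WinMLTools, following the
# documented pattern. "ExampleModel.mlmodel" is a placeholder path, and the
# winmltools and coremltools packages are assumed to be installed.
from coremltools.models.utils import load_spec
from winmltools import convert_coreml
from winmltools.utils import save_model

model_coreml = load_spec("ExampleModel.mlmodel")        # the Core ML source
model_onnx = convert_coreml(model_coreml, name="ExampleModel")
save_model(model_onnx, "ExampleModel.onnx")             # ready for Windows ML
```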

Adding machine learning to Windows apps

Once exported, a model can run on Windows’ new machine learning platform. Imported models are ready to go and can work with new data without needing any additional training. Using one for image recognition, say, can give field workers new tools for identifying problems and finding solutions using their laptop’s or 2-in-1’s camera. Where there are bandwidth issues, local machine learning algorithms can run over data downloaded from sensors or other monitoring tools, perhaps providing diagnostic tools for automotive repairs from on-vehicle sensors.

Building on the GPU compute support in DirectX 12, Windows machine learning will run on most modern PC hardware. If a GPU isn’t available, it can fall back to CPU resources, though this loses much of the performance gain you get from using a GPU as a parallel processor. There’s also the option of working with the AI core in Qualcomm ARM devices, and with Intel’s plug-in AI coprocessors.

Support for Windows machine learning is currently available in preview releases of Visual Studio 15.7 and in the Windows Insider release of the Windows 10 SDK. Writing UWP code to handle machine learning is easy enough: you import the ONNX model into a Visual Studio project, which then automatically generates the appropriate interfaces and classes, ready for you to integrate into your application. If you’re using C++ or C#, mlgen, a tool in the Windows SDK, will generate wrapper classes for you.

Machine learning on the edge

Windows machine learning isn’t only for the traditional PC. As you move cloud applications to what Microsoft CEO Satya Nadella calls “the intelligent edge,” you’re moving them to many classes of device. Some will be full-size servers running in cell towers, some will be hub devices with similar specifications to branch office servers or desktop PCs, and some will be the IoT devices themselves, using embedded processors.

AMD has recently launched embedded versions of its Epyc and Ryzen processors. Like the desktop and mobile Ryzen, the embedded version runs code on its built-in GPU. At the UK launch event, one developer demonstrated a camera that used an embedded Ryzen to scan bottles of beer for inclusions, determining whether a batch should be rejected. The embedded system’s machine learning model was trained on more powerful systems, with a significant amount of image data. Once trained, it was exported and used to analyze the image stream from the camera.

The app itself was running on an embedded Linux, but it’s not hard to see a future embedded release of Windows, like Windows 10 IoT Enterprise, offering similar machine learning features on embedded Ryzen hardware. There’s also the option of support for ARM’s Project Trillium embedded machine learning processors, embedding hardware that can run ARM’s own neural networking software in low-power chipsets.

Bringing machine learning out of the cloud is an important step; it removes latency from applications that need to respond quickly and reduces the amount of data sent from remote sites to cloud servers. With Windows machine learning, you can use these techniques to quickly add complex algorithms to your own code, making your software more capable and more useful.