The life of machine learning developers gets easier with every passing week as AI leaders such as Facebook, Google, and Amazon keep releasing their tools to the public. Last week saw several significant releases, and we bring you the hottest of them in this article:

Facebook AI, AWS Collaborated To Release New PyTorch Libraries

Looking for an easy way to deploy your PyTorch models to production without writing custom code? Now you can with TorchServe, a PyTorch model serving library developed jointly by AWS & Facebook! https://t.co/su4BDFP0B7 — Amazon Web Services (@awscloud) April 21, 2020

The collaboration between PyTorch and AWS was the major announcement last week. The two have partnered to develop new libraries for large-scale, elastic, fault-tolerant model training and high-performance PyTorch model deployment.

These libraries enable the community to efficiently productionize AI models at scale and push the state of the art in model exploration as architectures continue to grow in size and complexity:

TorchServe: an easy-to-use, open-source framework for deploying PyTorch models for high-performance inference.

TorchElastic: a library whose Kubernetes integration allows PyTorch developers to train machine learning models on a cluster of compute nodes that can change dynamically without disrupting the training job.
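To give a sense of the TorchServe workflow described above, here is a minimal deployment sketch. It assumes a TorchScript-serialized model file and uses illustrative names (`my_model.pt`, `example.jpg`); `image_classifier` is one of the library's built-in handlers.

```shell
# Install the serving tools released alongside TorchServe
pip install torchserve torch-model-archiver

# Package the trained model into a .mar archive (names are illustrative)
torch-model-archiver --model-name my_model --version 1.0 \
    --serialized-file my_model.pt --handler image_classifier \
    --export-path model_store

# Start the server and register the model for inference over HTTP
torchserve --start --model-store model_store --models my_model=my_model.mar

# Query the inference endpoint (default port 8080)
curl http://127.0.0.1:8080/predictions/my_model -T example.jpg
```

The point of the archiver step is that no custom serving code is needed: the handler maps HTTP requests to model inputs, which is what "deploying without writing custom code" refers to.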

Quant-Noise

We are releasing code and sharing details for Quant-Noise, a new technique to enable extreme compression of state-of-the-art #NLP and computer vision models without significantly affecting performance. https://t.co/0iOidXmXsU pic.twitter.com/F2nFAknWO7 — Facebook AI (@facebookai) April 23, 2020

The AI team at Facebook has open-sourced its new technique, Quant-Noise, which enables extreme compression of models while maintaining high performance.

This method, the team claims, reduces the memory footprint by 10x to 20x, significantly exceeding the 4x compression that int8 quantization currently provides in both PyTorch and TensorFlow.

In the case of NLP models like Transformers, which have millions of parameters, Quant-Noise shrinks the model significantly without degrading performance. This technique can also help bring cutting-edge AI to smartphones, tablets, and even IoT chipsets, with everything running entirely on-device to avoid disruptive errors and lag. This will enable devices used by many millions of people around the world to run new virtual and augmented reality experiences, more intelligent assistants, and other new products and experiences.
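The core idea of Quant-Noise is to apply quantization to only a random subset of weights during each training pass, so the network learns to be robust to the quantized representation. The following is a minimal pure-Python sketch of that idea, not Facebook's implementation; the fake-quantization here simulates an int8 grid, and all names and values are illustrative.

```python
import random

def int8_fake_quantize(w, scale):
    """Quantize-dequantize: snap a weight to the nearest int8 grid point."""
    q = max(-128, min(127, round(w / scale)))
    return q * scale

def quant_noise(weights, p, scale):
    """Apply fake int8 quantization to a random fraction p of the weights,
    leaving the rest in full precision -- the gist of Quant-Noise."""
    return [int8_fake_quantize(w, scale) if random.random() < p else w
            for w in weights]

random.seed(0)
weights = [0.013, -0.042, 0.250, -0.190]
noisy = quant_noise(weights, p=0.5, scale=0.01)
```

With `p=1.0` this degenerates to ordinary quantization-aware training; the paper's claim is that the partial noise (`p < 1`) is what lets much more aggressive compression be applied after training without a large accuracy drop.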

AppFlow

Amazon has announced a new service called Amazon AppFlow that will allow users to automate the data flow between AWS services and SaaS applications such as Salesforce, Zendesk, and ServiceNow. SaaS application administrators, business analysts, and BI specialists can quickly implement most of the integrations they need without waiting for months for the IT team to finish integration projects.

Companies that don’t have the luxury of engineering resources might find themselves manually importing and exporting data from applications, which is time-consuming, risks data leakage, and has the potential to introduce human error — AppFlow is designed to tackle this.

Read more.

PyTorch 1.5 Released

Last week, PyTorch announced the availability of PyTorch 1.5, along with new and updated libraries. This release includes significant updates to the C++ frontend, a 'channels last' memory format for computer vision models, and a stable release of the distributed RPC framework used for model-parallel training. The release also adds new autograd APIs for Hessians and Jacobians, and a pybind-inspired API for creating custom C++ classes.
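The new autograd APIs mentioned above live in `torch.autograd.functional`. A small sketch of how they are called (the functions and tensor values here are illustrative examples, not from the release notes):

```python
import torch
from torch.autograd.functional import jacobian, hessian

# f: R^3 -> R^3, elementwise square; its Jacobian is diag(2x)
def f(x):
    return x ** 2

x = torch.tensor([1.0, 2.0, 3.0])
J = jacobian(f, x)  # 3x3 matrix with diagonal [2., 4., 6.]

# g: R^2 -> R (scalar output), so hessian() returns the 2x2 second-derivative matrix
def g(x):
    return (x ** 3).sum()

H = hessian(g, torch.tensor([1.0, 2.0]))  # diagonal [6., 12.]
```

Previously, computing a full Jacobian or Hessian required manually looping `torch.autograd.grad` over output elements; these helpers wrap that pattern in one call.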

Read more.

NVIDIA and King’s College London Announce MONAI

MONAI is a framework designed to bridge the gap between AI and medical research. It was developed by NVIDIA in collaboration with King's College London, and the framework was open-sourced last week.

MONAI builds on best practices from existing tools, including NVIDIA Clara, NiftyNet, DLTK and DeepNeuro.

The framework is user-friendly, delivers reproducible results and is domain-optimized for the demands of healthcare data. It is also equipped to handle the unique formats, resolutions and specialized meta-information of medical images.

If you loved this story, do join our Telegram Community.



Also, you can write for us and be one of the 500+ experts who have contributed stories at AIM. Share your nominations here.