
An emerging trend promises to bring the power of machine learning to mobile devices, opening the door to a plethora of valuable new applications.

Machine learning—the process by which computers can get better at performing tasks through exposure to data, rather than through explicit programming—requires massive computational power, the kind usually found in clusters of energy-guzzling, cloud-based computer servers outfitted with specialized processors. But recent developments may enable machine learning to be embedded into mobile devices, thus greatly expanding applications for its use.

Neural networks—computer models designed to mimic aspects of the human brain’s structure and function, with elements representing neurons and their interconnections—are an increasingly popular way of implementing machine learning. They are particularly well suited to perceptual tasks such as computer vision and speech recognition. Familiar examples of applications that employ neural networks for such tasks include Google’s voice search and Facebook’s system for tagging people in photos. These systems run in the cloud on powerful servers, processing data such as digitized voice or photos that users upload.
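The "neurons and interconnections" idea can be made concrete with a minimal sketch of a feedforward network: each layer multiplies its input by a weight matrix (the interconnections), adds a bias, and applies a nonlinearity (the neurons). The layer sizes and random weights below are purely illustrative, not drawn from any real model.

```python
import math
import random

random.seed(0)

def relu(values):
    # Nonlinearity applied element-wise: negative activations become zero.
    return [max(v, 0.0) for v in values]

def matvec(W, x, b):
    # One layer's weighted sum: W @ x + b, written out explicitly.
    return [sum(w * v for w, v in zip(row, x)) + bi
            for row, bi in zip(W, b)]

def softmax(values):
    # Turn raw output scores into probabilities that sum to 1.
    m = max(values)
    exps = [math.exp(v - m) for v in values]
    total = sum(exps)
    return [e / total for e in exps]

# Illustrative sizes: 4 input features -> 8 hidden units -> 3 output classes.
W1 = [[random.gauss(0, 1) for _ in range(4)] for _ in range(8)]
b1 = [0.0] * 8
W2 = [[random.gauss(0, 1) for _ in range(8)] for _ in range(3)]
b2 = [0.0] * 3

def forward(x):
    hidden = relu(matvec(W1, x, b1))
    return softmax(matvec(W2, hidden, b2))

probs = forward([0.5, -1.2, 3.1, 0.0])
```

Real perception models stack many such layers and contain millions of learned weights, which is why running them has traditionally demanded server-class hardware.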

Until recently, a typical smartphone lacked the power to perform such tasks without connecting to the cloud, except in limited ways. For instance, some mobile phone software can recognize a single face—the owner’s—in order to unlock the phone, or a small set of predetermined words such as “OK Google.” But offline support for increasingly powerful perception tasks is coming to mobile devices.

Firms are starting to outfit smartphones, drones, and cars with chips based on new designs that can run neural networks efficiently while consuming 90 percent less power than previous generations.¹ Research efforts at MIT and IBM suggest that we will soon see more chips on the market that excel at running neural networks at high speed, in small spaces, and at low power. Because of this, mobile devices are becoming increasingly adept at performing sophisticated feats that take advantage of neural networks—capabilities once reserved for powerful servers running in the cloud.

It is not only progress in hardware that is bringing machine learning to mobile devices. Tech vendors are also finding ways to create compact neural networks capable of running tasks such as speech recognition and language translation on conventional mobile phones, without any connection to a server. For instance, Google has introduced mobile language translation software using small neural networks optimized for smartphones that can perform well even offline.² And Google researchers published a paper describing an Internet-independent speech recognition system that performs well on a commercial mobile phone.
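One common way to make a network compact enough for a phone is post-training quantization: storing 32-bit floating-point weights as 8-bit integers plus a shared scale factor, cutting storage roughly fourfold. The sketch below illustrates the idea with made-up weight values; the article does not specify which compression techniques the vendors mentioned actually use.

```python
def quantize(weights):
    """Map floats to 8-bit integers in [-127, 127] with one scale per tensor."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    # Recover approximate float weights for use at inference time.
    return [v * scale for v in q]

# Illustrative weight values; a real model tensor holds millions of these.
weights = [0.823, -0.412, 0.051, -1.27, 0.334]
q, scale = quantize(weights)
restored = dequantize(q, scale)

# Each weight now needs 8 bits instead of 32, at the cost of a small
# rounding error (at most half the scale factor per weight).
max_err = max(abs(w - r) for w, r in zip(weights, restored))
```

Smaller weights mean less memory traffic as well as less storage, which matters on battery-powered devices where moving data dominates energy use.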

These developments should greatly expand the number of applications of perceptual computing coming to market—and not only on mobile phones. Mobile machine learning and perceptual computing will power a wide range of devices, from mobile sensors to phones, tablets, drones, cars, and new types of devices as yet unimagined, creating significant opportunities for business.

—by David Schatsky, senior manager, Deloitte LLP