Artificial intelligence (AI) features will become a critical product differentiator for smartphone vendors that will help them to acquire new customers while retaining current users, according to Gartner, Inc. As the smartphone market shifts from selling technology products to delivering compelling and personalized experiences, AI solutions running on the smartphone will become an essential part of vendor roadmaps over the next two years.

Gartner predicts that by 2022, 80 percent of smartphones shipped will have on-device AI capabilities, up from 10 percent in 2017. On-device AI is currently limited to premium devices and provides better data protection and power management than full cloud-based AI, since data is processed and stored locally.

“With smartphones increasingly becoming a commodity device, vendors are looking for ways to differentiate their products,” said CK Lu, research director at Gartner. “Future AI capabilities will allow smartphones to learn, plan and solve problems for users. This isn’t just about making the smartphone smarter, but augmenting people by reducing their cognitive load. However, AI capabilities on smartphones are still in very early stages.”

“Over the next two years, most use cases will still exploit a single AI capability and technology,” said Roberta Cozza, research director at Gartner. “Going forward, smartphones will combine two or more AI capabilities and technologies to provide more advanced user experiences.”

Gartner has identified 10 high-impact uses for AI-powered smartphones to enable vendors to provide more value to their customers.

“Digital Me” sitting on the device

Smartphones will be an extension of the user, capable of recognizing them and predicting their next move. They will understand who users are, what they want, when and how they want it done, and will execute tasks on the user's authority.

“Your smartphone will track you throughout the day to learn, plan and solve problems for you,” said Angie Wang, principal research analyst at Gartner. “It will leverage its sensors, cameras and data to accomplish these tasks automatically. For example, in the connected home, it could order a vacuum bot to clean when the house is empty, or turn a rice cooker on 20 minutes before you arrive.”
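
The connected-home example above can be sketched as a simple decision function. This is an illustrative sketch only; the device names and the 20-minute threshold are taken from the example, while a real system would derive "house empty" and arrival time from learned predictions rather than explicit inputs.

```python
# Hypothetical sketch: the phone's predictions about the home drive
# appliance actions, per the vacuum-bot and rice-cooker example.
def home_actions(house_empty, minutes_until_arrival):
    """Decide which appliances to trigger given the phone's predictions."""
    actions = []
    if house_empty:
        actions.append("start_vacuum_bot")       # clean while nobody is home
    if minutes_until_arrival <= 20:
        actions.append("start_rice_cooker")      # dinner ready on arrival
    return actions

print(home_actions(house_empty=True, minutes_until_arrival=45))
print(home_actions(house_empty=False, minutes_until_arrival=15))
```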

User authentication

Simple, password-based authentication is becoming too complex and less effective, resulting in weak security, a poor user experience and a high cost of ownership. Security technology combined with machine learning, biometrics and user behavior will improve usability and self-service capabilities. For example, smartphones can capture and learn a user's behavior, such as patterns in how they walk, swipe, apply pressure to the phone, scroll and type, without the need for passwords or active authentication.
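
The behavioral-biometrics idea can be illustrated with a toy anomaly score. This is a hypothetical sketch using only typing rhythm (inter-keystroke intervals); production systems combine many more signals (gait, swipe, pressure) and use trained models rather than a plain z-score threshold.

```python
import statistics

def build_profile(enrollment_samples):
    """Learn per-feature mean and stdev from a user's enrollment sessions."""
    means = [statistics.mean(col) for col in zip(*enrollment_samples)]
    stdevs = [statistics.pstdev(col) or 1e-6 for col in zip(*enrollment_samples)]
    return means, stdevs

def anomaly_score(profile, sample):
    """Average absolute z-score: low means the rhythm matches the owner's."""
    means, stdevs = profile
    return sum(abs(x - m) / s for x, m, s in zip(sample, means, stdevs)) / len(sample)

def is_owner(profile, sample, threshold=3.0):
    """Passively accept or reject an interaction, no password required."""
    return anomaly_score(profile, sample) < threshold

# Enrollment: inter-keystroke intervals (ms) from three typing sessions.
profile = build_profile([
    [120, 95, 130, 110],
    [125, 100, 128, 115],
    [118, 92, 135, 108],
])

print(is_owner(profile, [122, 97, 131, 112]))   # rhythm close to the owner's
print(is_owner(profile, [310, 250, 400, 290]))  # very different rhythm
```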

Emotion recognition

Emotion sensing systems and affective computing allow smartphones to detect, analyze, process and respond to people’s emotional states and moods. The proliferation of virtual personal assistants and other AI-based technology for conversational systems is driving the need to add emotional intelligence for better context and an enhanced service experience. Car manufacturers, for example, can use a smartphone’s front camera to understand a driver’s physical condition or gauge fatigue levels to increase safety.

Natural-language understanding

Continuous training and deep learning on smartphones will improve the accuracy of speech recognition, while better understanding the user’s specific intentions. For instance, when a user says “the weather is cold,” depending on the context, his or her real intention could be “please order a jacket online” or “please turn up the heat.” As an example, natural-language understanding could be used as a near real-time voice translator on smartphones when traveling abroad.
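
The "the weather is cold" example hinges on context changing the meaning of the same utterance. A minimal sketch of that idea, assuming a hand-written rule table (real natural-language understanding uses trained models, not lookup rules):

```python
# Hypothetical sketch: the same utterance resolves to different intents
# depending on context, as in the "weather is cold" example above.
INTENT_RULES = {
    ("the weather is cold", "at_home"): "turn_up_heat",
    ("the weather is cold", "shopping_app"): "order_jacket_online",
}

def resolve_intent(utterance, context):
    """Map a normalized utterance plus its context to an intent label."""
    return INTENT_RULES.get((utterance.lower().strip(), context), "unknown")

print(resolve_intent("The weather is cold", "at_home"))
print(resolve_intent("The weather is cold", "shopping_app"))
```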

Augmented Reality (AR) and AI vision

With the release of iOS 11, Apple included an ARKit feature that provides new tools to developers to make adding AR to apps easier. Similarly, Google announced its ARCore AR developer tool for Android and plans to enable AR on about 100 million Android devices by the end of next year. Google expects almost every new Android phone will be AR-ready out of the box next year. One example of how AR can be used is in apps that help to collect user data and detect illnesses such as skin cancer or pancreatic cancer.

Device management

Machine learning will improve device performance and standby time. For example, with its many sensors, a smartphone can better understand and learn a user's behavior, such as when to use which app. The smartphone will be able to keep frequently used apps running in the background for quick relaunch, or shut down unused apps to save memory and battery.

Personal profiling

Smartphones are able to collect data for behavioral and personal profiling. Users can receive protection and assistance dynamically, depending on the activity that is being carried out and the environments they are in (e.g., home, vehicle, office, or leisure activities). Service providers such as insurance companies can now focus on users, rather than on assets. For example, they will be able to adjust the car insurance rate based on driving behavior.

Content censorship/detection

Restricted content can be detected automatically. Objectionable images, videos or text can be flagged, and various notification alarms can be enabled. Computer recognition software can detect content that violates laws or policies. For example, if a user takes photos in a high-security facility or stores highly classified data on a company-paid smartphone, IT will be notified.

Personal photographing

Personal photographing includes smartphones that are able to automatically produce beautified photos based on a user’s individual aesthetic preferences. For example, there are different aesthetic preferences between the East and West — most Chinese people prefer a pale complexion, whereas consumers in the West tend to prefer tan skin tones.

Audio analytics

The smartphone’s microphone is able to continuously listen to real-world sounds. On-device AI can recognize these sounds and instruct users or trigger events. For example, if a smartphone hears a user snoring, it can trigger the user’s wristband to encourage a change in sleeping position.
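
The snoring example follows a classify-then-act pattern. A minimal sketch, assuming a stubbed classifier and an invented label-to-action table; a real pipeline would run a trained audio model on microphone frames instead of reading a prepared label.

```python
# Hypothetical mapping from detected sound labels to device actions.
ACTIONS = {
    "snoring": "vibrate_wristband",        # nudge a change of sleeping position
    "glass_breaking": "send_security_alert",
}

def classify_sound(audio_clip):
    """Stub for an on-device audio classifier; returns a sound label."""
    # A real implementation would run a trained model on the audio clip.
    return audio_clip.get("label", "unknown")

def handle_audio_event(audio_clip):
    """Recognize the sound, then trigger the corresponding event."""
    return ACTIONS.get(classify_sound(audio_clip), "no_action")

print(handle_audio_event({"label": "snoring"}))
print(handle_audio_event({"label": "rain"}))
```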