Smartphones are ideal devices for machine learning because of the sheer number of sensors they carry. Combining data from multiple sensors allows developers to make faster, more accurate predictions inside their apps.

Today almost every smartphone comes with location sensors that provide the user's geolocation with high accuracy. By combining geo-sensor data with knowledge of the points of interest around a given location, we can dramatically improve both the speed and the accuracy of a number of machine learning tasks, including landmark detection and object detection. Beyond these core tasks, this data also opens up a world of innovative, machine learning-powered mobile experiences.
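To make the idea concrete, here's a minimal sketch in Kotlin of how a device's location could narrow down a landmark classifier's candidates. Everything in it is illustrative: the `PointOfInterest` type, the hardcoded POI list, and the `nearbyCandidates` helper stand in for a real places API and a real vision model.

```kotlin
import kotlin.math.*

// Hypothetical point-of-interest record; in a real app this would come from
// a places API (e.g. a nearby-search request) rather than a hardcoded list.
data class PointOfInterest(val name: String, val lat: Double, val lon: Double)

// Haversine distance in meters between two lat/lon pairs.
fun distanceMeters(lat1: Double, lon1: Double, lat2: Double, lon2: Double): Double {
    val r = 6_371_000.0
    val dLat = Math.toRadians(lat2 - lat1)
    val dLon = Math.toRadians(lon2 - lon1)
    val a = sin(dLat / 2).pow(2) +
            cos(Math.toRadians(lat1)) * cos(Math.toRadians(lat2)) * sin(dLon / 2).pow(2)
    return 2 * r * asin(sqrt(a))
}

// Restrict a landmark classifier's label set to POIs within `radiusMeters`
// of the device's location, so the model only has to rank nearby candidates.
fun nearbyCandidates(
    userLat: Double,
    userLon: Double,
    pois: List<PointOfInterest>,
    radiusMeters: Double = 500.0
): List<PointOfInterest> =
    pois.filter { distanceMeters(userLat, userLon, it.lat, it.lon) <= radiusMeters }

fun main() {
    // Assumed device location (on Android this might come from the fused location provider).
    val (lat, lon) = 48.8584 to 2.2945
    val pois = listOf(
        PointOfInterest("Eiffel Tower", 48.8584, 2.2945),
        PointOfInterest("Louvre Museum", 48.8606, 2.3376),
        PointOfInterest("Notre-Dame", 48.8530, 2.3499)
    )
    // Only landmarks near the user are passed on as candidates for recognition.
    println(nearbyCandidates(lat, lon, pois).map { it.name })
}
```

In practice, the filtered list would be used to constrain or re-rank the vision model's predictions, which is what makes recognition both faster and more reliable than running an open-ended classifier.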

Don’t believe me? Google does.

It turns out the major players are already using this technology (if only a little). At the past two Google I/O events, Google Lens has been one of the products taking center stage, and with the introduction of Google Lens inside Google Maps, locating nearby places has become easier and more intuitive.

Let’s say you’re on vacation, exploring a city you aren’t yet familiar with. You’re hungry, and you see a number of restaurants nearby. You want to know what kinds of food they serve, what others have said about them, how expensive they are, and so on. You can do all of that right from Google Maps simply by pointing your camera at a place. Since Google knows where you are and what you’re looking at, it can show you this information in the blink of an eye.

But Google’s attempt at this is just scratching the surface of what’s possible. Let’s code something better for Sundar to present at the next Google I/O 😜