
The wonks over at SRI, the lab responsible for the technology behind Apple’s Siri, have come up with a pretty sweet way to apply image recognition to food. Dror Oren, executive director at SRI Ventures, says SRI has built a method that could let users snap a photo (or several) of their plates to get an approximate calorie count. Users of Meal Snap, which uses a camera and Amazon’s Mechanical Turk service to count calories, will find the idea familiar. (There’s also this futuristic device that uses laser spectroscopy to measure what’s in your food.)

Instead of farming the job out to Mechanical Turk or lasers, however, SRI’s project, internally dubbed Ceres, lets computers do the work of estimating calorie count. To do this, the SRI folks had to solve two problems. The first was an image recognition problem — namely, figuring out what’s on your plate. The second was a volume problem, assessing how much is on your plate.
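SRI hasn’t published how Ceres combines those two answers, but the arithmetic the article implies is straightforward: once you know *what* a food is and *how much* of it there is, a lookup table gets you to calories. Here’s a minimal sketch of that idea; the food labels, density figures, and calorie figures are illustrative assumptions, not SRI’s actual data.

```python
# Hypothetical two-stage pipeline: (1) recognize each food item on the
# plate, (2) estimate its volume, then convert volume -> grams -> calories
# via lookup tables. All numbers below are rough, illustrative values.

# grams per cubic centimeter (approximate, illustrative)
DENSITY_G_PER_CM3 = {"rice": 0.8, "chicken": 1.0, "broccoli": 0.4}
# calories per gram (approximate, illustrative)
KCAL_PER_GRAM = {"rice": 1.3, "chicken": 1.65, "broccoli": 0.35}

def estimate_calories(detections):
    """detections: list of (label, volume_cm3) pairs, i.e. the outputs
    of the recognition stage and the volume-estimation stage."""
    total = 0.0
    for label, volume_cm3 in detections:
        grams = volume_cm3 * DENSITY_G_PER_CM3[label]
        total += grams * KCAL_PER_GRAM[label]
    return total

plate = [("rice", 200.0), ("chicken", 120.0), ("broccoli", 150.0)]
print(round(estimate_calories(plate)))  # -> 427
```

The hard part, of course, is everything upstream of this function: the computer-vision work that produces reliable labels and volumes from a casual phone photo.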

Oren explains that because of hidden fats like the oils and butter used in cooking, you can’t get a precise calorie count for every food, but he stresses that for many people, getting a range — between 400 and 600 calories, for instance — is still helpful. The plan is also to use context clues about where a person is or what they have eaten in the past to narrow that range. For example, if you’re taking a picture of your burrito at Chipotle, Ceres could figure out the right questions to ask to approach a more accurate calorie count.
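That two-step logic — a wide range from the photo alone, then a tighter one once context pins down the exact item — can be sketched in a few lines. The uncertainty factors, menu items, and calorie figures here are my own illustrative assumptions, not anything SRI has described.

```python
# Illustrative sketch: widen a base estimate to account for unseen
# ingredients (oils, butter), then narrow it when context (e.g. a
# recognized restaurant's menu) identifies the exact dish.

HIDDEN_FAT_FACTOR = (0.85, 1.25)  # assumed uncertainty from hidden fats

def calorie_range(base_kcal, factor=HIDDEN_FAT_FACTOR):
    """Turn a point estimate into a (low, high) calorie range."""
    low, high = factor
    return (round(base_kcal * low), round(base_kcal * high))

# Hypothetical menu lookup: once the app knows you're at a particular
# restaurant, a follow-up question ("chicken or steak?") selects the
# exact item and justifies a much tighter band.
MENU = {"chicken burrito": 975, "steak burrito": 945}

def refine_with_menu(item):
    kcal = MENU[item]
    return (kcal - 50, kcal + 50)

print(calorie_range(500))               # wide range from the photo alone
print(refine_with_menu("chicken burrito"))  # narrowed by context
```

The design point is that context doesn’t replace the vision system; it just shrinks the error bars the vision system leaves behind.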

Norman Winarsky, the VP of SRI Ventures, expanded on the concept, suggesting the system could also draw on conversations overheard at the table and sent to the cloud, as well as other information, perhaps pulled in from food-logging apps. The primary technology, though, is the image recognition. “If a human can recognize the food, so can the app,” said Oren.

I’m a bit skeptical, given the preponderance of lookalike foods, such as wheat bread that passes for white bread or chili made with turkey instead of ground beef, but I like the idea and the simplicity. The image processing and recognition is done in the cloud, and the algorithms were developed by many members of the same team that helped invent HD television.

The idea to try to recognize and track food came from the National Institutes of Health, and the goal now is to find partners who can parlay the image and volume recognition technology into mainstream consumer life. So it might be an app, a partnership with a company like The Orange Chef or even integration into cooking shows.

Winarsky says SRI is still looking for more partners and expects the technology to hit the mainstream in about a year. I’m just excited that we may soon have an easier way to count calories and the beginnings of a database for food.