Medical imaging, face recognition, autonomous vehicles, and industrial automation were highlighted as key areas for embedded processing by Mark Papermaster, CTO of AMD, in a keynote presentation given today at the Embedded World trade fair in Nuremberg, Germany.

AMD was demonstrating technology from a number of its customers at the show, including an industrial inspection system from Danish firm Q Technology.

Q Technology builds camera solutions for food inspection, for tasks like grading potatoes and quantifying the amount of sediment in beer.

In his keynote address, Papermaster mentioned Siemens as one company at the forefront of factory automation using embedded sensing and embedded computing. Industrial automation has not traditionally been a market for AMD chips, so Papermaster's speech signals that the company sees factories as a potential growth area for embedded processing, much of which could involve vision.

Q Technology started using AMD's Accelerated Processing Units (APUs) in its potato inspection systems in 2012. Dr Ricardo Ribalda, lead firmware engineer at Q Technology, commented that the APUs are compatible with OpenCV for developing computer vision algorithms and with TensorFlow for neural networks.
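Q Technology's actual inspection code is not public, so the following is only a hedged illustration of the kind of grading step such a vision system might run. NumPy is used here so the sketch stays self-contained; in a real OpenCV pipeline, calls such as `cv2.threshold` and `cv2.findContours` would replace the manual array operations, and the threshold values are placeholders, not Q Technology's.

```python
import numpy as np

def grade_potato(gray_image: np.ndarray, blemish_threshold: int = 80,
                 reject_fraction: float = 0.05) -> str:
    """Grade a potato from a grayscale crop: dark pixels are treated as
    suspected blemishes, and the potato is rejected if too large a share
    of its pixels are dark. Thresholds are illustrative placeholders."""
    blemish_mask = gray_image < blemish_threshold   # dark = suspected blemish
    fraction = blemish_mask.mean()                  # share of blemished pixels
    return "reject" if fraction > reject_fraction else "accept"

# Synthetic 10x10 "potato" crop: mostly bright, with one dark blemish pixel
img = np.full((10, 10), 200, dtype=np.uint8)
img[4, 4] = 10
print(grade_potato(img))  # 1% of pixels dark -> "accept"
```

The appeal of an APU for this workload is that the same chip can run the camera-facing preprocessing on the CPU cores and hand heavier stages, such as a TensorFlow classifier, to the integrated GPU.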

The AMD platform has also allowed Q Technology to speed up its product development. Ribalda said: ‘[New] products [based on AMD APUs] are not made in four years, they are made in three months.’

Q Technology has been doing R&D work with Carlsberg to develop a machine that can detect sediment in beer by automatically counting and classifying the particles.
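The article does not describe Carlsberg's or Q Technology's actual algorithm, but counting particles in a thresholded camera frame typically reduces to connected-component labelling, which OpenCV exposes as `cv2.connectedComponents`. As an assumed sketch, a minimal pure-Python version over a binary image:

```python
from collections import deque

def count_particles(binary):
    """Count 4-connected blobs of 1s in a 2D list of 0/1 values.
    A stand-in for cv2.connectedComponents on a thresholded frame."""
    rows, cols = len(binary), len(binary[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] and not seen[r][c]:
                count += 1                      # new particle found
                queue = deque([(r, c)])
                seen[r][c] = True
                while queue:                    # flood-fill its pixels
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and binary[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return count

frame = [[0, 1, 1, 0, 0],
         [0, 1, 0, 0, 1],
         [0, 0, 0, 0, 1],
         [1, 0, 0, 0, 0]]
print(count_particles(frame))  # 3 separate particles
```

Classification of the counted particles, for example by size or shape, would follow as a second stage over each labelled blob.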

The company is now investigating using hyperspectral imaging, which Ribalda noted is becoming more affordable, to improve the inspection capabilities of its cameras for sorting potatoes.

Hyperspectral imaging is able to distinguish between plastic and organic material, as well as measure the water content of each potato. The technique, however, requires greater computing power.
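How Q Technology would extract water content from the extra spectral data is not stated in the article. One common approach, offered here purely as an assumed example, is a normalized-difference index contrasting a water-absorption band (around 970nm) with a nearby reference band, in the spirit of the water indices used in remote sensing:

```python
import numpy as np

def water_index(cube: np.ndarray, ref_band: int, absorb_band: int) -> np.ndarray:
    """Per-pixel normalized-difference water index from a hyperspectral
    cube of shape (height, width, bands). Higher values suggest stronger
    water absorption; which band indices to use depends entirely on the
    camera's wavelength grid (both are hypothetical here)."""
    ref = cube[:, :, ref_band].astype(float)
    absorb = cube[:, :, absorb_band].astype(float)
    return (ref - absorb) / (ref + absorb + 1e-9)   # avoid divide-by-zero

# Tiny synthetic cube: 2x2 pixels, 2 bands (reference, water-absorption)
cube = np.array([[[100, 20], [100, 100]],
                 [[ 50, 10], [ 80,  40]]], dtype=np.uint8)
idx = water_index(cube, ref_band=0, absorb_band=1)
# pixel (0, 0): (100 - 20) / 120, a strong water-absorption signal
```

Computing an index like this for every pixel across dozens or hundreds of bands is exactly the kind of workload behind the observation that hyperspectral inspection requires greater computing power.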

Papermaster's speech also covered medical imaging, face recognition and autonomous driving, all of which rely on imaging to some extent, underlining how pervasive vision technology is becoming in embedded computing solutions.