Scientists at ETH Zurich and the University of Zurich have used machine learning methods to improve optoacoustic imaging. This relatively young medical imaging technique can be used for applications such as visualizing blood vessels, studying brain activity, characterizing skin lesions and diagnosing breast cancer. However, the quality of the rendered images depends heavily on the number and distribution of sensors used by the device: the more there are, the better the image quality. The new approach developed by the ETH researchers makes it possible to substantially reduce the number of sensors without sacrificing image quality. This, in turn, can lower device cost, increase imaging speed or improve diagnosis.

Optoacoustics (see box) is similar in some respects to ultrasound imaging. In the latter, a probe sends ultrasonic waves into the body, which are reflected by the tissue. Sensors in the probe detect the returning sound waves, and a picture of the inside of the body is then generated. In optoacoustic imaging, very short laser pulses are instead sent into the tissue, where they are absorbed and converted into ultrasonic waves. As in ultrasound imaging, these waves are detected and converted into images.

Correcting for image distortions

The team led by Daniel Razansky, Professor of Biomedical Imaging at the University of Zurich and ETH Zurich, sought a way to enhance the image quality of low-cost optoacoustic devices that have only a small number of ultrasonic sensors.

To do this, they started with a self-developed high-end optoacoustic scanner with 512 sensors, which delivered superior-quality images. They fed these pictures to an artificial neural network, which learned the features of the high-quality images.

Next, the researchers discarded most of the sensors, leaving only 128 or 32, which degraded image quality. Because of the missing data, distortions known as streak-type artefacts appeared in the images. It turned out, however, that the previously trained neural network was able to largely correct for these distortions, bringing the image quality close to that of the measurements obtained with all 512 sensors.
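The workflow described above (learn from dense-sensor reconstructions, then correct sparse-sensor ones) can be illustrated with a deliberately simplified stand-in. Everything below is hypothetical: the synthetic images, the streak model and the linear "corrector" fitted by ridge regression are not the authors' method, which used a deep neural network trained on real 512-sensor scans. The sketch only shows the shape of the idea: pairs of (sparse-artefact, full-quality) images are used to fit a model that maps one to the other.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: "full" stands in for 512-sensor
# reconstructions; each image is flattened to a vector.
n, size = 200, 8                                  # pairs, image side length
full = rng.random((n, size * size))               # ground-truth images

# Simulated sparse-sensor reconstruction: the true image plus a
# structured streak pattern (one value repeated across the row blocks),
# mimicking the artefacts that appear when data from many sensors
# is missing.
streak = np.tile(rng.random((n, size)), size)     # shape (n, size*size)
sparse = full + 0.5 * streak

# Fit a linear corrector W with sparse @ W ≈ full (ridge regression).
lam = 1e-3
A = sparse.T @ sparse + lam * np.eye(size * size)
W = np.linalg.solve(A, sparse.T @ full)

corrected = sparse @ W
err_before = np.mean((sparse - full) ** 2)
err_after = np.mean((corrected - full) ** 2)
print(f"MSE before correction: {err_before:.4f}")
print(f"MSE after correction:  {err_after:.4f}")
```

Because the streaks have structure that the clean images lack, even this toy corrector reduces the error substantially; the actual study replaces the linear map with a convolutional network, which can exploit far richer spatial patterns.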