We will need innovations in both hardware and software. The ability of AR systems to provide 3D visual information overlay will require low-power image recognition and data extraction solutions, at the lowest possible latency. With these solutions, AR glasses will, for example, be able to quickly show you where to find a bakery in the street you are walking along, and to display the types of bread that are available in the shop.

This will drive computation requirements and data rates far beyond what can be achieved today. To give an example, overlaying high-resolution 3D video may require data rates of about 1 TB/s to 10 TB/s. We will also need sensor fusion and machine learning tools, both on the glasses and in the cloud.
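To make these numbers tangible, here is a rough back-of-the-envelope calculation. The resolution, refresh rate, bit depth and number of views are purely illustrative assumptions, not imec specifications; the point is simply that uncompressed 3D overlays scale quickly into the terabyte-per-second range.

```python
# Back-of-the-envelope estimate of raw (uncompressed) data rates for a
# 3D visual overlay. All parameters are illustrative assumptions, not
# imec figures.

def raw_data_rate_tb_per_s(width, height, views, fps, bits_per_pixel):
    """Return the uncompressed data rate in terabytes per second."""
    bits_per_second = width * height * views * fps * bits_per_pixel
    return bits_per_second / 8 / 1e12  # bits -> bytes -> TB

# A plain 8K-per-eye stereo overlay at 120 Hz stays well below 1 TB/s...
print(raw_data_rate_tb_per_s(7680, 4320, views=2, fps=120, bits_per_pixel=30))
# ~0.03 TB/s

# ...but a dense light-field overlay with, say, 100 views per frame lands
# in the multi-TB/s range mentioned above.
print(raw_data_rate_tb_per_s(7680, 4320, views=100, fps=120, bits_per_pixel=30))
# ~1.5 TB/s
```

Compression and foveated rendering will bring the effective rates down, but the raw numbers illustrate why both on-device processing and wireless links need a major leap.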

To minimize information overload for the user, we need self-learning systems that know which information is relevant to their user and which is not.
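As a minimal illustration of what such a self-learning filter could look like, the sketch below scores candidate overlays and updates its weights from user feedback (viewed vs. dismissed). The feature names and the simple online logistic update are hypothetical and only illustrate the principle of learning relevance on the device.

```python
# Minimal sketch of an on-device relevance filter that learns from user
# feedback which overlays to show. Features and the learning rule are
# hypothetical; a real system would use far richer models.
import math

class RelevanceFilter:
    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features
        self.lr = lr

    def score(self, features):
        # Probability that the user wants to see this overlay.
        z = sum(w * x for w, x in zip(self.w, features))
        return 1.0 / (1.0 + math.exp(-z))

    def update(self, features, dismissed):
        # Online logistic-regression step: dismissed items push the score
        # down, items the user engaged with push it up.
        target = 0.0 if dismissed else 1.0
        error = target - self.score(features)
        self.w = [w + self.lr * error * x for w, x in zip(self.w, features)]

# Hypothetical features: [shop_nearby, matches_recent_search, right_time_of_day]
f = RelevanceFilter(n_features=3)
f.update([1.0, 1.0, 0.0], dismissed=False)  # user looked at the bakery overlay
f.update([1.0, 0.0, 1.0], dismissed=True)   # user dismissed an unrelated suggestion
print(f.score([1.0, 1.0, 0.0]) > 0.5)       # show similar overlays next time
```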

And at all levels of technology development, a dramatic increase in power efficiency will be required to guarantee long-lasting battery autonomy. Last but not least, users will only accept this new technology if AR glasses can be made lightweight, stylish, unobtrusive and comfortable, and provide a natural image to the eye.

Beyond AR glasses...

Looking further into the future, say 15 to 25 years from now, we will gradually move towards mobile holographic projection. With holographic projection, everyone in the room will be able to visually experience 3D virtual objects without wearing glasses. These holographic projectors might be complemented with directed sound projection to address hearing, and with haptic feedback solutions to trigger the sense of touch.

And far beyond 2035, the next wave might be direct brain-to-computer interfaces.

Human senses will be triggered by directly stimulating certain areas of the brain. In a first phase, this could be done with non-invasive technologies such as EEG systems or ultrasound stimulation. In a next phase, we could think of brain implants. Several players are already working hard to realize this vision; think, for example, of Elon Musk’s company Neuralink. Without any doubt, brain-to-computer interfacing will create endless possibilities and useful applications, for example in a medical or educational context. But let’s leave aside the question of whether people would welcome such a technology in their everyday lives...

How is imec contributing to this future?

Imec is actively contributing to this future vision with the development of a broad range of technology building blocks.

On the actuator side, imec is developing semi-transparent AM(O)LED displays, and haptic feedback solutions to address touch. On the sensor side, various solutions are being developed, including radar, lidar, sonar, imagers, EEG systems and chemical sensors. More specifically, in 2018, imec achieved breakthroughs in radar technology and developed solutions for high-speed snapscan and shortwave-infrared hyperspectral imaging.

Imec also works on algorithms and software for sensor fusion, 3D scene mapping, object detection and machine learning. In 2018, a breakthrough was announced in eye-tracking technology, developed to enable high-quality AR/VR experiences.
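As a minimal illustration of the sensor-fusion idea, the sketch below blends a gyroscope and an accelerometer with a complementary filter to track head pitch, the kind of low-level building block an AR headset needs for stable overlays. The filter, parameters and simulated data are illustrative assumptions, not imec’s actual algorithms.

```python
# Minimal sketch of sensor fusion for head-pose tracking on AR glasses:
# a complementary filter that blends a gyroscope (fast but drifting) with
# an accelerometer (noisy but drift-free) to estimate pitch. Purely
# illustrative; real fusion pipelines are far more elaborate.
import math

def fuse_pitch(prev_pitch, gyro_rate, accel, dt, alpha=0.98):
    """Integrate the gyro, then slowly correct with the accelerometer.

    prev_pitch : previous pitch estimate in radians
    gyro_rate  : angular rate around the pitch axis in rad/s
    accel      : (ax, ay, az) accelerometer reading in g
    dt         : time step in seconds
    """
    ax, ay, az = accel
    accel_pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    gyro_pitch = prev_pitch + gyro_rate * dt
    return alpha * gyro_pitch + (1.0 - alpha) * accel_pitch

# Simulated second of samples at 100 Hz while the head tilts slowly downward.
pitch = 0.0
for _ in range(100):
    pitch = fuse_pitch(pitch, gyro_rate=0.1, accel=(-0.1, 0.0, 0.99), dt=0.01)
print(round(pitch, 3))  # converges toward the tilted pose without gyro-only drift
```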

Imec and Holst Centre have also presented a prototype of an EEG headset that can measure emotions and cognitive processes in the brain. In addition, imec contributes with activities in high-bandwidth communication, neuromorphic IC development and energy management. Find more information on displays, image sensors and sensor fusion, wireless communication, radar systems and data science on imec’s website.

Want to know more?