We'll be exploring the role of Immersive Audio at the IEEE Games Entertainment & Media conference in Galway, Ireland, 15th-17th August. Our lead keynote on Immersive Audio is Martin Walsh of DTS Inc., a core business unit of Xperi Corporation. He is VP for Research & Development in interactive audio processing and leads the development of all technologies associated with the interactive 3D audio research program, which has a strong focus on gaming, virtual and mixed reality.

Mixed Reality (MR) brings with it the promise of a future where synthetic experiences are perceived to coexist with the reality of our physical world.

While the technological consequences of this future are often discussed and researched for optical and imaging technologies, they are less well understood for the equivalent audio components of those experiences.

In contrast to visual stimuli, where perception by the human visual system is strongly focused in the direction of gaze, our hearing is omnidirectional and sensitive to spatial and environmental context.
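One of the spatial cues this sensitivity rests on is the interaural time difference (ITD): the tiny delay between a sound reaching each ear, which the brain uses to localize sources. As a minimal, illustrative sketch (not any DTS technology), the classic Woodworth model estimates ITD from source azimuth, assuming an average adult head radius of about 8.75 cm:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C
HEAD_RADIUS = 0.0875    # m; assumed average adult head radius

def woodworth_itd(azimuth_deg: float) -> float:
    """Interaural time difference in seconds for a distant source.

    Woodworth model: ITD = (r / c) * (theta + sin(theta)),
    where theta is the azimuth from straight ahead, in radians.
    """
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

# A source directly to one side (90 degrees) gives the maximum delay,
# on the order of two thirds of a millisecond.
print(f"{woodworth_itd(90) * 1e6:.0f} microseconds")
```

Delays this small are why spatial audio rendering is so sensitive to latency and head tracking: a localization cue lives in fractions of a millisecond.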

This leads to some very interesting challenges in adapting multi-channel digital audio to provide unique and believable experiences in complex real-world consumer and automotive environments.

Indeed, the accurate and realistic rendering of sound fields is arguably the single most important factor in providing a convincing and compelling new experience for augmented and mixed reality platforms.

True-to-life audio synthesis and reproduction is a core component of the cues that lead to the suspension of disbelief necessary for a fully immersive MR experience. This brings new challenges and opportunities for interactive audio synthesis and rendering algorithms across several application categories, including gaming, entertainment and social interaction.

In this talk, several typical MR application scenarios are explained, along with the challenges each creates for a truly believable and immersive sonic experience.

Most of these challenges have no equivalent in today's enclosed worlds of VR and gaming.

The latest research in acoustics, audio synthesis and machine learning is also reviewed, highlighting potential solutions to the challenges posed by emerging MR use cases. The end goal is seamless, truly immersive blending, both visual and aural, between what is real and what is perceived to be real.

Keynote Bio: Martin Walsh received a PhD in spatial audio from Trinity College Dublin in 1996. From there he joined Crystal River Engineering in California, where he co-developed one of the first industry standard positional 3D audio APIs for VR and gaming. He later joined Creative Labs Advanced Technology Center as an audio research manager, where he worked on many of the company's 3D audio technologies for soundcards and headphones. In 2008 Dr. Walsh joined DTS where he now holds the position of VP, R&D for interactive audio processing. His duties include leading development of all technologies associated with the interactive 3D audio program, with a particular focus on gaming, virtual and mixed reality.

About DTS: DTS is the company behind a range of immersive, object-based audio formats and high-quality audio codecs. Object-based audio soundtracks provide enhanced surround imaging compared with the current 5.1 and 7.1 multi-channel audio formats found on Blu-ray discs and in use for TV shows and movies today.



