Google’s latest tool to help democratize VR and AR development is a new software development kit (SDK) that lets you easily integrate spatial audio.

Resonance Audio, as the SDK is called, is based on the company’s existing VR Audio SDK, but is designed to work across mobile and desktop platforms and is compatible with AR and 360-degree video too. In Google’s words, the SDK comprises “highly optimized digital signal processing algorithms based on higher order Ambisonics to spatialize hundreds of simultaneous 3D sound sources”.

Put more simply, it lets an experience play hundreds of spatialized sound sources at once without putting heavy strain on your hardware. The company says this won’t compromise audio quality, even on mobile platforms.
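That efficiency follows from how Ambisonics works: every source, wherever it sits in the scene, is encoded into the same small, fixed set of channels, so the expensive rendering step runs once on the mix rather than once per source. Here is a minimal sketch of first-order encoding and mixing — Resonance Audio uses higher orders and a heavily optimized implementation, and none of these function names are its actual API:

```python
import math

def encode_first_order(sample, azimuth, elevation):
    """Encode one mono sample into first-order Ambisonic (B-format) channels.

    Illustrative only: Resonance Audio uses higher-order Ambisonics and a
    different, optimized implementation.
    """
    w = sample / math.sqrt(2)                             # omnidirectional
    x = sample * math.cos(azimuth) * math.cos(elevation)  # front/back
    y = sample * math.sin(azimuth) * math.cos(elevation)  # left/right
    z = sample * math.sin(elevation)                      # up/down
    return (w, x, y, z)

def mix_sources(sources):
    """Sum any number of (sample, azimuth, elevation) sources into the same
    four channels; the channel count never grows with the source count."""
    mix = [0.0, 0.0, 0.0, 0.0]
    for sample, azimuth, elevation in sources:
        for i, channel in enumerate(encode_first_order(sample, azimuth, elevation)):
            mix[i] += channel
    return mix
```

However many sources you add, the mix stays four channels wide, which is why source count barely moves the cost of the final decoding stage.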

Resonance Audio also gives developers more control over the sound in their apps, letting them shape how sound radiates from a source to create more realistic results. If you were listening to a guitar, for example, the developer could program the sound to be louder in front of the instrument and quieter when you walk behind it, rather than basing volume on distance alone. It also renders “near-field effects” that better convey how close a sound source is.
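The guitar behavior described above can be modeled as a directivity pattern multiplied by distance rolloff. The following is a hedged sketch under that assumption — the function name and parameters are hypothetical, not Resonance Audio’s actual API:

```python
import math

def source_gain(distance, angle, alpha=0.5, sharpness=1.0, ref_distance=1.0):
    """Hypothetical gain model: distance attenuation times a directivity
    pattern (not Resonance Audio's API; names and defaults are illustrative).

    alpha blends an omnidirectional pattern (0.0) toward a dipole (1.0);
    alpha=0.5 gives a cardioid, loudest on-axis and silent behind the source.
    """
    # Inverse-distance rolloff, clamped so gain never exceeds 1.0.
    distance_gain = ref_distance / max(distance, ref_distance)
    # Cardioid-family directivity: angle is measured from the source's
    # forward axis, so angle=0 is in front and angle=pi is directly behind.
    directivity = abs((1.0 - alpha) + alpha * math.cos(angle)) ** sharpness
    return distance_gain * directivity
```

With the cardioid default, a listener two meters in front of the source (`angle = 0`) hears it at half gain, while a listener two meters directly behind it (`angle = pi`) hears nothing — the distance is identical, but the direction changes the result.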

The SDK works with iOS, Android, Windows, Mac and Linux, and is already integrated with engines like Unity and Unreal. That cross-platform support means developers won’t have to worry much about translating audio across the wide variety of headsets out there. The SDKs are available now on GitHub.

To help demonstrate the SDK’s effects, Google also has a new Rift and Vive release: a port of its Daydream audio showcase, Audio Factory. It’s just a short piece showing off some of the SDK’s different capabilities, but it’s free to download.

The release comes a week after Google revealed Poly, its new 3D asset hub for creations uploaded from apps like Blocks. Piece by piece, the company’s plan for democratizing VR development is coming together.