You might be asking why we even need another API, separate from the audio element. The audio element rolls everything into one step - you don’t have to deal with separate steps of loading, decoding and playing, and you even get built-in scrubbing of the audio! It’s also very good for streaming long content.
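As a quick reminder, that one-step convenience looks something like this minimal sketch (the file URL is a placeholder, and this runs in a browser):

```javascript
// The audio element handles loading, decoding and playback in one step:
// no manual buffering, and the browser streams the file on demand.
const audio = new Audio('music.mp3');  // placeholder URL
audio.play();
```

You can also just drop an `<audio controls>` tag into the page and get the scrubbing UI for free.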

Unfortunately, what you DON’T get is sample-accurate control over playback, and the audio element doesn’t scale to lots of elements playing simultaneously. Particularly for gaming, where you need lots of short sounds playing at precise times, you need to remove network and decoding latency as a factor.

Web Audio provides glitch-free playback of lots of overlapping sounds playing at very precise times. For short sounds - those you don’t mind loading into memory - it provides precise control over loading, decoding and playing of audio. It gives you very low latency - and of course, when you frag somebody in a game, you want to hear the explosion immediately.
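Here’s a sketch of those separated load, decode and play steps (the URL and timing offsets are placeholders; this assumes a browser with Web Audio support):

```javascript
const ctx = new AudioContext();

async function loadSound(url) {
  const response = await fetch(url);
  const arrayBuffer = await response.arrayBuffer();
  // Decode once, up front, so later playback has no decoding latency.
  return ctx.decodeAudioData(arrayBuffer);
}

function playSound(buffer, when) {
  // Source nodes are cheap, one-shot objects: create one per playback.
  const source = ctx.createBufferSource();
  source.buffer = buffer;
  source.connect(ctx.destination);
  source.start(when);  // sample-accurate start time, in context time
}

loadSound('explosion.mp3').then((buffer) => {
  // Play immediately, then again exactly half a second later -
  // the same decoded buffer can back many overlapping sources.
  playSound(buffer, ctx.currentTime);
  playSound(buffer, ctx.currentTime + 0.5);
});
```

Because the decoded buffer lives in memory, triggering it is essentially free - exactly what you want for that explosion.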

It also provides a very powerful pipeline for routing, processing and effects, and that’s what we’re going to talk about in this presentation.
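To give a taste of that pipeline, here’s a minimal routing sketch - a source connected through a gain node to the output (the oscillator is just a stand-in for any source node):

```javascript
// Minimal node graph: source → gain → destination.
const ctx = new AudioContext();
const gain = ctx.createGain();
gain.gain.value = 0.5;          // attenuate to half volume
gain.connect(ctx.destination);

// Any source node can feed the chain; an oscillator is the simplest:
const osc = ctx.createOscillator();
osc.connect(gain);
osc.start();
```

Effects nodes - filters, convolution reverb, panners and so on - slot into the chain the same way, which is what the rest of this presentation explores.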

This doesn't mean we should throw away the audio element - it's the right tool in a lot of situations! For example, the audio element supports streaming, for when you don’t want the entire audio buffer loaded in memory. As you’ll see later, we can integrate streaming audio elements into the Web Audio pipeline as well.