What Is the Web Audio API Capable of Doing?

Good question! Here are a couple of examples demonstrating the capabilities of the Web Audio API. Make sure you have your sound on.

Most of the basic use cases covered: https://webaudioapi.com/samples/

Complicated synthesizer example: https://tonejs.github.io/examples/#buses

The Web Audio API handles audio operations through an audio context. Everything starts with the audio context: you create one, then use it to create and hook up different audio nodes.

Audio nodes are linked by their inputs and outputs. Chaining nodes together this way forms an audio routing graph that ends at the destination node. The destination is typically your device's speakers, which produce the sound waves we pick up with our ears.
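Here's a minimal sketch of that chain: a source node (an oscillator) connected through a gain node to the destination. The `buildTone` helper and its default values are my own illustration, not part of the API; the `AudioContext`, `createOscillator`, `createGain`, and `connect` calls are the real browser API, so this only runs in a browser (guarded below for other runtimes).

```javascript
// OscillatorNode (source) → GainNode (processing) → destination (speakers).
// buildTone is a hypothetical helper for this example.
function buildTone(ctx, frequency = 440, volume = 0.2) {
  const osc = ctx.createOscillator();
  const gain = ctx.createGain();
  osc.frequency.value = frequency; // pitch in Hz
  gain.gain.value = volume;        // keep it gentle on the ears
  osc.connect(gain);               // osc's output feeds gain's input
  gain.connect(ctx.destination);   // gain's output feeds the speakers
  return osc;
}

// Browser-only API, so guard for environments without AudioContext.
if (typeof AudioContext !== "undefined") {
  const ctx = new AudioContext();
  const osc = buildTone(ctx, 440, 0.2);
  osc.start();
  osc.stop(ctx.currentTime + 1); // play a 440 Hz tone for one second
}
```

Note that `connect` is the whole wiring mechanism: any node's output can feed any other node's input, so more elaborate graphs (filters, delays, analysers) are just more `connect` calls.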

Audio context schema

If you’re the type of person who wants to know all the tiny details, here’s a sweet link to get you started.

If you’re more into visual learning, here’s a great introductory talk about the Web Audio API — check it out!

Steve Kinney: Building a musical instrument with the Web Audio API at the JSConf US 2015

One of the most interesting features of the Web Audio API is the ability to extract frequency, waveform, and other data from your audio source. This can then be used to create visualizations.
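As a hedged sketch of that idea, the browser's `AnalyserNode` exposes the current frequency data, which a visualization can read every animation frame. The `binToHz` helper is my own illustration of how an FFT bin index maps to a frequency; the analyser calls themselves (`fftSize`, `frequencyBinCount`, `getByteFrequencyData`) are the real API and need a browser to run.

```javascript
// Pure helper (an assumption for this example): centre frequency of an FFT bin.
function binToHz(bin, sampleRate, fftSize) {
  return bin * (sampleRate / fftSize);
}

// Browser-only API, so guard for environments without AudioContext.
if (typeof AudioContext !== "undefined") {
  const ctx = new AudioContext();
  const osc = ctx.createOscillator();
  const analyser = ctx.createAnalyser();
  analyser.fftSize = 2048;           // number of FFT samples
  osc.connect(analyser);             // source → analyser
  analyser.connect(ctx.destination); // analyser passes audio through
  osc.start();

  const data = new Uint8Array(analyser.frequencyBinCount);
  function draw() {
    analyser.getByteFrequencyData(data); // fill `data` with current magnitudes
    // data[i] is a 0–255 level around binToHz(i, ctx.sampleRate, analyser.fftSize);
    // feed it into your canvas/WebGL drawing code here.
    requestAnimationFrame(draw);
  }
  draw();
}
```

The analyser sits in the chain like any other node, so you can tap the signal for visuals without changing what the listener hears.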

Show HN: Randomly generated metal riffs using Web Audio API and React

This article explains how and provides a couple of basic use cases. If you’re keen on learning the Web Audio API in depth, here’s a great series.

Web Audio API | 01: Introduction to AudioContext

Here’s a free book about the Web Audio API by Boris Smus, an interaction engineer at Google.