



Music Visualizer

A while ago, I purchased some ambient LED lights from Ikea. They were just cool enough to inspire me to do this instead.

Using an LED Strip, a Teensy, and a bit of DSP code, I whipped up a pretty cool music-driven display.

All code is on GitHub.

Hardware

The hardware is pretty straightforward.

Adafruit LED Strip

Teensy 2.0

Power supply and misc. connectors from Adafruit

The Teensy is responsible for driving the LED strip. I was not satisfied with Adafruit's recommended driver code (it used malloc() and was rather verbose C++), so I wrote a very small and simple C library (in LEDStrip/Microcontroller/LPD8806/ ) to bit-bang the control signals to the LED strip.

The Teensy just reads a (LED Index, Magnitude) tuple over serial. Once the computer sends the magnitude of the final LED, the Teensy updates the LED Strip.
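The host-side framing can be sketched as follows. This is an illustrative sketch only: the actual wire format isn't specified above, so the one-byte-index, one-byte-magnitude encoding here is an assumption, not the firmware's real protocol.

```python
import struct

NUM_LEDS = 32

def frame_bytes(magnitudes):
    """Pack one frame of (LED index, magnitude) pairs.

    Assumes a hypothetical wire format of one unsigned byte for the
    index and one for the magnitude; the real firmware may differ.
    """
    frame = bytearray()
    for index, magnitude in enumerate(magnitudes):
        frame += struct.pack("BB", index, magnitude)
    return bytes(frame)

# Sending the final LED's magnitude is what triggers the strip update.
payload = frame_bytes([128] * NUM_LEDS)
# e.g. serial.Serial("/dev/ttyACM0", 115200).write(payload)
```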

Software

The entire setup runs on a spare laptop with Ubuntu Studio.

I felt that ALSA was absolutely unusable for my purposes. After an afternoon's worth of fiddling with JACK (a substantial improvement over ALSA), various ALSA-JACK compatibility mechanisms (settling on snd-aloop), and JACK's convenient monitor interfaces, I had a working system for splitting audio between my speakers and a virtual "microphone". I could now process audio in realtime.

I use MPD to play audio.

DSP In Python

There are a number of non-Python languages I would prefer to do DSP in. Unfortunately, none of those have the same massive collection of reasonably well-written scientific and data processing libraries. Numpy is preeminent among such libraries; it has an excellent user-facing API and works very well. I use Numpy almost exclusively for DSP because it allows you to use fast, native code for numerically intensive work (like multiplying large matrices or taking FFTs).

I tend to adopt a stream-processing methodology for DSP in Python. For example, the function below takes a signal of some kind and squares its values:

```python
def square(data_stream):
    for array in data_stream:  # We take an iterator of Numpy arrays
        yield array**2         # And we create a new iterator of the same
```

Composing these stream modifiers is easy. If I want to square a stream twice, I can do it like this:

stream2 = square(square(stream1))

Numpy is particularly useful here because we process values in batches of at least 32 elements (32 being the number of LEDs). Numpy lets you perform element-wise arithmetic on an entire array in native code, as opposed to Python's relatively slow per-element list operations.

By composing Numpy primitives using this streaming technique, we can build reasonably concise and performant DSP pipelines.
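A small, self-contained sketch of this composition (the `scale` stage and the stand-in audio source are invented for illustration, not taken from the project's code):

```python
import numpy as np

def square(data_stream):
    # Square each batch of samples, element-wise.
    for array in data_stream:
        yield array ** 2

def scale(data_stream, factor):
    # Multiply each batch by a constant, element-wise.
    for array in data_stream:
        yield array * factor

def batches():
    # Stand-in for the real audio source: two batches of 32 samples.
    yield np.arange(32, dtype=float)
    yield np.ones(32)

# Stages compose like ordinary functions over iterators.
pipeline = scale(square(batches()), 0.5)
first = next(pipeline)
```

Each stage touches whole Numpy arrays, so the per-element work happens in native code even though the plumbing is plain Python generators.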

Brightness

I read audio from the monitor using PyAudio. It's a reasonably functional cross-platform audio API.

The algorithm to map audio to LED magnitudes is as follows:

In English:

We divide the audio signal into frequency ranges. Each LED gets its own frequency range. The brightness of the LED depends on the volume of sounds in that range.

We add a bit of white noise to the signal; if there are any background noises in a quiet part of a song, the white noise will drown them out so they don't show up on the LEDs.

We scale each frequency range a bit, because the human ear doesn't hear all frequencies equally.

We scale the entire output so it fits within the range [0,1]. This way, we don't get any "clipping" and loud sounds always look brighter than quiet ones. We also do a bit of exponential falloff filtering here, so that quiet periods are darker than loud periods.

We square the output so that dim LEDs get dimmer and bright ones stay the same. This increases contrast.

We smooth the output.
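The steps above can be sketched as a single pass over one frame of samples. Every constant below (noise floor, band weighting, falloff rate, smoothing kernel) is a placeholder I've invented for illustration; the project's actual values live in notes_scaled_nosaturation.py.

```python
import numpy as np

NUM_LEDS = 32

def frame_to_magnitudes(samples, peak=[1e-6], falloff=0.99):
    """One illustrative pass from audio samples to LED magnitudes."""
    # 1. Add a small white-noise floor to mask quiet background sounds.
    samples = samples + np.random.uniform(-1e-3, 1e-3, len(samples))

    # 2. FFT, then split the spectrum into one band per LED.
    spectrum = np.abs(np.fft.rfft(samples))
    usable = NUM_LEDS * (len(spectrum) // NUM_LEDS)
    bands = spectrum[:usable].reshape(NUM_LEDS, -1).sum(axis=1)

    # 3. Weight each band (a crude stand-in for an equal-loudness curve).
    bands *= np.linspace(1.0, 2.0, NUM_LEDS)

    # 4. Normalize into [0, 1] against a slowly decaying running peak
    #    (the exponential falloff: quiet periods go darker over time).
    peak[0] = max(bands.max(), peak[0] * falloff)
    bands = bands / peak[0]

    # 5. Square to boost contrast: dim LEDs get dimmer.
    bands = bands ** 2

    # 6. Smooth neighbouring LEDs slightly.
    kernel = np.array([0.25, 0.5, 0.25])
    return np.convolve(bands, kernel, mode="same")
```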

See LEDStrip/Audio Processing/notes_scaled_nosaturation.py for code.

Color

Color is time-dependent and generated independently of brightness.

All generated colors have the same magnitude (i.e. r^2 + g^2 + b^2 = 1 ).

The color generation algorithm is as follows, where n is the number of LEDs:

In English:

We have two sine waves of different frequencies. Their x axis is the LED strip.

We jiggle these two sine waves back and forth with time.

We add these sine waves up, and then map their sum into the range [0,1].

We use their sum as the hue parameter in an HSV color space.

We take the generated color, make it into an RGB color, and make sure r^2 + g^2 + b^2 = 1 .
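A minimal sketch of these steps, using the standard library's colorsys for the HSV-to-RGB conversion. The wave frequencies and phase speeds here are made-up values, not those from lavalamp_colors.py.

```python
import colorsys
import numpy as np

NUM_LEDS = 32

def colors_at(t):
    """Generate one unit-magnitude RGB color per LED at time t."""
    x = np.arange(NUM_LEDS)
    # Two sine waves over the strip, jiggled back and forth in time.
    wave = np.sin(0.3 * x + 1.1 * t) + np.sin(0.5 * x - 0.7 * t)
    # Map their sum from [-2, 2] into [0, 1] and use it as the hue.
    hues = (wave + 2.0) / 4.0
    rgb = np.array([colorsys.hsv_to_rgb(h, 1.0, 1.0) for h in hues])
    # Normalize so every color satisfies r^2 + g^2 + b^2 = 1.
    return rgb / np.linalg.norm(rgb, axis=1, keepdims=True)
```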

See LEDStrip/Audio Processing/lavalamp_colors.py for code.

Tying It Together

At this point, we have a stream of colors for each LED and a stream of magnitudes for each LED. Combining them is straightforward: just take the element-wise product of the two arrays.

Send these values off to the Teensy (using the pySerial library) and we're good to go.
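The final combination step, sketched for one frame. The placeholder colors and magnitudes, the port name, and the baud rate are all assumptions for illustration:

```python
import numpy as np

NUM_LEDS = 32

# One frame: per-LED unit-magnitude colors and scalar brightnesses.
colors = np.ones((NUM_LEDS, 3)) / np.sqrt(3.0)   # placeholder colors
magnitudes = np.linspace(0.0, 1.0, NUM_LEDS)     # placeholder brightness

# Element-wise product: scale each LED's color by its magnitude.
frame = colors * magnitudes[:, np.newaxis]

# Quantize to bytes and ship the frame to the Teensy over pySerial
# (port name and baud rate are assumptions):
# import serial
# port = serial.Serial("/dev/ttyACM0", 115200)
# port.write(np.clip(frame * 255, 0, 255).astype(np.uint8).tobytes())
```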