A Sad Story

When I finished 118 (Intro to Programming), I was pretty convinced programming was awful, but just to be absolutely sure, I wanted to try programming audio things to see if that was any less awful. Unfortunately, the tutorials I generally found were prewritten projects that appeared to be 90% asterisks and ampersands, and after a while I gave up. The first real audio programming I did was with Will Pirkle’s RackAFX in Digital Audio II, after taking both 218 and 318. RackAFX is without a doubt the best way to learn how to write audio plugins. So why not jump straight into RackAFX if you’re at a beginning programming level? The main issue is that you may not have gotten to the topic of dynamic memory yet, which means you would have to get very comfortable with seeing these guys *& without understanding what they do. If you’re feeling bold you could try it, but I’d recommend holding off until you’ve learned a bit more programming.

The Point(er) of This

The goal of this post is to introduce people at an introductory programming level to audio programming, while introducing as few new programming concepts as possible. When I started writing this I specifically had UM Music Engineers who’ve taken 118 in mind, but I’ve tried to make it more generally accessible to people who don’t have an audio background or who took intro programming outside of UM. I have left notes in there for UM people to relate details here to things you learned in 118.

The code this walks you through reads in a wave file, does some simple signal processing, and outputs a new wave file. The nice thing is it’s all done in a single, relatively small C++ file with no weird project settings or non-standard libraries. The goal was to have it use only concepts you already learn in an intro programming class. There is one section where you’ll be exposed to something new, but it’s not that bad. If any part of it gets too heavy, just skip it and come back to it later; no one part of this is worth getting stuck on.

Additionally, I think this may be helpful to people who jumped into RackAFX but never considered how the samples were magically getting to the processAudioFrame() function.

Spoiler Alert

Because this aims for simplicity, there are some inevitable limitations. The two big ones are that it can only handle 16-bit audio and it hard limits the length of file you can read in. It does that because correcting those problems properly involves dynamic memory, which I’m assuming you haven’t learned about yet.

Programming and Audio: The Very Basics of Digital Audio

I’ll try to make this quick since you’ve probably heard parts if not all of this before, but the following is all the digital audio theory you should need to understand the rest of this post. If all the details of this don’t click right now, don’t worry. After reading this I really just want you to understand in general what a sample is.

Sampling Rate

A continuous signal is one that can be measured at any point in time to find an exact value of the amplitude at that point.

Figure 1. A continuous signal.

Figure 1 shows a signal where you could theoretically look at any point in time and find a value for the amplitude of the signal at that time. Unfortunately, this isn’t possible in the digital world: you would need to record the amplitude at an infinite number of points, and you can’t record (let alone store) an infinite amount of anything.

Instead, an audio signal can be represented digitally as a series of samples. A sample is a measure of the amplitude of a signal at a given time. So if you have a series of samples over time, you can represent a signal.

Figure 2. A signal as a series of samples.

So how many samples do you need to take to represent a waveform? To accurately represent a waveform, the number of samples per second taken must be greater than twice the highest frequency contained in the signal. The number of samples taken per second is referred to as the sampling rate, and is measured in Hertz (Hz). The frequency that is exactly 1/2 the sampling rate, and is therefore the highest frequency you can accurately record, is referred to as the Nyquist Frequency.

SR > 2*MF

Nyquist Theorem. SR = sampling rate, and MF = maximum frequency.

So what’s the highest frequency someone may WANT to record? The theoretical limit of human hearing is 20kHz, so a sampling rate over 40kHz is needed. You end up needing a bit of cushion to filter out all frequencies above Nyquist without losing frequencies below 20kHz, so you don’t want to use exactly 40kHz. 44.1kHz and 48kHz are popular sampling rates.

Figure 3. A visualization of the filtering out of frequencies above the Nyquist Frequency.

If this didn’t make sense, or you want more explanation and pretty animations on this, go here. In reality, for this tutorial it’s not necessary to understand anything beyond the fact that a digital audio signal is made up of samples, and those samples are what we’re going to be dealing with.
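To make the idea of a sample concrete, here’s a small sketch (the function name and the values are mine, not from the tutorial code) that computes sample number n of a 440 Hz sine wave at a 44.1 kHz sampling rate:

```cpp
#include <cmath>

// A sample is just the amplitude of the signal measured at one instant.
// Sample n of a sine wave is taken at time n / sampleRate seconds.
double sine_sample(int n, double frequency, double sampleRate) {
    const double pi = 3.14159265358979323846;
    double t = n / sampleRate;                 // time at which sample n is taken
    return std::sin(2.0 * pi * frequency * t); // amplitude at that instant
}
```

Sample 0 is taken at time 0, sample 1 at 1/44100 seconds, and so on; a whole second of this signal would be 44100 of these values in a row.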

Bit-depth

The second important topic to understanding samples is bit-depth. With sampling rate, the issue was that it’s impossible to take or store an infinite number of samples. Similarly, you don’t have an infinite number of values available to represent each amplitude. Bit-depth tells you how many bits you have to represent the amplitude of the signal at each sample.

Figure 4. A continuous signal (red), sampled with a bit depth of 4 (samples in blue)

Freshly ripped from Wikipedia, above is a signal sampled with a bit-depth of 4. I know this not only because Wikipedia told me, but because there are 16 possible values for the amplitude (-8 through 7), and you need 4 bits to represent 16 values (2^4 = 16).

As you can see, the samples don’t perfectly capture the red, continuous signal. You can clearly see at the 4th sample that the signal was in between 6 and 7, but the sample was placed at 7, because you can’t have a sample with a value between 6 and 7 with this bit-depth. The process of samples being rounded to values not exactly equal to the continuous signal at that time is referred to as quantization. The difference between the rounded value, and the real value is known as quantization error.

Thus a higher bit depth can lead to more accuracy*, but also requires more space. For this exercise we’re using 16-bit audio which means we’ll have 2^16 different values to represent the amplitude at each sample.

*Note: It’s tempting to think that you’ll be really cool if you use the highest possible sampling rate and bit depth, but this is very not true. A further discussion on that can be found here.
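If you want to see quantization in code, here’s a minimal sketch (names are mine, not the tutorial’s) that rounds an amplitude on the range -1.0 to 1.0 to the 16 values a bit-depth of 4 allows, just like the blue samples in figure 4:

```cpp
#include <cmath>

// Quantize an amplitude in [-1.0, 1.0) to a 4-bit integer (-8..7).
// The clamp reflects the fact that a signed type has one fewer
// positive value than negative.
int quantize4bit(double amplitude) {
    int q = static_cast<int>(std::lround(amplitude * 8.0));
    if (q > 7)  q = 7;
    if (q < -8) q = -8;
    return q;
}
```

For example, an amplitude of 0.8 becomes the stored value 6 (which represents 6/8 = 0.75), so the quantization error is 0.05.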

main() and The General Plan.

For the remainder of this post I’m going to walk through the code function by function. You can find the code in full here: link to the code.

main() lays out the plan for this code, which is the following:

1) Make a struct that will store the file info and audio samples.
2) Read the data from the audio file into the struct.
3) Do any signal processing you like.
4) Output a new, manipulated file.

Obviously you’ll need to change the file paths to where your audio sample is located (Make sure your file is 16-bit).

Note: On rabbit, main() can be a void function because Murrell thinks having main() return an int is silly. Typically, C++ compilers require main() to return an int, so unless you’re doing this on rabbit, you have to use int main() and just have it return 0.

The Struct

A wave file has two main parts: a header telling you information about the audio contained in the file (sample rate, bit depth, etc.), and then the audio samples. For in-depth information on what’s in the header and how the samples are stored, click here. For now just read the section on the file header; I’ll link to that page again later when we get more in depth on samples.

The struct is just a coded version of what you’ll find in a wave file. For instance, the first piece of data in a wave header should contain the letters “RIFF” or “RIFX”. Thus the first member, m_cChunkID[4], is a char array with space for 4 characters.

The two arrays at the bottom are where the samples will be held. The reason for having two arrays will be explained in the section on reading audio samples. So how do we determine the size of the arrays if we don’t know how many samples the file contains until we read it? Remember, you can’t declare an array with a variable as the size, which means you can’t read the file and then decide how big to make the array. Think about this for a few minutes and you’ll realize you’d need a way to make the array change size after initialization, which I’m assuming you don’t know how to do.

To get around this, we just set a global constant, gMax_samples, to represent the maximum number of samples to read. If the number of samples in the file turns out to be greater than gMax_samples, then only gMax_samples are read in.

Note 1: For those who took 118, this is the same solution as you used to make code that would accept a maze of different sizes in your last lab.

Note 2: the naming convention I’m using comes from a system called Hungarian notation. I may not follow it exactly, but it’s still helpful. The basic idea is that each name starts with a lower case letter telling you the type of whatever you’re naming. For instance, “n” is used for int, so you instantly know m_nChunkSize is an int. Anything that starts with “m_” is a member of a struct/class.
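For reference, here’s a trimmed-down sketch of what the struct looks like. The real struct has more header fields than shown here, and the value of gMax_samples is my own placeholder:

```cpp
// Hard limit on how many samples we'll read (placeholder value -- the
// real code may use a different limit).
const int gMax_samples = 1000000;

struct SWaveFile {
    // --- header fields (the first 44 bytes of the file) ---
    char  m_cChunkID[4];      // the letters "RIFF" or "RIFX"
    int   m_nChunkSize;       // overall file size, minus 8 bytes
    int   m_nSampleRate;      // e.g. 44100
    short m_sBitsPerSample;   // must be 16 for this code
    // ...several more header fields go here in the real struct...

    // --- sample buffers ---
    short m_sBuffer[gMax_samples]; // raw 16-bit samples as read from the file
    float m_fBuffer[gMax_samples]; // samples converted to the range -1.0 to ~1.0
    int   m_nNumSamples;           // how many samples were actually read
};
```

Note this struct is large (a few megabytes), which is another cost of avoiding dynamic memory.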

Reading From a Wave File (the least fun part)

Once the struct is made for the audio file, it’s time to read the data from the file into the members of the struct. This function takes two parameters: a SWaveFile object passed by reference, and a string holding the file location. The beginning of the function should look familiar; it just creates an ifstream object that will be used to read the file.

Now comes the one part of this that will not look familiar. In the past, when you’ve read from files, you probably used the >> operator to read strings, ints, chars, etc. from text files into the corresponding data types. However, a wave file is not a text file; it’s a binary file. You can think of a binary file as a stream of bytes. Unfortunately, the >> operator does not work for reading binary files. I found a decent explanation of why that is here.

The second image at this very useful link gives a good visual of how the data is stored in a wave file. In the image, each byte is represented as a 2-digit hexadecimal number: 2 hex digits = 1 byte (2^8 = 16^2 = 256). Here is a wave file of mine viewed in the same format, which I created in UNIX using a tool called “od”.

Figure 5. The far left column shows the position in the file. The stream of two-digit hex values to the right of that are values in the wave file and should be read left to right. The first 44 bytes of data make up the file header, and every number after that is part of a sample value. To help you follow along, the very first value, 52, when converted from hex to decimal gives 82, the ASCII symbol for “R”, the first letter of RIFF. The second value, 49, converts to 73, the ASCII value for “I”, and so on. The red rectangle covers the word “RIFF.” The blue rectangle covers the first left-channel sample. The brownish rectangle is the first right-channel sample. Notice they are each 2 bytes, or 16 bits.

If the last paragraph didn’t make any sense, don’t worry about it. To read in values from a wave file (or any binary file), a method of ifstream called read() is needed. read() takes two parameters, the first being the location where you want the data to be put, the second being the number of bytes you want to put there. The tricky part is that the first parameter is expected to be a char*. This brings up the ugly topic of dynamic memory, but a super simplified explanation is that the first argument to read needs to be a buffer of chars. An array of chars fits this description, thus m_cChunkID can be read into like this:

ifAudioFile.read(wf.m_cChunkID, 4);

char[] is not the same thing as char* but for our purposes they work the same way. Don’t worry about the difference for now.

However, for all other types, such as ints, we have to trick read() into thinking it’s reading into a char*, which can be done through casting. For this type of cast you need a particularly ugly thing called reinterpret_cast, which leaves us with the following:

ifAudioFile.read(reinterpret_cast<char*>(&wf.m_nChunkSize), sizeof(int));

All this does is put 4 bytes into an int, wf.m_nChunkSize. It’s 4 bytes since that’s the size of an int. That’s it. Note that the second parameter is equivalent to 4, but I used sizeof(int) to reinforce the fact that 4 bytes is the size of an int.

This is the only part of the whole code that you haven’t seen before, but it’s nothing special. You’re just putting data into variables as you have before; the only difference is that the data is stored in a binary format, instead of nicely typed-out numbers separated by white space in a text file. You have to use some different methods, but the result is the same.
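Putting those pieces together, the start of the header read might look like this. This is a sketch with only the first two header fields (the real function reads many more, in the order the header lays them out); note the std::ios::binary flag when opening the file, since we’re reading bytes rather than text:

```cpp
#include <fstream>

// Minimal stand-in for the struct -- the real one has more fields.
struct SWaveFile {
    char m_cChunkID[4];
    int  m_nChunkSize;
};

bool read_header_start(SWaveFile &wf, const char *path) {
    std::ifstream ifAudioFile(path, std::ios::binary); // open in binary mode
    if (!ifAudioFile) return false;

    // Bytes come out of read() in the same order they sit on disk, so we
    // just read each header field in order.
    ifAudioFile.read(wf.m_cChunkID, 4);
    ifAudioFile.read(reinterpret_cast<char*>(&wf.m_nChunkSize), sizeof(int));
    return ifAudioFile.good();
}
```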

Reading Audio Samples

At this point, the file header has been read, but not the audio samples. In between these two steps, a few trivial things happen in the code, such as checking that the file header read was successful and limiting the number of samples we intend to read (remember, we can’t read more than our limit, gMax_samples). This would be a good time to review the information here about how the samples are stored in a wave file.

The goal here is to read the samples from the file into the array so that there’s one sample in each slot. To do this, we need an array where each slot is the size of one sample. Since we’ve agreed to use 16-bit samples for this exercise, we’ll need an array of shorts: a short has a size of 2 bytes, which is equivalent to 16 bits.

The following reads the samples into the short buffer:

for (int i = 0; i < wf.m_nNumSamples; i++)
    ifAudioFile.read(reinterpret_cast<char*>(&wf.m_sBuffer[i]), sizeof(short));

This reads through the file one sample at a time (2 bytes at a time), putting the current 16-bit sample into the current slot of the short array. The reinterpret_cast is there because, once again, read() expects a char* as the first parameter, but we’re being rebellious and reading into an array of shorts. Adding the & in front of wf.m_sBuffer[i] is also part of making read() happy; don’t worry about it for now. Finally, the sizeof(short) in the second parameter is just to emphasize that two bytes is the size of a short.

We now have a buffer full of 16-bit samples. Let’s think about those samples for a minute. An integer made up of 16 bits gives a range of 65536 values (2^16). Since we’re dealing with audio, and signals generally oscillate from positive to negative, it makes sense that the samples should be signed integers, which have the range -32768 to 32767. Notice that a signed data type has one more negative value than positive. This is because there is an even number of possible values (65536), and having the number 0 is very useful.

The problem with having integers in this range (-32768 to 32767) is that if you try to multiply two samples together, the result will likely go out of range. The solution is to convert the range to -1.0 to 1.0 (the max is actually 0.999~ because there is one less positive value). You can multiply any two numbers in the range -1.0 to 1.0 and never go out of range.

The second for loop in the code below does this conversion. It puts the samples into the float array, which is where the samples will stay while we do all signal processing. Doing processing with samples on the range -1.0 to 0.999~ is the audio standard.
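That conversion loop can be sketched as follows (the function wrapper and names are mine). Dividing by 32768, the magnitude of the most negative short, lands every sample on the range -1.0 to 0.999~:

```cpp
// Convert raw 16-bit samples to floats on the range -1.0 to 0.999~,
// ready for signal processing.
void convert_to_float(const short *sBuffer, float *fBuffer, int numSamples) {
    for (int i = 0; i < numSamples; i++)
        fBuffer[i] = sBuffer[i] / 32768.0f; // -32768 -> -1.0, 32767 -> 0.99997
}
```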

We now have a buffer full of floats on range -1.0 to 0.999~.

It might be nice to handle 20 and 24-bit audio, but since there is no standard C++ int type of those sizes (2.5 bytes and 3 bytes respectively), it would require some trickery to make that happen, which I won’t go into now. Putting 24 bits directly into a 16 or 32-bit array will not work.

Signal Processing (The most fun part)

I made two functions here to do some simple effects.

The first is essentially a volume change function. It takes as a parameter a gain change in dB, converts that value to a linear one, and multiplies each sample by that linear value. If you’re a MuE, you should know the dB to linear conversion from Joe’s class. If you’re not, just take my word for it, or read this.
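A sketch of what such a gain function might look like (the names are mine, not necessarily the tutorial’s). The dB to linear conversion is linear = 10^(dB/20):

```cpp
#include <cmath>

// Apply a gain change, given in dB, to every sample in the buffer.
void apply_gain(float *fBuffer, int numSamples, float gainDB) {
    float linear = std::pow(10.0f, gainDB / 20.0f); // e.g. -6.02 dB -> ~0.5
    for (int i = 0; i < numSamples; i++)
        fBuffer[i] *= linear;
}
```

So a gain of 0 dB leaves the signal untouched, and roughly -6 dB halves every sample.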

The second is just a reverse function. It works the same as reversing an array of anything.
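A sketch of the reverse function (names are mine), which swaps samples from the two ends of the buffer working inward, exactly like reversing any array:

```cpp
// Reverse the order of the samples in the buffer, playing the audio backwards.
void reverse_samples(float *fBuffer, int numSamples) {
    for (int i = 0; i < numSamples / 2; i++) {
        float temp = fBuffer[i];
        fBuffer[i] = fBuffer[numSamples - 1 - i];
        fBuffer[numSamples - 1 - i] = temp;
    }
}
```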

Ideas for more effects: If you’ve taken digital audio then you’ve learned that low pass and high pass filters can be implemented using a single delay line. You may be surprised at just how easy that would be to implement, so give it a shot. What about a rudimentary limiter? Or a delay? None of these require any new programming concepts, just some DSP and some creativity. To create any effect, just create a function, pass in the SWaveFile object you want, and any additional parameters you’ll need for your effect.
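As a starting point for experimenting, here’s a sketch (my own, not from the tutorial code) of about the simplest possible low pass filter using a single sample of delay, y[n] = 0.5*(x[n] + x[n-1]). Averaging each sample with its neighbor smooths the signal, attenuating high frequencies:

```cpp
// One-sample-delay low pass: y[n] = 0.5 * (x[n] + x[n-1]).
void simple_lowpass(float *fBuffer, int numSamples) {
    float previous = 0.0f; // x[n-1]; starts at silence
    for (int i = 0; i < numSamples; i++) {
        float current = fBuffer[i];
        fBuffer[i] = 0.5f * (current + previous);
        previous = current; // remember x[n] for the next iteration
    }
}
```

Swapping the + for a - in the average gives you a crude high pass instead.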

Writing to a New Audio File

Much like outputting a text file, there isn’t much to writing a new wave file. Pretty much just use ofstream’s write() function to dump the same data you read into the struct into the new file. However, there are a couple things to keep track of.

1) You can’t write the buffer of floats directly into the wave file, since it’s expecting values in the range -32768 to 32767. So we have to convert the samples back into the short buffer. This is done with the function below called convert_for_output().

2) The file size may have changed, since we limited the number of samples that could be read in. This means that the overall chunk size (m_nChunkSize) and m_subChunk2Size need to be updated in the new file. I was lazy here and only updated m_subChunk2Size since it’s more important, but you can update m_nChunkSize too to be totally correct.
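The conversion in point 1 could be sketched like this (a guess at convert_for_output()’s shape; I’ve added clipping so out-of-range floats can’t wrap around):

```cpp
// Scale the floats back to the range -32768 to 32767 and store them
// in the short buffer, ready to be written to the file.
void convert_for_output(const float *fBuffer, short *sBuffer, int numSamples) {
    for (int i = 0; i < numSamples; i++) {
        float scaled = fBuffer[i] * 32768.0f;
        if (scaled > 32767.0f)  scaled = 32767.0f;  // clip out-of-range values
        if (scaled < -32768.0f) scaled = -32768.0f;
        sBuffer[i] = static_cast<short>(scaled);
    }
}
```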

What now?

Assuming you’ve made it this far, you’ve now seen how to read in, modify, and write to a wave file. There are still plenty of things you could do to this code to make it more useful, and learn things along the way.

1) Experiment with new effects. This is the best thing you could do to expand your audio knowledge.
2) Expand it to allow 20/24-bit input (you should probably understand dynamic memory for this).
3) Allow any number of samples as input (you could do this pretty early in 218).

Let me know if you have any suggestions for this post.