What is your background and how did you get into broadcast sound?

I have been in live broadcast television since I was 17, just about to graduate high school. I knew I wanted to be involved in audio, and at first that meant music: I wanted to tour with a band or work in a studio. But the locals I talked to in the industry all basically steered me towards television. I started off volunteering with a local production company in Edmonton, and the owner Dave Benson recommended me as a Utility for CFL (Canadian Football). I started working freelance with Dome Productions in 2001 as a teenager and worked my way up to being an A1 around 2010.

What was it about live broadcast sound that attracted you and what keeps you interested?

I really enjoy the variety and challenges that live broadcast sound affords. Looking back, I’m sure I would have been bored working in a studio or with the same band/show every night.

Live television feels like the wild west of audio sometimes, in that some days we’re rushing just to get a show to air, and making it sound amazing comes second to just getting it heard!

I like that intensity and rush. I need that stress and time crunch in order to perform as best as I can. While music or studio work definitely has its own challenges, I feel live broadcast sound is the thrill I need to stay focused.

When did you join Dome Productions and what was your initial role?

I started freelancing with Dome in 2001, and then was hired as a Staff A1 in 2013. While freelancing I started as a utility pulling cable, moved up to an A2 position, and then to A1. Once hired as an employee, my role was to work on some of Dome’s biggest shows as an audio mixer, comms person, and/or guarantor.

Can you give us a bit of background about Dome and its main work?

Dome Productions is one of North America’s leading production facilities providers, offering mobile production facilities, transmission services, studio facilities and full turnkey host broadcast services. Traditionally, most of our work has come from live sporting events, but lately Dome has also been servicing eSports and other entertainment-style productions.

How has that role evolved?

As I’ve become more proficient with new equipment, I’ve found myself on larger shows that require a good knowledge of audio-over-IP and large-scale routing. I went from mainly mixing sports to building and engineering larger setups for entertainment-style shows, especially eSports. We’re involved in so much more networking now as audio engineers. It used to be analogue audio and party lines, but now we’re dealing with IP addresses and VLANs. The job is adopting more and more aspects of a network engineer’s role.

When did you first start working with Calrec equipment?

I had the amazing opportunity to mix a morning news show for several years, and it was a great training ground for the world of live television mixing. While I was there the station purchased a new Calrec Omega console and I had the chance to be involved in its setup and integration. During my long morning shifts, I touched every button, investigated every menu setting and option, and learned the console as best I could. Being comfortable with it became a huge advantage when I started doing freelance A1 work. Since then I’ve made a point of reading manuals and exploring every screen and menu when I’m working on new equipment.

What Calrec consoles and related technologies have you used over the years?

I think I’ve had the chance to work on most of Calrec’s consoles over the years. I’ve used the analog S2, and it’s actually still used in one of our trucks! I’ve used the Omega, Sigma, Alpha, Artemis, Apollo, and Brio. I’ve used many Hydra stage boxes and have become familiar with the setup software, as well as the Waves SoundGrid and Dante cards that Calrec offers.

What was your role in the recent ‘AT&T Super Saturday Night with Lady Gaga’ production?

I was hired as the Audio Guarantor for the AT&T Super Saturday Night with Lady Gaga production. We were using Dome Productions’ newest 4K truck, “Vista”, outfitted with a Calrec Apollo console.

It was my job to build and engineer all the technical aspects of the audio components of the show. We had two A1s mixing the event, and I was there to basically guarantee that everything they needed to use would work. With such a high channel count, multiple multi-track record machines, and a Waves SoundGrid server running multiple racks of plugins, I was responsible for getting everything set up, routed, and working.

For this event, we received three 56-channel MADI streams from the FOH DiGiCo console over coax, 32 analogue stems via a Calrec Hydra2 48×16 stage box over fibre, and a 2-channel board mix via analogue DT-12. The total count from FOH was 202 channels of audio. We added our own audience response mics to complement the ones provided by FOH, and had microphones on three cameras, two EVS machines for the open and close videos, and a stereo return feed from Twitter for monitoring. We landed 128 channels of MADI and 32 stem channels from the Hydra2 Stage Box in the Calrec.

We built our own groups for mixing/processing and, using direct outs of everything, sent three 64-channel MADI streams into an M.1K2 MADI Router to be distributed to the main and backup multi-track recorders. Multi-track recordings were done with two MacBook Pro laptops running Reaper and two RME MADIface XT interfaces. We recorded 164 channels into each as a main and backup record. All the individual mics, the FOH stems, the stems we created in the truck, our two mixes, and timecode from the truck were recorded. All of these channels were available to play back instantly for virtual soundchecks.

What were the workflow challenges of the Lady Gaga production and how did the Apollo console help you overcome them?

One interesting challenge was that there were multiple shows happening in this venue over the few days we were there. We arrived, set up and rehearsed Lady Gaga’s show, and then they packed up to allow a full-on boxing show to take over the venue and the truck! Then Lady Gaga’s show moved back in, and we went back to that mode. That back and forth was challenging in that we were sharing a lot of the resources of the truck and console. Having so much DSP available was helpful, as was being able to save/load port lists. We could give outputs custom nicknames for boxing, and then load a file to change all the outputs to our custom Lady Gaga nicknames. Obviously the show memories were helpful too, letting us save and load different setups for the various shows happening.

Speaking of DSP: one morning on our way to the venue, one of the A1s told me he was concerned that we didn’t have any extra groups available. We were using roughly 10, and he wanted to build a few more. I had the pleasure of telling him the Apollo could handle up to 48 groups if we wanted, so having that high DSP count was great.

What are the standout aspects of the Calrec Apollo console that you used?

There were a few standout aspects of the Apollo. The high fader count was a big one, especially with two people mixing. Having 144 faders available to touch and move meant less time spent swapping between layers. We were able to use the “User Split” feature to let the two mixers work without interrupting each other, while I used the PC touchscreen for any engineering changes. I liked that the PC screen can either follow what’s selected on the console or work independently. We had three users all working separately, selecting channels and adjusting settings without interfering with each other.

The ease with which you can move/swap/clone faders on a Calrec is amazing, especially using the “fader setup” screen. It was very intuitive to drag and drop faders around. On other consoles, you might need to make sure the channel format matches or something similarly convoluted, but on the Calrec it was a simple drag and drop.

We built ourselves different operating layers using the layer split feature. We had a layer for line checking all the inputs, a layer for any technical/engineering channels, and a layer for the actual mix.

Finally, the “replay” feature was a huge help. With it, we can set Input 1 of each channel to be the live input and Input 2 to be the outputs from the multi-track recorders. With one button, we can swap everything from live channels to playback channels and do virtual soundchecks and mix adjustments using the recorded material.

What are the workflow advantages of Calrec technology in general?

It’s not so much a workflow advantage, but Calrec consoles tend to be very clean and transparent in a sonic sense. There is no colouring or character to the preamps, which I find to be a good characteristic. On this show, we used a lot of plugins and all the colour and character of the mix came from that, so it was nice knowing we were dealing with a totally clean console that didn’t interfere with the sonic changes we made in Waves. I’ve also been very surprised at the headroom the consoles have and I don’t think I’ve ever heard the Calrec clip.

In terms of workflow, the routing capability is huge. Having four different direct outs available per channel, two available inputs per channel, plus all the tracks/auxes/groups meant we could send anything anywhere. Also, being able to send tone or TB to individual discrete direct outs meant we could slate/tone out paths without interrupting other paths. It gave us total flexibility.

How has your job changed over time, and where do you think it is going?

The job, at least how I experience it, has become more and more about networking and engineering rather than mixing. Sure, we’re still mixing every show but there’s so much more time involved in engineering, routing, and networking now. In the past, we’d have a handful of analogue DT-12s that ran inside the audio room, a whole bunch of analogue patch cables in the wall, and we’d have a show. Now we have stacks of network switches, bundles of fibre, remote stage boxes, and four different computer screens handling all the software.

The emergence of Dante, OMNEO, and other IP-based audio transport technologies has made the job both easier and harder.

The ability to send such a high channel count over ethernet to multiple devices on a network is great. I recall an eSports event where we had Dante shared between the FOH console, the Calrec, and the intercom system. The director wanted a stage announce key on his intercom to hit the PA, and it was literally one mouse click for me to route that key to an input on the FOH console; the FOH mixer raised the fader and had stage announce within seven seconds! Before networked audio, you’d have had to take a port output, patch it into the console, send it down a stage box or a DT-12, and then patch it into the FOH console.

As an audio nerd, that kind of power is so cool, although alongside that power comes the need to understand and troubleshoot networked audio. It’s nice when it works, but it’s not like analogue troubleshooting. I can’t stick a Qbox on the end of an RJ45 and listen for tone. Now I’m pinging devices and checking for IP conflicts.
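That kind of network sanity check lends itself to simple scripting. As a minimal sketch (the device names and addresses here are hypothetical, not from any real show file), a few lines of Python can flag duplicate IP assignments on a patch sheet before they cause trouble on air:

```python
from collections import defaultdict

def find_ip_conflicts(devices):
    """Return a dict mapping each conflicting IP to the devices that claim it.

    `devices` is a list of (device_name, ip_address) pairs, e.g. exported
    from a patch sheet or a switch configuration.
    """
    by_ip = defaultdict(list)
    for name, ip in devices:
        by_ip[ip].append(name)
    # Only IPs claimed by more than one device are conflicts.
    return {ip: names for ip, names in by_ip.items() if len(names) > 1}

# Hypothetical example: a console, a stage box, and an intercom frame on one VLAN.
patch_sheet = [
    ("calrec-apollo", "192.168.10.10"),
    ("hydra2-stagebox", "192.168.10.20"),
    ("intercom-frame", "192.168.10.10"),  # accidental duplicate
]

print(find_ip_conflicts(patch_sheet))
# → {'192.168.10.10': ['calrec-apollo', 'intercom-frame']}
```

In practice the device list would come from whatever inventory or config export the truck keeps; the check itself is just grouping by address and reporting duplicates.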

What changes have you seen in the way audio is mixed and delivered to audiences?

The biggest change has been content delivery over the internet. We’re not on “TV” anymore, we’re on the internet. The Lady Gaga show was streamed on Twitter and Periscope, it wasn’t on any conventional TV channels.

eSports events are streamed on Twitch or YouTube, and people are watching these shows on phones and computers, not on television sets. Knowing this, we have to carefully consider how these shows sound on a phone and mix appropriately. It’s funny to think we had 202 channels of audio being mixed down to a tiny phone speaker. The Lady Gaga show was mixed pretty loudly, much louder than would have been allowed on TV. We weren’t sure what type of processing was happening down the line and were constantly checking the stream to see how it was translating.

Our mix stayed pretty accurate, and we only saw a few dB of volume increase. Many of these events have multichannel transmission audio, with many different mixes and microphones being ISO’d down transmission. We’ll have international mixes, FX mixes, clean mixes, dirty mixes, etc., with all of these being sent down multiple transmission paths. The eSports event I’m touring with right now has 12 multichannel transmission paths, all over the internet. The Calrec’s huge routing capability makes this a pretty easy task though.

Where do you see audio programming going in the next five years?

I mainly see a continuation of what we’ve been doing. I see things moving more and more towards IP-based technologies, digital audio, Dante booth kits, and audio and intercom over ethernet and fibre rather than analogue. In the live sports world, Canada has been behind the US in terms of adopting these new IP and digital transport technologies. We’ve been using them in the trucks, and between trucks, for years, but our stadiums and arenas are still wired for analogue audio. I hope to see that change over the next five years, eventually allowing us to build our booth audio with a couple of strands of fibre.