Netflix's excellent limited series Maniac takes place in an alternate modern-day world, a world like what people of the '80s thought 2018 would look like. It sounds different too, and this in-depth interview gives you the story of how Emmy-winning supervising sound editor/sound designer Mariusz Glabinski and his sound team carried out clever, adventurous sonic experiments to create a world that sounds archaically digital, analog, and mechanical.

Interview by Jennifer Walden. Note: Contains spoilers

“Have you ever had ‘the blues’? Or, maybe you’re having trouble with grinding your teeth… Do you suffer from anxiety, PTSD, body dysmorphia, hypo-active sexual desire disorder? It doesn’t matter. We can fix you,” say Dr. James K. Mantleray (Justin Theroux) and Dr. Robert Muramoto (Rome Kanda), heads of the Neberdine Pharmaceutical Biotech ULP drug trial in the new Netflix series Maniac, directed by Cary Fukunaga.

The speculative fiction series is set in an alternate modern-day world where 1980s computing tech hasn’t grown up to be as sophisticated as our tech in reality. For example, instead of internet pop-ups, there are “Ad Buddy” representatives who read advertisements to people in exchange for payment for things like subway rides, meals, and packaged goods. It’s a reality similar to our own, but twisted in a way.

That concept is steeped into every layer of the series, including the sound. Emmy-winning supervising sound editor/sound designer Mariusz Glabinski and his sound team created a world that sounds archaically digital, analog and mechanical. A world of tape loops and Speak & Spell-style automated voices.

The story follows the lives of two participants in an experimental drug trial that seeks to cure mental illness. There’s Owen (Jonah Hill) — who is a diagnosed schizophrenic — and Annie (Emma Stone), who has been coping with the death of a loved one by using the “A” pill of the ULP drug study in a recreational manner. The supercomputer, known as GRTA — which is supposed to be monitoring/guiding the mental trips of the drug study participants — is dealing with loss, too, and so goes a little crazy during the experiment. Wires get crossed and the pill experiences of Owen and Annie intermingle. Their mental trips together aren’t reality, but the connection they create between them changes their realities in the end.

Here, Glabinski talks about creating the sound of this alternate New York City, designing the Neberdine Pharmaceutical Biotech lab and the voice and processing sounds of the supercomputer GRTA, plus shares sonic details for some of the wild mental trips the participants experience.





What were series creator Cary Fukunaga’s goals for sound on Maniac? How did he want to use sound to help tell this story?

Mariusz Glabinski (MG): I first met Cary while working on True Detective in 2014. He’s very particular about sound. Sound is very important to him. From working on True Detective, I knew that he was going to be very demanding in terms of sound on Maniac.


But in talking to him about Maniac, I realized this would be a completely different beast, where each part would be very specific and need lots of variations and experimentation to find the right sound.

He wanted to use sound to create a world that was contemporary but had different innovations and technologies. It’s an alternate present, like what present day would be if the microprocessor had never been invented. So we are stuck with some old technology, like computers from the ‘80s and ‘90s, but they’re more advanced in look and sound.



The world of Maniac has no cell phones. The Internet probably doesn’t exist. Everything is more analog. Cary describes this world as being what the people of the ‘80s thought 2018 would look like. That’s why it’s like a futuristic ‘80s technology. So, we’re stuck with the old technology but with new advancements added to it.

He wanted each specific sound to be distinctive and recognizable, but not take center stage — to be a part of life for the characters and not too science fiction. He wanted familiar, everyday sounds but with a little twist, to make them a little unusual at times.

I tried to apply this idea to almost everything, taking everyday recordings and adding some more extreme element or twisting them slightly through processing.

My editing room on the show was right next to the picture department at Light Iron NY, so picture editors Pete Beaudreau and Tim Streeto were able to just walk to my room or I could walk to theirs to watch some new VFX or a new scene they just completed, or see some change in picture that would reflect on sound.


Cary was always there editing as well, so we could bounce many sound ideas back and forth. They were constantly reshaping and changing as we were reviewing scenes and I could have instant feedback. It was a helpful process for everybody. I was very fortunate to be able to work so closely with all the creators on a daily basis. Usually you just spot a film or a show and then you work on it until before the mix; you may review some scenes with the director and then you are mixing. This wasn’t the case! And I wish more projects could really take advantage of this model of working.

This was especially helpful since working with Cary is often a big improvisation. He may like and approve some sounds that you’ve talked about during several spotting sessions, but then as the story or picture is reshaping, he often will ask for something different and go in a completely different direction. So you know that you always have to be ready for the unexpected and basically have sound coverage for everything with a few variations for each element.

Tell me more about Maniac’s alternate universe. There are old computers, and robots that scoop dog poop off the sidewalks. There are people who work as an ‘Ad Buddy’ and ‘Friend Proxy.’ How did you help to bring these ideas to life through sound?

MG: In the world of Maniac, Cary saw the ‘Ad Buddy’ as the analog version of what’s going on with the Internet, with all these pop up advertisements.

In New York City, where Owen and Annie live, I didn’t use any New York City street ambience. Following the idea of using familiar sounds with a little twist, I used traffic and street sounds from Moscow and Tokyo.


For the subway station — when Owen is walking with the Ad Buddy and riding in the subway car, we used Paris and Tokyo subway sounds. I’m sure that probably only New Yorkers may pick up on the subway and street sounds being off, but again, it’s this idea of having a familiar sound that’s not quite right. It’s still a subway sound but it’s not the right subway sound.

For the car sounds, I tried to use real car engines, but the engine you hear isn’t the right sound for the car you see. Maybe the engine is too beefed-up for the little car you’re seeing. To that, I added some electronic and synth elements, or sounds that I recorded from a circuit-bent Speak & Read. I would run those through a Doppler plug-in and add them to the car-bys.
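As a rough illustration of what a Doppler plug-in does to a static recording, here's a minimal numpy sketch of the idea (the speeds, distances, and method are hypothetical stand-ins, not the actual tool or settings used on the show): it resamples the source with a time-varying rate derived from the radial velocity of a simulated pass-by.

```python
import numpy as np

def doppler_passby(signal, sr, speed=20.0, closest=5.0, c=343.0):
    """Crude Doppler pass-by: resample `signal` with a time-varying rate
    based on the radial velocity of a source driving past the listener.

    speed   -- source speed in m/s (constant velocity, passes at mid-clip)
    closest -- closest approach distance in m
    c       -- speed of sound in m/s
    """
    n = len(signal)
    t = np.arange(n) / sr
    x = speed * (t - t[-1] / 2)           # source position along the street
    dist = np.sqrt(x**2 + closest**2)     # distance to the listener
    radial_v = speed * x / dist           # negative while approaching
    ratio = c / (c + radial_v)            # Doppler ratio: >1 approaching, <1 receding
    read_pos = np.clip(np.cumsum(ratio), 0, n - 1)   # warped read index, in samples
    return np.interp(read_pos, np.arange(n), signal)
```

The pitch rides above the source frequency while the car approaches and sags below it as it recedes; a real Doppler plug-in would also handle level, air absorption, and stereo placement.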






So for example, when you see Owen walking with his father on the street — that’s the first time we see the robotic pooper-scooper (aka, PooBots) — we have the sounds of cars passing by and there are those slight changes and alterations we made. It’s a car engine but maybe it’s some kind of new engine that was created in this alternate world.

Owen is living on Roosevelt Island in NY, which is a very narrow, small and distinctive island sitting right in the shadow of Manhattan. His apartment is tiny and claustrophobic. You hear neighbors through the walls and the buzz of big neon signs right outside his window. We used all those elements to make his apartment even more uncomfortable, claustrophobic, and depressing.

Annie is living in Chinatown in Manhattan, which is loud and chaotic. There are advertisements everywhere on the streets, on the passing trucks, and from the store fronts. Cary wanted to hear multiple languages, most distinctively Japanese and Russian throughout the city and Chinese around Annie’s apartment.

I love the subway ticket machine. It sounds like an old Speak & Spell! Is that how you created that automated voice sound?

MG: That was the first sound that I did on the show. They contacted me two days before they started shooting the scene in a Queens (New York City) subway station. Cary likes to play as many sounds as possible on location as he shoots, knowing that he may replace some of them later. But still, the idea is to have the sound with the natural reverb from the actual location so the actors can react to it. So they asked me to create a few versions of the subway ticket machine. I created about seven, which I sent to them before the shoot. Out of that, they chose three and we did some more variations on those.


I have a whole storage room full of old ‘junky’ equipment and computers, which turned out to be very useful for this project. I have an old Atari 800XL computer from the ‘80s, which has an archaic sound synthesis program. I used that to do some of the automated voice for the subway ticket machine and the PooBots. I did some processing on it with the iZotope VocalSynth and the Vocoder in Native Instruments Razor. I then processed some of them using Audio Ease’s Speakerphone.



But you’re right. The one we ended up using for the subway ticket machine was a version from the Speak & Spell recording. They were actually triggering this on location, to the actor’s actions on the set.

Then the PooBots’ voices were from that Atari software with some applied processing. The PooBots’ mechanical sounds were redesigned a few times. First, we tried servos and mechanical sounds that we recorded using contact mics at the Hall of Science in New York — which is a large hall with a vast amount of wonderful science experiments. Also, our Foley department — Jay Peck and Igor Nikolic from Stepping Stone Foley — did some great recordings of different mechanical parts. We had that version in the cut for some time but then Cary asked for a new idea for the PooBots, without any servos or motors inside, so they’re more “nuclear” sounding contraptions.


For that, I experimented with some synthesizer sounds as well as contact mic recordings of different kitchen appliances. I think I used coffee and espresso makers and dishwasher sounds, plus electromagnetic recordings we did from microwaves, TV sets and monitors. The beep is from Light Iron’s kitchen dishwasher, which was always beeping at the end of a cycle and driving me crazy because the sound was so high frequency and it carried through the hallways. One day I just stuck a contact mic to it and recorded a few sounds, including the beep. In the end, we ended up using a little of both ideas — servos and so called “nuclear” sounds — for the PooBots.

Overall, it was a lot of experimenting and trying to get as many interesting ideas as possible, even without knowing what I would use those for at first. I was building a library and choosing sounds that fit. To me, this is the most fun part of the process.

Inside the NPB lab (Neberdine Pharmaceutical Biotech), how did you want the facility to feel and how did you create the sounds for this environment?

MG: My first step for those various rooms was to just cut background sounds using different low energy drones — some of them pulsating and some of them having a kind of breathing quality. This was stuff that I either pulled from my library, had recorded on previous projects, or created using synths, like Omnisphere (by Spectrasonics), Absynth (by Native Instruments), or Dust (by SoundMorph).

I would layer them in Native Instruments Kontakt and experiment with playing those tones on the keyboard in real time to picture. So each room would have a more distinctive sound than just using room tones.

For the reception area and hallway above ground, I recorded the actual location. It was shot in the main hall of Queens College (which of course they changed the look of for the show).

We recorded a lot of voices in many languages in different environments. Cary likes the idea that the announcements we hear, the advertisements on the street, and other sounds are being played back from tape loops. So there are layers of distant announcements in Japanese playing in that big lobby. That plays way in the background and creates a windy and humming feel. They are welcoming you or describing what is going to happen but it is part of the atmosphere. Then on top of that, through a PA system, you have ADR lines from the actress who is calling the participants’ names.


In the intake test room, there’s a modified polygraph machine. The idea for the actual machine changed a lot during the project, from a slide projector-type device to a video display with more electronic-sounding cues. In the end, we came back to the original slide projector idea and went with the analog-sounding machine, with pins and some electromagnetic elements added to it.

When we go underground to the lab with the GRTA main computer, it is a hybrid of analog and 8-bit sounds with some mechanical parts for all the blinking lights.

That computer was actually built and those lights were really blinking on the set. It wasn’t something that was added as VFX in post. There was a guy who programmed those lights to blink the way they blink because, again, Cary likes to have practical set pieces.

The blinking sound of the lights was something that we had recorded at the Hall of Science.



For the anechoic chamber — where the participants ingest the pill and go through their experiences — that (as the name implies) is a very quiet, dead sounding place. This is always an interesting task for a sound designer: how do you create ‘silence’ in a film? I used some low-end hums and drones playing very low. Then during the mix, we removed nearly all the Foley that we recorded for that room, like all the footsteps for example, leaving only hints of a rustle track. We wanted to make it feel like the room is swallowing the characters together with the sounds. The brainwave heat-pads that are locked in on each participant have heating, radiation, and microwave elements to them. Again, we were able to use those electromagnetic recordings that we did with a contact mic.

The common room, where the participants sleep, eat, and hang out, that has a huge diorama simulating the outdoors. It emits different sounds depending on the time of day. The direction from Cary was to have it feel like those sounds were coming from a tape loop and being played over a tiny speaker. So it sounds kind of crappy and not very high-tech.

There are one or two scenes where you actually see the diorama change to a different time of day. For that I chose some obnoxious, busy bird sound recordings and crudely cut them into short parts and looped them so that you can actually hear the loop, almost like a skipping record. I played that back through a tiny speaker in the booth and re-recorded it. I added some static and crackling sounds to make it sound more deteriorated. During the mix, we ran it through the Audio Ease Indoor plug-in.
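The crude cut-and-loop treatment he describes is simple to sketch: slice off a short segment, repeat it with no crossfade so the seam reads like a skipping record, then layer hiss and sparse crackle. This is a hypothetical numpy illustration of the idea, not the actual chain used on the show:

```python
import numpy as np

def crappy_tape_loop(clip, sr, loop_len=0.8, total=4.0, hiss=0.01, pop_rate=0.002):
    """Loop a short slice with an audible seam, then 'age' the playback
    with a constant hiss bed and sparse crackle pops."""
    rng = np.random.default_rng(0)
    seg = clip[: int(loop_len * sr)]                # crude cut, no crossfade
    reps = int(np.ceil(total * sr / len(seg)))
    out = np.tile(seg, reps)[: int(total * sr)]     # audible skip at each seam
    out = out + rng.normal(0.0, hiss, len(out))     # tape hiss
    pops = rng.random(len(out)) < pop_rate          # random crackle positions
    out[pops] += rng.uniform(-0.5, 0.5, pops.sum())
    return out
```

Re-recording the result through a small physical speaker (or an IR-based room plug-in, as they did with Indoor) finishes the "tiny diorama speaker" effect.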

I also added the sound of a jukebox mechanism switching records for the start of each tape loop. So it sounds like the tape is switching from night to day. There’s a little ramping up sound and slowing down sound during the tape changes. It’s subtle but if you pay attention you’ll hear it. That machine was fun to do.

Then up on the 77th floor in “Yoda’s” room — where Dr. Azumi Fujita (Sonoya Mizuno) goes to talk with ‘Yoda’ through the TV — I used some wind drones for that space since the room is very high in the sky. We were able to use some drones made up of winds and some very slowed and pitched down wind chimes and Japanese bells being struck slightly, which again are played as melodic backgrounds.

Originally, we also had some sounds of futuristic drones and ships flying by even though there was never any VFX of that planned, but we ended up not using much of those. We just left it to be a quiet, intimate scene for Dr. Azumi.

The great thing was that for the first few months of the job I had a chance to experiment with different synths, both hardware and software, spending some very long nights tweaking knobs and playing with settings at random, at the same time recording everything into my Pro Tools or into a portable recorder.

Since there were almost no rules to break, I was free to really go into some bizarre creative ideas. For anyone who enjoys creating sounds, it was just like being a kid in a candy store.

I had hours of glitches, different drones and tone sweeps from those recordings. I took different parts of those recordings, chopped them into little millisecond pieces or longer chunks and rearranged them at random like building blocks, to come up with something unusual — sometimes combining electronic bits with little bits of location recordings we did for the project. Then I pitched the new piece up or down and repeated the whole process again and again. It was a great opportunity and it was fun to discover and play with almost every plug-in and software and hardware synth that I could use on this.
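That chop-shuffle-repitch workflow maps neatly onto a tiny granular sketch. Everything here (grain size, pitch range, the crude resampling) is a hypothetical stand-in for what he did by hand in the DAW:

```python
import numpy as np

def granular_shuffle(src, sr, grain_ms=30, n_grains=200, seed=1):
    """Chop a recording into short grains pulled from random positions,
    repitch each by crude resampling, and butt-join them like blocks."""
    rng = np.random.default_rng(seed)
    g = int(sr * grain_ms / 1000)
    starts = rng.integers(0, len(src) - g, n_grains)
    out = []
    for s in starts:
        grain = src[s:s + g]
        semis = rng.integers(-12, 13)               # random repitch, +/- 1 octave
        rate = 2.0 ** (semis / 12)
        idx = np.arange(0, len(grain) - 1, rate)    # resample = pitch + length change
        out.append(np.interp(idx, np.arange(len(grain)), grain))
    return np.concatenate(out)
```

Feeding the output back in as the next `src` mirrors his "pitch it and repeat the whole process" cycle.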


Of course about 80% of that was unusable, but what I got allowed me to build an extensive library in Soundminer that I could assign in Kontakt to different keys on a piano and play them live to picture, just to see what would work for each different electronic component.

Many of the Lab control room and Lab hallway sounds were “played” that way — live to picture. I would play them first and then rearrange them. It was a great way to start — to find out what sounds would work and what would not.

The control room also had a layer of nautical and marine sounds, like old submarine sonar beeps. We even did a recording session of various sounds of a didgeridoo, which we later processed heavily. That’s one of the low sounding drones in the control room. It’s a low end-y, tension-creating, vibrating type of sound, almost like an old ship horn.

When I started working on this project, our composer Dan Romer was still working on a different project and hadn’t started composing for Maniac yet. We had a temp score, but many scenes were quiet and naked, so we were creating a lot of temp drones and musical-type soundscapes to fill some of those gaps and to help with transitions.

But even later when we got Dan’s wonderful music, many of those sounds surprisingly worked with the score or sometimes we were able to move them to a different scene. Or sometimes I would reuse them for a different element.

There was a video they played for the participants in the common room before the tests began. It had the doctors talking about what the A, B, and C pills would do. Knowing that director Fukunaga likes to have practical effects, was that video something they created and played back on-set? What went into that video?

MG: So the idea was that video was a pretty worn-out VHS tape that gets played over and over as new subjects come to the drug study.

That video was always treated as a separate piece; it had a separate director and originally it was 8 to 10 minutes long. It went through many changes during the process. Visuals were changing drastically from version to version, but it was a relatively quick process for sound design. I just found some old sound effects libraries from the ‘80s, normally used back then for title design.


We used some in-your-face, obnoxious whooshes and computer blips and kitschy sounding 8-bit sounds to match the low quality and production values of the tape. The actors’ voices were recorded on location, but some of them were re-recorded later using an iPhone, with the intention to replace them in ADR. But Cary really liked that poor quality, so we kept that in. And my co-supervising sound editor Alexa Zimmerman, who dealt with dialogue and ADR on the show, shifted sync in some places to make it more distinct and more off. Again, it’s that same motive of an old worn out piece of equipment.

Because the video was created later on, it was actually not being played back on-set for that scene. Our re-recording mixer Martin Czembor mixed it in and “placed” it nicely in the room.

How did you create the voice of the GRTA smart computer?

MG: Throughout the editing process, as with all the processed voices in the show, we were using temp voices at first. I was playing around re-recording those through different speakers and through different plug-ins (Razor, VocalSynth, and Speakerphone).

But later when we got Sally Field in the ADR studio, we re-recorded all the lines for the GRTA with her voice, since the GRTA is based on Sally Field’s character Dr. Greta Mantleray. After applying the same processing, Sally Field’s voice sounded different and not as interesting as the temp voice. It was over-processed and we were losing the idea that the GRTA computer is based on Dr. Greta Mantleray.

Working with Alexa Zimmerman, we created a few versions of the GRTA readings with a different amount of processing applied to each. And our mixer Martin Czembor had those on a few different faders and he was weaving them in and out, changing the amount of the effect as needed. That seemed to work the best; we could do it on the fly on the stage and in the end we were using less of the original processing on her voice, but it just sounds right.

How about other GRTA sounds — like when they switch the computer on for the first time and the camera takes us inside to see what’s happening. Can you tell me about that sound sequence?

MG: Many of those sounds were from the sound experiments that I did early-on. And like everything else, the shots and visual effects were changing and the scene was getting shorter. The sounds there were a combination of analog and mechanical sounds, like circuit sounds. I had some toys circuit bent by a guy named FASTMATT in Texas. He takes different toys and electronic gadgets, opens them up and changes the circuits a little bit by applying resistors or different electrical components to tweak and change the sounds. He circuit bent a Speak & Read for me and I used many sounds out of that, for everything from bicycles and cars on the street to being inside the GRTA computer.


As the camera goes through the computer, we are following the path into the anechoic room where the test subjects are having their pill experiences and so the sound gets very quiet in there.

For the mechanical parts and some sounds for the GRTA computer’s insides, we used our location recordings. Matt Rigby, Isaac Derfel, and I went after closing to the Hall of Science in Queens, NY. There, we recorded both tones and specific mechanical sounds with contact and regular microphones. Some of the clicking, blinking lights and “computing” sounds for close-ups of the GRTA are sounds of falling ball bearings in a probability machine. That’s also the sound for many of the buttons in the control room. We recorded some heating elements from various heaters since the inside of the GRTA computer is very analog and mechanical with some electromagnetic components. Just to remind you, there should be no microprocessors in that world.

For some of the circuit sounds and going through the GRTA design, I sent sounds to my reel-to-reel 8-track recorder that I kept from the old days. I played with the tape tension to get that nice, slight pitch change, or I would purposely damage a piece of analog tape and then record on it and re-record it back, to get some unusual, random and sometimes interesting effects for the sounds of the GRTA malfunctioning and melting.
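The tape-tension trick, slowing and speeding the transport by hand, amounts to playback with a slowly drifting speed. A minimal digital stand-in (assumed parameters, not his hardware chain) looks like this:

```python
import numpy as np

def tape_wobble(signal, sr, depth=0.03, rate_hz=0.7):
    """Variable-speed playback: a slow sinusoidal drift in transport
    speed gives the queasy pitch bend of hand-varied tape tension."""
    n = len(signal)
    t = np.arange(n) / sr
    speed = 1.0 + depth * np.sin(2 * np.pi * rate_hz * t)   # +/- 3% speed drift
    read_pos = np.clip(np.cumsum(speed), 0, n - 1)          # warped read index
    return np.interp(read_pos, np.arange(n), signal)
```

Replacing the sine with filtered random noise makes the drift irregular, closer to damaged tape than to regular wow and flutter.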

There were so many possibilities for sound on Maniac, especially when they get into the pill experiences. There was everything from a shootout in a fur store to an elfin adventure. There was a full moon séance, and a mobster-themed experience where Owen’s dad was drilling into people’s skulls. They cover a lot of genres. What was your favorite pill experience in terms of sound? Why?

MG: It’s really hard to just have one favorite pill experience, since all of them are completely different.

Working on those was almost like working on a few different films. And Cary wanted to have them all sounding unique.

The elf world in Ep. 8 “The Lake of the Clouds” was fun to create because that one was a completely different environment from the rest of the show. My sound effects editor John Werner and I were looking for unusual animal sounds to create a mystical atmosphere with howling and different calls in the distance. I think we used howler monkeys, gorillas, and some unusual jungle animals. Also, one of my favorite eerie sounds is a distant recording of a red fox howling. We modified them using Dehumaniser and Reformer PRO (by Krotos). We used that often on the animals.



As a sound designer, I always enjoy working with wind sounds. I was able to play with winds on this one because it’s a big, open space. It’s either late fall or winter, and so you don’t have any insects. But we had carte blanche for this fantasy world. We could use whatever we wanted.

We used a plug-in called Envy (by Cargo Cult, makers of Conformalizer and Spanner). This plug-in came out while we were working on the show and so we decided to try it out. It allows you to morph different sounds in a slightly different way from other plug-ins. You use the characteristics of one sound and apply those to a different sound.
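The "characteristics of one sound applied to another" idea is classic cross-synthesis. Envy's actual algorithm isn't described here, but a bare-bones version of the general technique just imposes one sound's spectral magnitudes on another's phase structure:

```python
import numpy as np

def crude_morph(character, carrier):
    """One-shot cross-synthesis: keep the spectral magnitudes of
    `character` and the phase structure of `carrier`."""
    n = min(len(character), len(carrier))
    A = np.fft.rfft(character[:n])
    B = np.fft.rfft(carrier[:n])
    phase = B / np.maximum(np.abs(B), 1e-12)   # unit-magnitude phase of carrier
    return np.fft.irfft(np.abs(A) * phase, n)
```

A practical tool would do this per short windowed frame (an STFT) so the spectral envelope tracks in time; this single-FFT version only transfers the overall spectral color from one sound to the other.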

John [Werner] did an interesting design for the burning tree in the Elf episode. We used recordings of tree trunks twisting and various wood sounds and combined them with fire, embers, and crackling using Envy. So the tree is not only burning but it seems to be alive, moving and shifting.

Another interesting episode to work on was Owen and Annie’s mental trip to Long Island in the ‘80s, Ep. 4 “Furs by Sebastian.” There was a big gun fight in the fur store and that was tricky to work on. Cary is always very particular about his gun sounds. As we were working on that scene, we were getting more and more VFX. Cary was trying to do an over-the-top gun fight, very tongue-in-cheek. So they were constantly adding more bullet hits and muzzle flashes. It was so gory and over-the-top. I was constantly asking Jay [Peck] and Igor [Nikolic] (our Foley team) for more blood and more bloody impacts and blood splatters and squirts. I was asking for more debris for the ceiling falling down.


For Owen’s mobster mental trip in Ep. 7 “Ceci N’est Pas Une Drill,” where his father is using a drill to drill into people’s skulls, we had to recut the sound of the drill that we used because after we added music to the scene, the sound wasn’t powerful enough. The drill was matching the action but music was taking away the impact and Cary wanted it to feel like the drill was drilling into the audience’s skull. So we had to recut this on the mix stage with the new music. A lot of that was also done in Foley, like the shaking of the chair, the vibrations of the drill on the chair, and the vibrations on the floor. The Foley team did a great job on that and on the blood. Every time they thought I was done asking for more blood and gore, I’d be calling them up.

Another helpful resource for the blood was a library called Gore from SoundMorph.

Ep. 9 “Utangatta” was definitely the craziest. Inside the pill experience, Owen and Annie are in the UN building and ‘Snorri’ (aka, Owen) is on trial for accidentally exploding an alien. There’s a shootout in the hallway. The elevator takes ‘Snorri’ and Annie to other people’s pill experiences — all while the GRTA is malfunctioning and alarms are going off back in the lab. What were some of the challenges for you on this episode?

MG: Following Cary’s directions on this one, we tried to achieve an underwater, underground feel in all the different rooms during the meltdown — kind of like a failing submarine. John [Werner] and I used some library recordings of submarines that we layered as tones and backgrounds, plus we created some nice deep tones in NI Absynth that are pulsating and weaving back and forth. I used my didgeridoo recordings from the session before, which gave it a rather unsettling feel. Cary really liked that sound; he’s quite fond of low end.

On top of that we layered different metal creaks that we processed and pitched down. But with all of that we were trying not to get too dark since this is still a black comedy. So to balance that depressing gloominess a little, we are sometimes using unexpected almost cartoonish effects to bring a little humor — like the wand that Annie is using in the elevator to remove the tracking device from Owen.


That was the last episode that we mixed, and we were running a little short on time. Some visual effects were coming last minute to the stage, and we had to reconfigure or sometimes even change different designs.

That big gun shootout in the hallway, which is Cary’s signature one long tracking shot (he had a 6-minute long one in True Detective), had many changes especially with the bullet hits and gun flashes. Sometimes we got a new version at the mix with just a few frame adjustments to either the hits or muzzle flashes, so each time we had to go through a whole scene, frame by frame, and double check all the sync. And I think this scene is almost two minutes long.

For the metallic Rubik’s Cube that Owen is using to decode the system, we only had a chance to see rough concept art sketches of how that will look and how it will glow. So we did our sound design according to that but in the end, the final VFX were different, and the timings of the glowing pieces were also different, so we had to quickly redesign that scene on the dub stage as well.

What is one thing you’d want other sound pros to know about your work on Maniac?

MG: I’d want them to know that this was a huge team effort. This was one of the craziest and most fun projects I’ve been associated with so far.

For a TV show, Maniac was unique because, from the beginning, it was conceived almost like a very long feature film.

To design and create so many unusual sounds, with constantly changing visual effects and a relatively limited TV schedule, was a big challenge. We had many editors jumping in and helping out because we were running out of time on particular episodes.

Martin Czembor — our wonderful re-recording mixer, who I’ve been working with a lot for the past 20 years — was basically living in the mixing room.

We were able to get great sound effects editors like John Werner, Allan Zaleski and Chris Foster to come on board and in a very short time help us get ready for the mix.

We were so lucky that Ruy Garcia was able to jump in during the middle of the project despite his very busy schedule and help us tremendously by premixing sound effects as we were running out of time.

Alexa Zimmerman and Tony Martinez both worked on the dialogue. Many of the dialogue scenes were very tough for technical reasons, especially Owen’s mobster episode (Ep. 7 “Ceci N’est Pas Une Drill”). Owen was playing a gangster, wearing a grill in his mouth, and purposely slurring his words, which made it difficult to understand him. The idea was to ADR those lines but for one reason or another we couldn’t get the ADR. There was a big concern from the producers and Netflix about the intelligibility in those scenes. But Tony Martinez worked on those and he did an amazing job on the dialogue. What we ended up with compared to where we started was like night and day.

Everybody involved with this project was just tremendous. Jay Rubin and the Postwork/Technicolor NY family, where we mixed the show, were just so accommodating.

Every show and film is a huge team effort, and without a great crew and wonderful people behind the project you will never have results as good as you hoped for!



A big thanks to Mariusz Glabinski and team for sharing the story behind the sound for Maniac – and to Jennifer Walden for the interview!

