Obsidian Entertainment’s The Outer Worlds was released a short while ago and fans are loving it — from the characters and quests to the fascinating locations. The sound team, led by Audio Director/Composer Justin E. Bell, played a huge role in bringing this dystopian space adventure to life. Here, Bell and his team talk about designing an interactive dialogue system, world-building, creature and weapon design, music, UX design, and more. They hold nothing back in this in-depth interview - the biggest ever in A Sound Effect history - and it’s packed with behind-the-scenes videos and audio examples too. Enjoy!

Interview by Jennifer Walden, photos courtesy of Obsidian Entertainment

Obsidian Entertainment’s The Outer Worlds — now out on PS4, Xbox One, and PC — is a first-person RPG set in an alternate future where megacorporations are in the business of colonizing and terraforming distant planets. It’s not an easy endeavor; space is full of alien threats and vigilante justice. The player has to navigate the complex associations of various factions, and his/her choices change the path of the narrative.

Building an alternate future in a distant part of space opens the flood gates of creativity. Anything is possible and everything needs a unique sound — all the explorable locations and the inhabitants and all the tech and weapons needed to survive there. Here, audio director/composer Justin E. Bell and his sound team share details on how they crafted the ambiences of key destinations on Terra 2 and Monarch, how they designed the weapons and combatants (from living creatures to automechanicals), their approach to UI sounds, implementation, music, and more!





Please introduce your sound team, and what their roles were on The Outer Worlds.

Justin E. Bell (JB): There were ten people on the team all told, the largest internal team in Obsidian’s 16-year history! We had six sound designers, one technical designer, two producers, and one lead/director. Everyone had a focus that they essentially owned, meaning they drove the entire design process from conception, sound design, through to implementation and optimization.

The breakdown of responsibility was something like this:

• Ali Mohsini, Sound Designer – Cutscenes, Foley, UI

• Dylan Hairston, Sound Designer – Ambiences and Emitters

• Jerrick Flores, Technical Designer – AI Dialog, Spatialization, Optimization

• John Lee, Senior Producer – AI Dialog, Optimization

• Justin E. Bell, Audio Director – Composer, Lead, Mixing, Optimization

• Mark Rios, Assistant Sound Designer – Spatialization, Ambient Animations

• Renzo Heredia, Associate Sound Designer – Inventory UI, Interactables, Scripted Events

• Scott Gilmore, Senior Sound Designer – Creatures, Gameplay

• Tony Blackwell, Audio Producer – Everything Except Optimization

• Zachary Simon, Senior Sound Designer – Weapons, Foley





Creating a Unique Sounding Sci-fi Game

What was the overall direction for sound on The Outer Worlds? How does the sound differ from other sci-fi / dystopian space games?

JB: We wanted the soundscape of The Outer Worlds to feel realistic, organic, well-worn, and inhabited. Not only did it have to sound realistic, but it had to behave that way as well. Sound is everywhere you turn! Electronic equipment placed in the world hums with electricity, toilets leak, lights buzz, and scaffolding groans. Wind gusts and wallas spill through open doors and become silent when they’re closed. Every object in the world emits its own unique sound.

Not only did it have to sound realistic, but it had to behave that way as well…Every object in the world emits its own unique sound.

AI character dialog (what we call “chatter”) provides moment to moment feedback that feels tonally and contextually appropriate, useful, and tactical in ways that both immerse the player and help them play the game better. Interactable objects have the appropriate weight and cadence to them, whether the player is opening and closing doors, or hoarding every last collectible item they can find. Weapons pack a transient punch, are drenched in sci-fi details and empower the player, and their reports sound appropriate to the environment they’re in.

We also wanted to fill the player with a sense of adventure and wonder. Creatures can be heard in the distance, telegraphing their presence, and when confronted they aggressively challenge the player with a vicious roar. Ambient music instills a sense of grandeur and mystery to the world, inviting the player to explore as the story unfolds before them.





Creating Location Sounds

Let’s look at some of the locations in The Halcyon System. What does Emerald Vale sound like? Edgewater? Byzantium? Roseway? What went into the sound of those locations?

Dylan Hairston (DH): In terms of ambiences and emitters, Emerald Vale is the first area that our player gets to explore, so I tried my best to make the ambience of this area invite the player to do just that. I wanted them to be intrigued by all the various birds and beasts they hear occasionally in the distance. You can hear distant gunshots and battles, alien birds, and even some worldized sounds for a creature in our game called “Pterorays.” While I did include some fearsome large-creature sounds in this area, I was careful not to let them dominate the soundscape. Overall, I wanted Emerald Vale to evoke a sense of curiosity more than a sense of danger or fear. This is reinforced by the changes to the ambience at night. I made it much calmer and quieter to allow the player to really take in the beauty of the world around them and feel peaceful in a sense.

I wanted Byzantium to showcase the sort of faux ritzy and glamorous life that its citizens lived. While there, you can hear lofty walla, busy sirens and ships in the distance, and even some out-of-tune, lazily played jazz! I wanted it to give off the impression of a Vegas strip that is dilapidated but trying its best to put on a good show that everything is still okay. Even the birds there are just a little… off. I ran a lot of bird samples through a crazy H-Delay setting that made them feel just slightly unnatural. The ambience bed itself is more of an eerie low drone, as opposed to a windy, lively soundscape. I wanted the tall buildings of the corporate overlords to almost seem like they’re looming over the player — sucking the life out of the city.

Roseway has this whole story around it of how it’s been sort of overrun by Raptidons. So I wanted the wilderness around this area to be very quiet, so the player can take in all the cues around them of where the enemy wildlife is. You can hear distant beasts that sound dangerous, a different flavor of alien birds, and at nighttime, you can hear some small critters come out and add a new layer to the soundscape that helps the area feel even more foreign. As for the town in the middle of this jungle, I wanted it to feel like a bit of a refuge from this hostile exterior. The walla is more dense than it seems like it should be for the area, and its tone is relatively neutral. I could have made this area feel very doom and gloom, but I didn’t want to, because even though there is a great threat to the town, there is still an underlying tone of humor spread throughout this place. I didn’t want this to be a place that takes itself too seriously, and sure enough, you’ll hear two of its citizens bantering about Halcyon’s favorite sport: Tossball.

JB: As for music, each area has its own unique tone that’s both appropriate to the environment and narrative context. Much of the music was performed by a live orchestra in Budapest, Hungary, though a fair amount was sample-based as well. VSTi sample libraries I returned to frequently in this score include Spitfire Symphonic Strings, Cinematic Studio Brass, Vienna Symphonic Library Woodwinds, ProjectSAM True Strike Percussion, Audiobro Genesis Children’s Choir, and U-he Zebra/Dark Zebra.

As Dylan mentioned above, Emerald Vale is the first area of the game, and we really needed to give a strong sense of adventure and wonder to build player investment. The ambient music for this area is at once very moody and evocative. Melodies are played by contra-bass flute, and also by violins playing non-vibrato for a stark, frozen, soaring texture. These phrases are accompanied by big harmonic statements in very low strings and brass that lend a sense of grandeur. This piece of music has lots of negative space between phrases to allow for an interchange between music and environment sounds. Slight variations of this piece of music can be heard if you explore the player ship or the Botanical Lab (i.e. deserter camp), and they transition smoothly, in time with the main version that’s heard everywhere else so that you don’t really hear the transition when it happens.







Edgewater, the Spacer’s Choice corporate town, has a completely different vibe. Solo violin pedal tones accompany a low, smoky, slide guitar riff, while distant harmonicas wail plaintively in the background. Its ending is somewhat of a surprise, with a pretty, simple piano melody. It’s got a distinctly western feel meant to highlight the fact that this town and its inhabitants are hanging on for survival in a hostile wilderness.



At some point in the story, the player has to make a choice, and suffice it to say, the fate of one or two communities rests in the player’s hands. When the player makes this choice and revisits the affected area, the music completely changes, and is swapped out with a solo contra flute playing the melody of the main theme. This required lots of scripting in Unreal to ensure that the correct version of the music played depending on the actions of the player.
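That kind of state-driven music swap can be sketched roughly as follows. This is a minimal, hypothetical Python sketch, not Obsidian's actual Unreal scripting; the cue names and choice flag are invented for illustration:

```python
from typing import Optional

def select_area_music(player_choice: Optional[str]) -> str:
    """Pick the music cue for the affected area based on the player's
    story choice. Cue names here are purely illustrative."""
    if player_choice is None:
        # Before the choice, the area's regular theme plays.
        return "area_theme_main"
    # After the choice, the area swaps to the solo contra flute
    # rendition of the main theme, as described above.
    return "area_theme_contra_flute"
```

In the real game this branching lives in level scripting, so the correct version of the music plays whenever the player revisits the area.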

Byzantium is the wealthy capital of Terra 2, and home to the Halcyon Holdings Corporation headquarters. I tried to give the music an elegant feeling with pulsing, romantic-sounding string melodies and harmonies. When you visit any one of the interiors that requires a loading screen, the music will seamlessly add a new section that doesn’t play in the city itself. This new section is comprised of a metallic-sounding arpeggio that has a bustling quality to it, as if to signify the hustle and bustle of a high-tech business.

Roseway is a struggling outpost on a very hostile jungle planet. It has a small town that’s constantly under siege by the local apex predator, the Raptidon. The music for the jungle is desolate and minimal, consisting of ethereal granular pads that billow like slow moving, layered clouds of dust. The intent was to highlight that this was a dangerous place, not meant for humans to tame. The town by contrast has a more synthesized sound, consisting of a mellow arpeggio, pointillistically highlighting the pitches of a major seventh chord. This piece of music is, coincidentally, the first piece of music written for the game during our vertical slice back in the summer of 2017!

How about the inhospitable/dangerous ‘moon’ Monarch? What went into the sound of locations here, like Fallbrook? Cascadia? Stellar Bay? C&P Boarst Factory? Amber Heights?

DH: Monarch was an especially tricky area to do the ambience for, and undoubtedly the most iterated upon of all the places in our game. What made it so tricky was the powerful wind storm constantly blowing on Monarch. To give some context, Monarch is the largest playable area in our game, and the player could be exploring it for hours at a time if they choose. That said, I wanted to be very careful not to make a bed that would be fatiguing or annoying, but that would still service the visual of this intense wind — not easy to do. I ended up creating a bed that sounded like a windy desert but still mixed well with the music of the area. Monarch was certainly the most challenging planet as well, and the player is warned multiple times prior to visiting that it’s extremely dangerous. I wanted to lean into that and make them feel like this area was inhospitable. I essentially broke the fauna of this area into two categories: birds of prey and monsters. These two types of sounds help make the player feel on edge and constantly remind them: “This area is dangerous, and if you’re not careful, it will kill you.” At night, I added a few alien owl-type sounds that help add to the mystery of the area.

JB: Much like the planet itself, the music for Monarch is very primal in its nature and consists of distant-sounding, overblown solo contra-bass flute, improvising a small segment of the main theme. The texture is extremely mystical and mysterious, sounding like fractured beams of prismatic light, ebbing and flowing like the winds of Monarch. Low Flute specialist Peter Sheridan lent his incredible talent to recording this and other portions of the score where very low flutes were used. Here’s a picture of Peter with his rare Sub-Contra Bass Flute that was used in the end section of the game.

There are other locations to visit, too, like the Groundbreaker colony ship. What was your approach there?

DH: The Groundbreaker is one of the very few places not controlled by The Board, and acts as a sort of hustling, bustling area where the player has the option to make allies, enemies, and black market dealings.

The most interesting area is this large promenade that has shops all along the edges and advertisement holograms everywhere. It acts as the heart of this area, providing pathways to just about every place of interest on the Groundbreaker. The defining feature of this room is its size, so I made worldized versions of the holograms that echo through the room and reinforce this overtly capitalist world. There’s also a worldized walla that helps sell the cavernous shape of the room. You can also hear machinery locomotion, metal clanking, and distant starship engines. This all comes together to shape a believable space that sounds busy and helps immerse the player in this area.

Regular Hologram Emitter:



Worldized Hologram Emitter:



JB: In addition to what Dylan mentioned, the Groundbreaker is a special class of ship built to transport hundreds of thousands of passengers to the Halcyon colony via interstellar travel. It’s a massive ship, one of humanity’s great technological achievements. This is one of the sampled pieces of music, and consists of a majestic sounding solo horn melody, followed by nostalgic children’s choir. All to embody what humanity is capable of.

The Hope is a similar kind of ship as the Groundbreaker. It’s the place depicted in the intro cinematic of the game, and it’s where Phineas finds the player. Music on the Hope is brooding yet (ironically) hopeful, with ebbing and flowing strings and low brass. It ends with a callback to the main theme as played by the solo horn, this time presented in stark, foreboding terms.




What went into the sound of the asteroid Scylla?

JB: Scylla is an asteroid that is surrounded by a sort of “bubble.” I wanted it to seem very barren and eerie, so I made the bed mostly a low drone, to reiterate to the player that you’re basically out in space right now — protected only by this thin bubble. But to give it an interesting alien texture, I added a lot of one-shots that were dry ice against metal, pitched down by one to two octaves, as well as some distant explosion sounds through stylized reverbs and delays. I also used some synth pads to create very tonal, moody textures that help sell the fact that you’re in outer space. These in a way act as a very subtle underscore. They’re not in any way diegetic; they’re just there to help get the player in a certain mood and immerse them in the experience.

I wanted [Scylla] to seem very barren and eerie, so I made the bed mostly a low drone, to reiterate to the player that you’re basically out in space right now — protected only by this thin bubble.

There’s also the massive Terraformer in the middle of the asteroid. I knew I wanted to make this a highlight of the area just because it was so massive. I gave it an idle loop sound which is an amalgamation of various types of engines run through a granular synth, then through a huge cave setting in Altiverb. There’s also some more of the dry ice stuff mentioned earlier, incorporated into this loop and processed slightly differently. Additionally, I added some gears, servos, etc. to make it feel like you’re truly standing under a huge, complex machine. It also does this pulse every 20-30 seconds that was mostly a motorcycle run through Tonsturm’s “Traveler” granular synth module, with reverb/delay added to make it sound massive. This sound was a ton of fun to work with!

Terraformer Start:



Terraformer End:



Any other space stations, asteroids, or points of interest you’d like to talk about?

JB: Our dungeon areas in the game (which are essentially any interior space with bad guys, separated by a hard load) all followed a similar approach. I used our suite of roomtones, and then I made a custom event for each dungeon area with an array of one-shots that would set the mood for the area.

For the sake of time, and given the amount of content I needed to cover, I stuck with a modular method for the ambiences of our dungeon areas. Everything was reusable; it was just a matter of identifying how a level should make the player feel, and kitbashing things together to satisfy the emotional qualities of a scene. I think this method worked surprisingly well, not only in saving time, but in making areas feel grounded, like they all belonged to the same universe, while still sounding cool and distinct.

Our dungeons also are a highlight of the great amount of work we did on emitters for our game. I really hope people take time to appreciate the amount of detail we tried to get in our environmental audio.

Our dungeons also are a highlight of the great amount of work we did on emitters for our game. I really hope people take time to appreciate the amount of detail we tried to get in our environmental audio. In places like the Geothermal Plant in Emerald Vale, there are emitters everywhere! There are a total of around 500 emitters in there! I tried my best to actually make it feel like this place was alive and if there’s machinery, then it will make noise of some sort.





Designing Weapons and Varying Attacks

What are some of the weapons a player will encounter/use? What went into creating their sounds?

Zac Simon (ZS): Oh gosh, there are so many I don’t even know where to begin. We’ve got weapons that might feel familiar to people, like the Spacer’s Choice Pistol and the T&L Assault Rifle, all the way to some less familiar, unique weapons like The Gloop Gun, which shoots electrified globs of goo that cause your enemies to levitate on impact. I definitely don’t want to spoil too many of our weapons for people who haven’t played it, but if you are familiar with shooter games you will feel right at home.

As far as creation goes, fairly early on I created a sort of “sound design recipe” that I like to follow in order to keep the weapons sounding consistent. It’s really just a list of layers that each gun should have. The recipe consisted of:

• Lead-in – The lead-in should be a very quick sound right before the actual thump and crack of the gun. Think of this as the mechanics of pulling the trigger and the hammer beginning to move. It happens just a split second before the gun actually goes off.



• Thump – This layer is typically short, deliberate, and often synthetic. We tended to like to use a pure tone between 60-90 Hz and make it as short as 15ms. Sometimes we would crop it down to just a single cycle of a low frequency tone. Enforcer (by BOOM Library) is a great plug-in for helping with these types of sounds.



• Crack – Very fast and deliberate, and no longer than 100ms. Imagine all of the gunpowder being discharged from the barrel of the gun in an instant. Find something with a very loud transient and snip just the very beginning of it, then put it through a limiter to remove any dynamic range (the dynamics will come back from the mixing of all these elements together).



• Body – This is where the meat of the sound is. This also gives the gun its uniqueness and character. This layer tends to be comprised of several layers. We generally split them up by frequency content (low, mid and high).



• Mechanics – We found that this is one of the most important parts to making it feel like you are holding the gun. This is the high end metal/plastic/material detail of the weapon in the player’s hand. We tended to record source that had a very close perspective for this.



Here’s the final weapon in-game:
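As a rough illustration of the “Thump” layer in the recipe above, here is a minimal Python/NumPy sketch that generates a single cycle of a pure low tone. The sample rate and defaults are assumptions for the example, not the team’s actual settings:

```python
import numpy as np

SR = 48_000  # assumed sample rate in Hz

def thump(freq_hz: float = 80.0, cycles: float = 1.0) -> np.ndarray:
    """A short, pure low-frequency tone: one cycle at 80 Hz is about
    12.5 ms, in the ~15 ms / 60-90 Hz ballpark described above."""
    n = int(SR * cycles / freq_hz)  # number of samples in the requested cycles
    return np.sin(2 * np.pi * freq_hz * np.arange(n) / SR)
```

Cropping to a single cycle like this keeps the layer short and deliberate, with no sustained tonal character to muddy the mix.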

A couple of rules I discovered along the way:

• Always stagger your layers, especially the crack and thump. You don’t want all of the layers hitting at the same time, as they will end up competing with one another for headroom.

• Compress the more dynamic elements so you have more control of the balance between them.

• The thump and the crack should be the loudest parts of the sound. This is what makes the weapon feel powerful.



Always stagger your layers, especially the crack and thump. You don’t want all of the layers hitting at the same time, as they will end up competing with one another for headroom.
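The staggering rule can be sketched in code: offset each layer’s start time so the transients don’t stack, then normalize the sum. This is a hypothetical NumPy sketch of the idea, not the team’s actual pipeline:

```python
import numpy as np

SR = 48_000  # assumed sample rate in Hz

def assemble_shot(layers: dict, offsets_ms: dict) -> np.ndarray:
    """Mix named weapon layers (thump, crack, body, ...) at staggered
    start times, then peak-normalize so the sum doesn't clip."""
    end = max(int(SR * offsets_ms[n] / 1000) + len(x) for n, x in layers.items())
    mix = np.zeros(end)
    for name, x in layers.items():
        start = int(SR * offsets_ms[name] / 1000)  # stagger this layer
        mix[start:start + len(x)] += x
    peak = np.abs(mix).max()
    return mix / peak if peak > 0 else mix
```

Offsetting the crack and thump by even a few milliseconds keeps their transients from summing into one over-loud spike.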

From a creative standpoint, how did you make the weapons in The Outer Worlds feel fresh and original?

ZS: When we started working on the weapons of The Outer Worlds we knew straight away what the end goal was — realistic and unique sounding weapons that make the player feel awesome while playing.

We started off by recording as much source as we could, which mostly meant mechanical sounds of airsoft guns as this is what we had easy access to. In order to supplement our own source recording, we used additional gun libraries.



We then did as much research as possible about the science of guns. I think this is an important question to ask: “What am I trying to recreate with my sound?” For us, understanding what a gun is and how it works gave us perspective and a great starting point for how to approach the sound design.

Circling back to the original question of how we made the weapons for The Outer Worlds feel fresh and original, I like to think that the design of the weapons did that for us. This is a sci-fi game, so some of the more unique and otherworldly sci-fi weapons in our game have no point of comparison to reality, which gives us, the sound designers, full control over building the audience’s expectation of what that thing should sound like in The Outer Worlds universe. In the process of creating that expectation, the originality comes into play naturally.



What’s your method for adding variation into the attack sounds?

ZS: From the get-go, it was pretty clear that we wanted a modular/layered approach to the weapons. This allowed us to get the most variation while still maintaining a high level of quality. Anytime a gun is fired, the game has to ask questions like: “Am I inside or outside?”, “Am I in a big room or a small room?”, “Do I have a damage-type mod attached?”, and “Am I the player or am I a computer/bot?”

This system creates a huge amount of variation with how the gun sounds at any given time while also creating an always-changing soundscape that is reacting to the player’s choices.

Depending on how the game answers these questions, it selectively chooses the correct layers to play. Once they all come together, they create a full weapon sound. As far as the actual design of the gun sound itself, well, it’s also somewhat of a modular approach.

• Base – This is the layer that gives the weapon most of its character and always plays when the weapon is fired

• Exterior Tail – Only gets played when outdoors to give the sense of a big open area when the weapon is fired

• Interior Tail – This layer is only heard when the weapon is fired indoors and has three different versions: small, medium, and large. This gives a sense of the weapon being fired in different-sized rooms

• Damage Type – In The Outer Worlds you can modify your weapon to do different damage types like shock, corrosive, or plasma. If you have one of these attached, it will layer on an appropriate sound to match the damage type

• Distant – This layer will play when the enemy firing at you is at a distance to give the sense that the gunshot is happening further away from you

• Shell Casings – When you fire a weapon that ejects shell casings, you will actually hear them hit the floor, and the sound will change depending on what material the player is standing on

• Tactical Time Dilation – Lastly, The Outer Worlds gives the player a special ability to trigger slow motion for a few seconds at a time. When the weapon is fired in this mode, we completely swap out the regular firing sound with a more stylized version to match the slow motion effects

This system creates a huge amount of variation with how the gun sounds at any given time while also creating an always-changing soundscape that is reacting to the player’s choices.
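The layer-selection logic described above might look something like this in simplified form. The layer names and rules here are inferred from the description for illustration; the real setup lives in Wwise rather than game code:

```python
from typing import List, Optional

def fire_layers(indoors: bool, room_size: str,
                damage_mod: Optional[str],
                distant: bool, time_dilation: bool) -> List[str]:
    """Choose which weapon-fire layers to play for one shot."""
    if time_dilation:
        # Tactical Time Dilation swaps the whole sound for a stylized one.
        return ["base_slowmo"]
    layers = ["base"]  # the character layer always plays
    if indoors:
        layers.append(f"interior_tail_{room_size}")  # small/medium/large
    else:
        layers.append("exterior_tail")
    if damage_mod:
        layers.append(f"damage_{damage_mod}")  # e.g. shock, corrosive, plasma
    if distant:
        layers.append("distant")  # an NPC firing from far away
    return layers
```

Because each answer only swaps or adds a layer rather than a whole asset, the same base recordings yield many distinct composite sounds.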





Creating Combatants and Creature Sounds

How about combatants? What are some threats players will face? What went into the creation of those sounds?

Scott Gilmore (SG): Players in The Outer Worlds will encounter a variety of threats depending on where they are on each planet and on how they choose to define their alliances.

Human combatants are present in the form of lawless marauders, corporate mercenaries and rebel-faction revolutionaries all equipped with a variety of weapons.

Robots (aka auto-mechanicals) can be found fighting alongside their associated factions and feature varying builds, abilities, and weapons. Each environment also includes various creatures with unique abilities, attack styles, and weaknesses.

Humans, robots, and creatures can be found throughout the game in many different combinations and require different combat strategies and play-styles. The overarching goal for all combat encounters was to deliver useful gameplay feedback to the player in the form of evocative audio.

The overarching goal for all combat encounters was to deliver useful gameplay feedback to the player in the form of evocative audio.

Human enemies use many different weapon loadouts that our weapon sound designer Zac Simon worked on. Using Wwise, Zac also made implementation adjustments to help differentiate player and NPC guns. Additionally, Human NPC enemies also trigger distant gunfire sounds depending on how far away they are during combat. All of these factors really come together to support the mix during combat and retain the sense of empowerment reserved for the player character.

Our technical designer Jerrick Flores and audio director Justin Bell collaborated to help design and implement enemy combat chatter.

Since we had a lot of humanoid enemies throughout the game, we needed a way to add diversity to all of those humanoids’ combat chatter. While we could have unique lines for specific characters, we could not give unique lines to every single character for obvious reasons (we had over 700 characters!), so we worked with the narrative team to establish what we called “Character Groupings.” These gave us broad groupings through which we introduced differences in chatter to create an assorted variety of combat responses.

After that, we then assigned a Character Grouping per character, which in turn dictated what variations of chatter that character could use for an event, and even broader, what events those characters could use.

To dive a bit deeper on that, we separated the majority of character groupings into “Townie” and “Guard” main types, then further into subtypes based on faction grouping, like “Townie Byzantium/Townie Hoodlum” or “Guard Mardet/Guard Iconoclast.”

Main types determined what events that character could use, such as Guards having access to more combat chatter events regarding callouts such as reloading or repositioning, or Townies having access to only basic combat chatter (start, end, etc.).



Subtypes then influenced what variations of lines the character could use for an event. For example, for an event like when the character starts combat by attacking, a Guard Mardet may have more formal, soldier-esque line variants than a Guard Iconoclast, who sounds more informal and anarchical:



These unique lines were then combined with generic lines shared across some subtypes, and across all subtypes, so that the unique lines don’t become annoying and overdone over the course of the game.

This structure of Character Groupings was then applied to all characters and expanded to include all the subtypes we and the writers thought were necessary to characterize humanoid enemies throughout the game. Overall, we ended up with 11 Character Grouping subtypes and thus 11 different sets of chatter for the game!

We then took our group of voice talent and assigned each of them certain sets of that chatter to create a voiceprint chatter file: a file that housed all the structural chatter data and .wav data needed to play chatter VO for a specific voice. We added some scripting so that only characters with the appropriate Character Grouping could play their relevant lines from a voiceprint chatter file, then shared that file amongst all the characters assigned to that voiceprint. This voiced all those hundreds of generic characters throughout the game with different variations in lines and in voice acting, giving us a very diverse chatter presentation.
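The Character Grouping lookup described above could be modeled roughly like this. The main types and subtypes are from the interview, but the events and lines are invented for the example:

```python
from typing import Dict, List, Tuple

# Unique lines per (main type, subtype) grouping - illustrative only.
UNIQUE: Dict[Tuple[str, str], Dict[str, List[str]]] = {
    ("guard", "mardet"):     {"combat_start": ["Contact! Weapons free!"]},
    ("guard", "iconoclast"): {"combat_start": ["Get 'em!"]},
    ("townie", "byzantium"): {"combat_start": ["Someone call the guards!"]},
}

# Generic lines shared across all subtypes for a given event.
SHARED: Dict[str, List[str]] = {"combat_start": ["They're attacking!"]}

def chatter_pool(main_type: str, subtype: str, event: str) -> List[str]:
    """All lines a character in this grouping may use for an event:
    its unique subtype variants plus the shared generic ones."""
    unique = UNIQUE.get((main_type, subtype), {}).get(event, [])
    return unique + SHARED.get(event, [])
```

Because the lookup is by grouping rather than by character, hundreds of NPCs share a handful of line sets while still sounding varied.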

…this Character Grouping system was a way for the whole team, interdepartmentally, to deliver a chatter experience that was consistent throughout the game, narratively diverse and unique, and easily digestible enough in the middle of combat to serve gameplay purposes.

In the end, this Character Grouping system was a way for the whole team, interdepartmentally, to deliver a chatter experience that was consistent throughout the game, narratively diverse and unique, and easily digestible enough in the middle of combat to serve gameplay purposes. It genuinely is a system that we believe has impacted the game experience positively and that we are proud to have made all together.

On The Outer Worlds fandom site, I saw some cool anatomy artwork for several creatures. Were you working with this material while creating the sound? How does knowing the internal workings of a creature help you in the sound creation process?

SG: Yes! Having access to both concept art and animations while starting to design audio for these characters was incredibly helpful. Our artists and designers provided us with lots of context and details about each creature’s behaviors and functions. When setting out to design robot and creature audio, I first began by considering their physiology, including their body construction, breathing cadence, and movement style. Once the creature or robot’s physiology, movement, and gameplay function started to point toward the kinds of sounds to incorporate, I would begin gathering source in the form of vocal and effects recording as well as designed elements generated through processing or synthesis. For example:

Scrap Bots are large, slow, and powerful robots so we opted for a heavy, sustained, metallic and synthetic sound for their vocals and effects.



Hover Bots are agile and observant robots so their sounds ended up being light, synthy, and dynamic with lots of tonal fluctuations.



Mantisaurs are extremely deadly with chittering mandibles, sharp exoskeletons, and jittery movements. Their sounds are comprised of lots of textured, non-vocal staccato and screeching elements.



Primals are irradiated, ape-like creatures with thick, rocky bodies and lumbering movements. Their sounds incorporated a healthy blend of heavily processed human and animal vocals to convey their familiar yet mutated nature.



By far, the design tool that saw the most action on creature work was our Sanken CO-100K microphone. This mic is renowned for capturing a wide frequency response (up to 100 kHz) that retains fidelity quite well when heavily pitch shifted. I found it most effective for capturing vocal material that I knew would likely be pitched up or down, but it is also great for recording a variety of effects and Foley. It can bring out lots of hard-to-hear, upper-harmonic detail in recordings when pitched down and provides ample frequency headroom to make ‘small’ things sound huge when processed.
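To make the benefit concrete, here is a minimal Python/NumPy sketch (an illustration, not the team's actual tooling) of a naive resampling pitch shift: a 40 kHz tone that only an extended-bandwidth mic would capture lands at a clearly audible 10 kHz when dropped two octaves.

```python
import numpy as np

def pitch_shift_resample(samples, semitones):
    """Naive resampling pitch shift: pitch and duration change together,
    the same way slowing a tape down does."""
    ratio = 2 ** (semitones / 12.0)          # >1 pitches up, <1 pitches down
    positions = np.arange(0, len(samples), ratio)
    return np.interp(positions, np.arange(len(samples)), samples)

# A 40 kHz "ultrasonic" tone, standing in for content only an
# extended-bandwidth mic would capture at a 192 kHz sample rate...
sr = 192_000
t = np.arange(sr) / sr
ultrasonic = np.sin(2 * np.pi * 40_000 * t)

# ...dropped two octaves, it lands at an audible 10 kHz.
shifted = pitch_shift_resample(ultrasonic, -24)
```

A conventional mic rolls off well below 40 kHz, so the same shift on its recording would pull down only noise; the extra captured bandwidth is what becomes the audible detail.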

Sanken CO-100K:



Sanken CO-100K Scrap Bot source design:

Some of my favorite and most effective recordings came from a rubber dog toy, a hole puncher, a metal light fixture, an old car door, and a co-worker’s dog Louie (thanks Nicole and Louie!!).

Rubber dog toy:



Louie the dog:

I even did a few experiments with Zac’s Barcus-Berry contact microphone taped to my neck. I recorded myself performing some growls and vocalizations. I found that some of the most effective props and approaches for creature source are not always the most intuitive.

Barcus-Berry contact microphone:

Other microphones I used were the Sennheiser MKH 8040 and 800, typically in mid-side configuration. These are both great-sounding, versatile mics. I used them primarily for capturing effects that benefitted from a stereo image, such as wing flaps (created with shirts and a sheet) and attack swipes (created with tennis rackets and a violin bow).

Many different processing tools and plug-ins were used during editing and design. There are too many to list and everyone has their favorites but it’s worth mentioning a few that I found particularly useful. In general, an effort was made to incorporate performative control and dynamic processing (using MIDI controllers and automation) wherever possible. This can help add dynamism to static source as well as provide you with more mileage in the form of variation.

Zynaptiq Morph and Krotos’ Reformer Pro were used to combine/morph elements to create layers for final composite sounds. These tools merge the timbral qualities of an audio signal with the envelope of another. Morph can actually merge the audio of one track with the mixed sum of multiple other tracks. Reformer allows you to use a live microphone input as an envelope source.

I found both of these tools useful for creating sounds with vocal-like contours that can be built from any configuration of source elements. I used lots of friction recordings of skin, plastic, and glass, as well as various animal libraries and, of course, recordings of my own voice.
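At its core, the "timbre of one sound, envelope of another" idea reduces to an envelope follower. Here is a rough single-band sketch in Python/NumPy, assuming simple rectify-and-smooth envelope extraction; the plug-ins mentioned above are far more sophisticated, working per frequency band.

```python
import numpy as np

def envelope(signal, sr, smooth_ms=10.0):
    """Amplitude envelope via rectification plus moving-average smoothing."""
    win = max(1, int(sr * smooth_ms / 1000))
    kernel = np.ones(win) / win
    return np.convolve(np.abs(signal), kernel, mode="same")

def envelope_transfer(modulator, carrier, sr):
    """Impose the modulator's loudness contour onto the carrier's timbre."""
    n = min(len(modulator), len(carrier))
    return carrier[:n] * envelope(modulator[:n], sr)

rng = np.random.default_rng(0)
sr = 48_000
t = np.arange(sr) / sr
# Stand-in "vocal" with a pulsing contour, and a flat noise "texture" source.
growl = np.sin(2 * np.pi * 3 * t) ** 2 * rng.standard_normal(sr)
texture = rng.standard_normal(sr)
hybrid = envelope_transfer(growl, texture, sr)  # texture that pulses like the growl
```

The result breathes with the vocal performance while keeping the texture's character, which is exactly the hybrid quality described above.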

Morph plug-in source design:

Another tool that I found to be really effective was Xfer Records’ Serum synthesizer plug-in. I used this synth to create performable patches as a means to generate source for most of the robots in The Outer Worlds. I highly suggest exploring synth patches as an element if you are working on robot sound design. The ability to tweak the character of the sound is usually very deep and the ability to control key parameters with controller input allows you to create expressive material that can be incorporated as layers or designed further. Obviously you can apply this to any synth that you have access to; I worked with Serum because I think its wavetable engine is capable of both very digital and gritty sounds.
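For a flavor of what a wavetable engine is doing under the hood, here is a tiny Python/NumPy oscillator that crossfades between two single-cycle tables, loosely analogous to sweeping a wavetable position from "clean" to "gritty" (this is an illustration, not Serum's actual engine):

```python
import numpy as np

def wavetable_osc(tables, freq, morph, sr, dur):
    """Read single-cycle wavetables at `freq`, crossfading between two
    tables according to `morph` (0.0 = first table, 1.0 = second)."""
    n = int(sr * dur)
    phase = (np.arange(n) * freq / sr) % 1.0   # cycle position, 0..1
    idx = phase * (len(tables[0]) - 1)         # position within the table
    grid = np.arange(len(tables[0]))
    a = np.interp(idx, grid, tables[0])
    b = np.interp(idx, grid, tables[1])
    return (1 - morph) * a + morph * b         # linear table crossfade

cycle = np.linspace(0, 1, 2048, endpoint=False)
sine_table = np.sin(2 * np.pi * cycle)   # smooth, "clean" end of the sweep
saw_table = 2 * cycle - 1                # bright, "gritty" end of the sweep
tone = wavetable_osc([sine_table, saw_table], 110.0, 0.5, 48_000, 0.5)
```

Mapping `morph` to a MIDI controller is what makes this kind of patch performable: one knob sweeps the sound between two timbres in real time.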

Serum Hover Bot synth source design:

I also got some good use out of an outboard vocoder that was given to me by a previous co-worker (The Warpfactory by Electrix; thanks Jack!). This is an older unit and there are a number of great vocoders on the market today but I found the sound of this box to be pretty cool and very tweakable. I utilized this tool and its live microphone input for a variety of robot effects.

Running a mic through the unit, I recorded simple phrases (‘Alert’, ‘Patrolling’, ‘Death’, etc.) to generate vocoded robotic vocal source that had a vaguely human quality but was still electronic. I would later use Morph to merge these elements with the synthetic source generated in Serum. The results ended up being the majority of the sound set for the Hover Bot.

Vocoder Hover Bot vocal source design:

The vocoder also came in handy for robot effects other than vocals. I used live mic capture of a small desk fan through the vocoder and a few guitar fx pedals to create the main element in the Hover Bot’s fan engine sound. This was helpful because it gave me very immediate control over the sound I was monitoring; if it wasn’t inspiring me, I just kept twisting knobs until it did!


I highly recommend that you try effecting and modulating live microphone signals while recording. I found it to be a lot of fun, super hands-on, and full of sounds that I’m not sure I would have created through the usual methods of recording dry and experimenting with effects in a DAW later.

Recording a fan through a vocoder and guitar pedals:



Hover Bot fan engine in-game:





Designing UX/UI Sounds

What was your direction for the UI sounds? What about the player feedback sounds during gameplay, like when a player earns reputation or when a quest is added?

Ali Mohsini (AM): The UX was the first thing I touched on The Outer Worlds. It required an extensive audit and plenty of pre-production before starting design. The implementation portion was easy thanks to our incredible Programming team here at Obsidian. My design direction was influenced by the aesthetic and feel of The Outer Worlds: mysterious, reminiscent, inspiring, and gorgeous.

When sculpting the sound, I had to create a consistent experience for the player, making sure that every event worked in context and that the audio didn’t interrupt the core player experience.

Level Up

The Level Up sound took a few attempts to get right. The first few attempts didn’t really sell the idea that the player was reaching a new awesome benchmark but after some iteration, we got closer to making it feel more epic in our world. We took inspiration from what other RPGs had done before us, and then worked to make something in the same vein but still unique.

Ultimately, we decided to go with a musical Level Up sound that used the main theme. This was then sweetened with additional synth sound effects and elements, incorporating mechanical door opens, filtered drum hits, synth source made with Razor (by Native Instruments), and some additional source processed using Traveler.



Reputation Increase

Reputation was a tricky one to get right and took a while to figure out. The first iterations didn’t sit right in the game and after getting the team’s feedback, I got it into a state where it gave the right impression. It was a mix of Razor, filtered drums, and a processed generator.



Reputation Decrease

The same approach was taken here — Razor, ambient drone source, metal scrapes, a bell, some filtered weapon Foley, with more filtered drums.



Renzo Heredia (RH): The goal for inventory items was to immerse the player in the Halcyon colony — to really feel like they are the unplanned variable, picking up and using all sorts of items in the various worlds that they travel to. These items become important for the player’s journey, and it’s important that we made each item sound as unique as it looked. The level of detail we went for allowed the world to come alive through the many objects that the player will come across.

Our overall sound design philosophy and goal for inventory items was to have them represent their respective objects as close as possible. If you pick up a can, we want the player to hear a can.

Every prop we used for each sound had to match the texture and quality of the object. We took pictures of each individual item and printed them out to take to the studio for reference, to make sure that what we recorded would match the visual.

Here are some guidelines we followed for achieving this goal when recording and designing each sound:

1) Subtlety!

a. Transients had to be soft and subtle, not hard and sharp.

b. Picking them up as a player would naturally, with slow-ish speed and not too quickly.

2) Distinct

a. Every sound needed to give very clear feedback.

b. We also tried to incorporate some sort of memorable “rhythm” to each sound.

3) Short

a. They had to be no more than a second long.

– One exception was the music box.

b. Succinct to prevent unnecessary feedback.

We recorded each object based on each sub-category. So for example, we’d have two different pieces of cloth for “Armor Medium” and record each one by making various takes of grabbing/moving/wrinkling, with the guidelines above in mind. The transient points show a good estimate of how many variations were recorded:



Here are pictures of the Foley room, where various props were used and brought into the recording studio:



Here are pictures from the recording studio Foley session:



Items included (but not limited to):

• Cans

• Doorknobs

• Chains

• Different types of clothing

• Plastic wraps

• A literal sword

• Wooden poles

• Floor tiles

• Water bottles

• Candy wrappers

• Boxes

• My messenger bag

• Door latches

• Metal springs

One of the most difficult aspects of inventory sound effects was organizing all of it. Every item needed an assigned gameplay tag, some of which I needed to add, and then each tag needed to be assigned to a sound. In Unreal, each gameplay tag was mapped to an Audio Event like this:



The variety we were able to have with this audio data tool was incredible, but organizing it just right took some time since it was difficult to search for a sound or tag if we needed to go back and change a sound assignment when fixing a bug.
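One way to picture that tag-to-event mapping, purely as an illustration (the tag and event names below are invented, and Unreal's gameplay tags are hierarchical dot-paths, which allows a generic fallback at each level):

```python
# Hypothetical tag and event names, not the game's actual data.
TAG_TO_EVENT = {
    "Item.Armor.Medium": "Play_Inv_Armor_Medium",
    "Item.Armor": "Play_Inv_Armor_Generic",
    "Item": "Play_Inv_Default",
}

def resolve_audio_event(tag):
    """Walk up the tag hierarchy until a mapped Audio Event is found."""
    parts = tag.split(".")
    while parts:
        event = TAG_TO_EVENT.get(".".join(parts))
        if event is not None:
            return event
        parts.pop()
    return None
```

A hierarchy like this also hints at why bug-fixing was tedious: changing one assignment means knowing which level of the hierarchy actually served the sound.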

Making such a large variety of inventory sound effects for The Outer Worlds will be something we’ll always be proud of.





Designing the Dialog System

Sound-wise, what was the biggest challenge in working on this game? Technically? Creatively?

Tony Blackwell (TB): I think the biggest challenge was the dialog system. Because The Outer Worlds is a true RPG, it introduces the deep element of player choice that is inherent to the genre. The moment you give the player that much agency, you need to account for the different possible outcomes and scenarios that can occur.

In our game, this meant managing and successfully mapping thousands of conversation nodes, across the wide range of characters, in a way that would feel natural to the player.

To manage this, we used an internal proprietary tool that gives us the ability to keep track of conversations and how the audio is being triggered.

The example below shows only about 25% of a conversation with a single major NPC during a single specific encounter in the game.



For each box, or “node,” there are potentially multiple VO lines spoken by a single character, and the branching represents the different potential paths the conversation could take. The different colors represent things like conditional checks, for example to check if the player has a high enough Persuasion skill level to go down a specific branch of dialog. When you consider that this is only a single conversation, it gives you an idea of the scope of the technical challenge we had in creating a dialog system that was adaptable to the player, whilst still feeling natural. For those that love statistics, our chatter system comprised around 35,000 individual lines of dialog, whilst our main conversation system was around 70,000 lines.
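In spirit, a branching node with conditional checks can be pictured as a minimal sketch like this (Obsidian's actual tool is proprietary; the line IDs and the skill threshold here are invented):

```python
from dataclasses import dataclass, field

# Hypothetical node structure for illustration only.
@dataclass
class DialogNode:
    lines: list                                   # VO line IDs spoken at this node
    branches: list = field(default_factory=list)  # (condition, next_node) pairs

    def next(self, player):
        """Follow the first branch whose conditional check the player passes."""
        for condition, node in self.branches:
            if condition(player):
                return node
        return None                               # conversation ends

backs_down = DialogNode(lines=["VO_guard_backs_down"])
fight = DialogNode(lines=["VO_guard_attacks"])
root = DialogNode(
    lines=["VO_guard_challenge"],
    branches=[
        (lambda p: p["persuade"] >= 55, backs_down),  # skill-gated branch
        (lambda p: True, fight),                      # fallback branch
    ],
)
```

Every node multiplies the paths the audio team has to track, which is why mapping tens of thousands of lines onto structures like this was the project's biggest technical challenge.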


Chatter and Voiceprints

A particular feature that was a huge creative challenge was how to make reactive chatter happen across our six companions, when you can have up to two at once. We wanted them to feel like real people (or robots), who were reacting to you as a player and to each other in a way that seemed natural. A great example of this is shown in the interaction between S.A.M and Vicar Max. This kind of interaction will only happen based on certain conditional checks, and when the player is questing with those specific companions.

We also had the challenge of making the planets, towns, and cities of Halcyon seem alive with the people that you come across. To do this, we used a combination of what we called “Barkstrings” and Voiceprints. The Barkstrings were VO lines that play when the player comes within a certain radius of an NPC, and we used them to give additional world building lore, and to give a sense of the residents going about their daily lives.

Voiceprints allowed us to effectively give every minor NPC in the game some form of dialog. To have a custom line for every NPC was outside the scope of our budget (and would have added many thousands of additional lines to uniquely cast and record), so the creative solution that we developed was to have Voiceprints.

Each Voiceprint contained a generic set of lines that could then be used across multiple NPCs throughout the game. We ended up using 32 separate Voiceprints (16 male, 16 female) of around 700+ lines each, with each covering a different vocal performance and delivery style. The lines ranged from exertions to reactive lines that would, for example, be triggered in combat. The creative challenge was how to apply these Voiceprint lines to all of our NPCs in a way that still made them feel unique and non-repetitive.
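One plausible way to picture that "shared but stable" assignment, purely as an illustration (the naming and hashing scheme are assumptions, not Obsidian's implementation):

```python
import hashlib

NUM_VOICEPRINTS = 16  # per gender in this sketch; 32 total

def voiceprint_for(npc_id, gender):
    """Hash a minor NPC's ID to a stable Voiceprint, so the same NPC
    always speaks with the same voice across sessions."""
    digest = hashlib.md5(npc_id.encode()).hexdigest()
    return f"VP_{gender}_{int(digest, 16) % NUM_VOICEPRINTS:02d}"

class ChatterPool:
    """Cycle through a Voiceprint's line variations so nearby NPCs
    sharing the same print don't repeat a delivery back to back."""
    def __init__(self, lines):
        self.lines = list(lines)
        self.cursor = 0

    def next_line(self):
        line = self.lines[self.cursor]
        self.cursor = (self.cursor + 1) % len(self.lines)
        return line
```

Deterministic assignment plus rotation through variations is one simple way a shared line set can still read as dozens of distinct residents.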

Here’s a high level overview of how the Voiceprint Chatter system worked:







Audio in UE4

The Outer Worlds was created using Unreal Engine 4. Sound-wise, was this a good fit for the audio department? Why or why not?

Jerrick Flores (JF): Using Unreal Engine 4 (UE4) as our game engine was beneficial for the audio department; through various features available in UE4, we decoupled ourselves from dependencies on other departments and were able to develop the majority of the time in a self-sufficient and independent manner.


Namely, two great features in UE4 that we used heavily were Sub Levels and Blutilities (aka Editor Utilities):

Sub Levels

All areas in The Outer Worlds were made of many Sub Levels. The file type that makes up these levels, a .umap, required exclusive checkout because it is a binary file, meaning that only one person could work on a level at a time. As such, if audio work affected emitters or data on a map that another person on the team was using, that work created blocking complications simply because two individuals could not work on the same level at the same time.

It was common for these blockages to occur on audio work that had to be done on art or scripting Sub Levels, as those were in a constant state of edit. To work around this, we created new audio Sub Levels for these areas and then moved nearly all of an area's audio data — ambient emitters, volumes for ambiences and spatialization, audio scripting logic, etc. — into them, separating it out from the other Sub Levels:



This separation then let us work completely independently from those other maps and even allowed us to easily filter what we needed to work on:



[GIF:Q9_2_ActorFilterWithCursor.gif]

With a completely isolated workflow, we now only had to manage and communicate about internal audio department conflicts on the same map, as opposed to trying to coordinate with the whole development team. This further helped us down the line when automated build machines needed to work on all Sub Levels to build things like lighting. Since audio data was decoupled from anything that a build machine actually needed to touch, we could work on files without any concern that our work would somehow affect the build machines’ work. Without this kind of separation provided by UE4, development would have been really slow going, with developers’ time wasted waiting for maps to get checked back in.

Blutilities (aka Editor Utilities)

There are many pipelines throughout all of game dev that can only be described as a “manual process,” or a procedure that requires someone to go in and set things up by hand. Depending on how widely that pipeline is used throughout the game, the amount of time and effort expended doing things by hand can balloon to unreasonable amounts.

The obvious way around this problem is to automate those processes. However, getting any kind of automation integrated into your pipeline can be complicated and problematic. Luckily for us, Unreal Engine 4 has a method to create blueprint scripts specifically for the editor that can automate manual processes: Blutilities (or Editor Utilities if you are reading newer UE4 documentation). Through Blutilities, we can have a script that automatically goes through an entire level, looks at certain objects in the level, and edits properties on those objects based upon a specific logic flow. These Blutilities can essentially take the same workflow and logical decisions that a person would make manually and have the editor run those actions and make those decisions itself:



The above is the Blutility logic to automate the process of adding ambient emitters to meshes throughout an area:

1) This logic goes through all the objects in a level and determines if they are meshes, and thus potentially on the list to receive an emitter. This is equivalent to a person scrolling through the world outliner looking at all objects and determining if they might need an ambient emitter, which is a long process that is at risk of human error, as a person can miss objects while parsing through everything in an area.

2) This node determines whether a mesh is a normal mesh or an instanced mesh, which then determines whether it follows the normal mesh logic flow (step 3, skipping 5 and 6) or the instanced mesh logic flow (skipping steps 3 and 4, going to 5). The real-world equivalent is someone checking a mesh's object type to determine if it is a normal or instanced mesh.

3) If the mesh is a normal mesh, this logic then determines if it is a mesh that needs a sound (and what sound that is). It is equivalent to a person looking at the mesh and seeing if it is of a type that needs a sound, which is prone to human error.

4) This logic goes ahead and creates the ambient emitter with the appropriate settings and moves it to the right location for this mesh. Manually doing this takes a lot of time and errors are easy to make as there are a lot of settings to set and a lot of setting combinations to remember.

5) If the mesh is an instanced mesh, this logic then determines if that instanced mesh needs a sound and, if so, what that sound is and where all the instances of the mesh are. It then creates ambient emitters at all the instance locations with the same sound settings. The manual process is the same; however, it can lead to many mistakes, as it can be hard for a person to determine where all the instances of a mesh exist.

6) We name the emitter here programmatically, based upon details like what sound is attached to the ambient emitter and so on. Normally, the name is entered manually, which, when typing strings by hand, is rife with mistakes like misspellings.

The script above can be run on every level, generating ambient emitters for a whole area without mistakes in a matter of minutes. Traditionally, this process would take hours or even days, and it is more likely than not to contain errors. Here is a small gif of what the tool can create:



[GIF:Q9_4_Emitter_Spawner.gif]
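The six steps above amount to a walk-filter-spawn loop. Here is a plain-Python sketch of that logic (the real tool is a UE4 Blutility operating on editor actors; the mesh and sound names below are invented for illustration):

```python
# Hypothetical mesh-to-sound table: steps 3/5, which meshes get which loop.
MESH_SOUNDS = {
    "SM_Generator": "Amb_Generator_Loop",
    "SM_VentFan": "Amb_Fan_Loop",
}

def spawn_emitters(level_actors):
    emitters = []
    for actor in level_actors:                      # step 1: walk every object
        if actor.get("type") not in ("mesh", "instanced_mesh"):
            continue
        sound = MESH_SOUNDS.get(actor["mesh"])      # steps 3/5: needs a sound?
        if sound is None:
            continue
        # step 2: instanced meshes carry many transforms, normal meshes one
        if actor["type"] == "instanced_mesh":
            locations = actor["instances"]
        else:
            locations = [actor["location"]]
        for i, loc in enumerate(locations):         # steps 4/5: create emitters
            emitters.append({
                "sound": sound,
                "location": loc,
                # step 6: programmatic naming avoids hand-typed strings
                "name": f"Emitter_{actor['mesh']}_{sound}_{i}",
            })
    return emitters
```

The payoff of encoding the decisions this way is exactly what the team describes: the loop never skips an object, never mistypes a name, and handles every instance of an instanced mesh.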

Being able to easily create and iterate on these Blutilities internally on the audio team allowed us to find the shortcomings in our pipelines and automate them, freeing up the team to work on other parts of the game.


Overall, UE4 had wonderful features in it that allowed the audio team to be self-sufficient. Thanks to these UE4 features, we worked in our own levels that had our own scripts and emitters and created our own automation tools. We were able to work and improve pipelines without dependencies on other developers of the team, which became crucial when everyone on the team became more occupied with their responsibilities, as we could continue our work ourselves.





Sound Teams’ Favorite Bits

What is something you can’t wait for players to hear in this game?

DH: I’d have to say the Geothermal Plant is one of my favorite sounding areas from an environmental audio standpoint. There’s a lot of detail in there and a really cool big glass pipe with a stream of lava moving through it that I think sounds cool and makes the player feel like they’re in a different world. I hope people enjoy it. Also, the Gloop Gun! That thing sounds so great!

AM: The mix. I’m really happy with the level of fidelity each part reached, making the whole soundscape shine. It is the most gratifying game I’ve ever been a part of, and if you hear it, it’ll be easy to understand why.

RH: I love hearing players react to scripted moments. I’m excited for them to hear the times they divert energy somewhere or when they unlock specific doors.

JB: I hope folks appreciate all the little details the team put into this game across the board. Every last sound, line of dialog, and piece of music had tons of thought put into it. It’s crazy to think back on. The audio effort took about 110 development months to complete when you add up all the time everyone spent on it. That’s roughly nine years of time, condensed down into the space of two years.

I also deeply hope players enjoy the music. I wrote it for them, and a little piece of my soul gets taken and added to every score I write. Oh, and I think the corporate jingles turned out pretty good! Tony wrote a bunch of bossa nova muzak versions that we play during some elevator rides, and those are really great too.


SG: I enjoy when players use their companions’ abilities in combat. These were a lot of fun to design and I think they make for some cool, over-the-top moments where companions get a chance to show some of what makes them special and powerful.

ZS: Definitely the science weapons; they were some of the most unique and fun things I’ve ever had to design for. Each one had its own set of challenges but that’s what made them so rewarding.

JF: It was a last minute addition I managed to find a way to hook up, but nonetheless — some of the critters of the game can be talked to, like the Teacup Canids, the Chickens, and Bubbles. It is the cutest thing to chase them around and “talk” to them, as they respond with small little critter sounds. It really makes you wonder what they are thinking about…

What are you most proud of in terms of sound on The Outer Worlds?

AM: The finished product. The mixture of talent, craftsmanship, and love. We did this through hard work, collaboration, and sacrifice. The culmination of that is the sound of The Outer Worlds.

DH: Probably how well the game manages to immerse the player. I think our team really came together to form a soundscape where things sound unified and like everything belongs in the same universe. It’s very enjoyable to listen to and lets the player get lost in exploring our game!

JF: Personal work-wise, I am proud of the chatter system in the game. Everyone in the game has something to say, out of combat and in combat. Creating a system to hook all that up and then have that system present well was quite a labor of love!

From my perspective, I am proud of the spatialization effect that runs throughout the game. For us on the audio team, it is the greatest instance of innovation we achieved, in my opinion. None of us were experts on how any of it worked, and all of us came together and collaborated to research it, get it wired up, get it sounding good, and then optimize it. In the theater of my mind, developing spatialization was the very pinnacle of assembling as a team of ragtag adventurers and braving the unknown together.

JB: I’m most proud of how everything came together. This project was unique and challenging in many ways. How often do you get to work on a new IP beside such talented people? It’s a rare thing, and it was very gratifying to be a part of. The team was so focused and gave it their all every day, and I think the end result really demonstrates that.

RH: For personal work, I’m most proud of the inventory items. Those took a ton of recording and gameplay tag organization to get designed and implemented in time.

For overall work that we achieved, I’m most proud of all the scripted moments we were able to get in. From large gas energy activation moments to feedback sounds for terminal selections, there are SO many sounds we put together and it’s amazing that we were able to fill up the game’s soundscape with these sounds.

SG: We all had different tasks and priorities but the ultimate goal of bringing The Outer Worlds to life with sound was shared by all of us. I’m extremely proud to see and hear every environment, weapon, creature, item, menu, cutscene, music cue, and voice-over come together to create a cohesive soundscape with its own unique identity.

TB: For me, it was the teamwork it took to get it all done. Everyone on the team worked at their best and created something that became part of a bigger whole. There is no one aspect of the audio that would work on its own were it not for the efforts of everyone else. Each person on the team was always open to feedback and there is a great sense of trust within the team. It’s a privilege to be a part of that kind of a work environment, where you can support each other and work collaboratively.

ZS: Similar to what Scott said, I’m proud of the final result! Hearing everyone’s work come together to create a living, breathing, and immersive world that reacts to the player is the most rewarding part of being a sound designer for games. The amount of content that went into this game was staggering and the dedication of everyone on the team was the only reason we succeeded.

A big thanks to the entire audio team for this hugely in-depth interview on the amazing sound of The Outer Worlds – and to Jennifer Walden for the interview!
