Two years ago, Sean Murray, a video-game developer from the town of Guildford, outside London, announced an ambitious game that he had been working on in secrecy with a small team: a fully explorable digital cosmos, called No Man’s Sky (which I wrote about in the magazine this week). When the digital universe is complete, it will contain more than eighteen quintillion planets; it would take billions of years, Murray has calculated, for one player to explore them all. To create a representation of space that is so vast, Murray’s studio, Hello Games, is using a technique called procedural generation: equations that draw upon random numbers to build naturalistic features, such as solar systems, planets, flora, and fauna.
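The key to a universe that is generated rather than stored is determinism: the same coordinates must always yield the same planet, for every player, without a database. A minimal sketch of the idea, assuming a hash of coordinates as the random source (the function and attribute names here are illustrative; Hello Games’ actual generator is unpublished):

```python
import hashlib

def planet_attributes(x, y, z, universe_seed=42):
    """Derive a planet's traits deterministically from its coordinates.

    Nothing is stored: identical coordinates always hash to identical
    traits, so a quintillion-planet universe needs no database.
    (Illustrative sketch only, not Hello Games' actual code.)
    """
    digest = hashlib.sha256(f"{universe_seed}:{x},{y},{z}".encode()).digest()
    return {
        "radius_km": 2000 + digest[0] * 40,    # 2,000-12,200 km
        "has_atmosphere": digest[1] % 4 != 0,  # roughly three in four
        "temperature_c": digest[2] - 128,      # -128 to 127 degrees
        "fauna_species": digest[3] % 32,       # 0-31 species
    }

# Revisiting the same coordinates reproduces the same planet.
assert planet_attributes(10, -4, 7) == planet_attributes(10, -4, 7)
```

Because everything is recomputed on demand, a planet can be rendered only at the moment of encounter and discarded afterward.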

When I visited Hello Games earlier this year, I expected to meet people working to figure out how the universe would look, and how it would function. One thing I hadn’t considered was the game’s audio. But, it turns out, No Man’s Sky poses a complicated question for its designers in this regard: How can you imbue this universe with naturalistic sound, from dinosaur grunts to whirring star-ship engines? The answer, initially, was unclear. Sound design in games, as in films, typically begins with the known; once Steven Spielberg decided that “Jurassic Park” would contain velociraptors and the Tyrannosaurus rex, his sound technicians knew they had to create plausible voicing for such creatures. But No Man’s Sky’s cosmos will be inhabited by countless imaginary organisms, based on random morphology, in environments shaped by chance. It will contain modular ships and architecture of unpredictable size and design. And everything in the game will be rendered only at the moment of encounter.

The job of solving the game’s sound problem, I learned, had fallen to Paul Weir, a composer and sound designer who specializes in the creation of audio for films, games, banks, and large stores, like Harrods. One morning at Hello Games, I climbed a narrow set of wooden stairs to the studio’s second floor, where members of the audio team were working in a tiny trapezoidal room. Weir was seated at a computer flanked by speakers on pedestals. He has a compact, wiry body, which he maneuvers with easy formality. (He was the only person in the studio wearing a tie.) He maintains an affiliation with the London office of Microsoft Studios, where he serves as audio director, and he is also well-versed in science fiction. Among other projects, he has worked on a BBC radio drama based on “The Hitchhiker’s Guide to the Galaxy.”

That morning, two programmers were with Weir, and a discussion was under way about how to fix unanticipated reverb that was appearing in the game. Like many parts of No Man’s Sky, Weir’s system was evolving through cycles of disassembly and reassembly. Working on his own for months, Weir had been making adjustments to game sounds divorced from their graphical embodiments, but that week he was trying to stitch key elements of his system back into No Man’s Sky’s master build. When I walked in, he was testing audio for a spaceship. The sound was multi-dimensional: layered with noises thrumming at alternating pitches, and rich in overtones, it seemed to be the byproduct of a genuine mechanism—combustion at the intensity of high-energy rocket thrust. “I love harmonic complexity,” Weir said. The source of the sound, which he had manipulated, was a hand dryer. “I always carry a recorder. For a lot of jobs, particularly for a game like this, I have this rule: the sounds have to be a hundred-per-cent original. We haven’t sourced anything from sound libraries, but a lot of games would.”

No Man’s Sky will make use of a wide range of atmospheric sound. Fly past a cluster of stars in its 3-D galactic map, and you will hear a shimmering noise that gives the universe a bejewelled quality. The game will also contain a soundtrack by the band 65daysofstatic, recorded in an old church. Some of the band’s music will be fragmented into micro-segments, which the game’s algorithms will recombine into ambient soundscapes uniquely tailored to what players see.
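The recombination of recorded micro-segments can be sketched simply: tag each fragment with a mood, let the game state select among them, and seed the shuffle so a given scene always sounds the same. This is a hypothetical illustration, assuming mood tags and label strings in place of audio clips; the band’s actual system is not public.

```python
import random

def ambient_mix(segments_by_mood, mood, bars=8, seed=None):
    """Recombine pre-recorded micro-segments into an ambient bed.

    `segments_by_mood` maps a mood tag to a list of short clips (here
    just labels); the game state picks the mood, and a seeded choice
    keeps each scene's soundscape reproducible. (Hypothetical sketch.)
    """
    rng = random.Random(seed)
    pool = segments_by_mood[mood]
    return [rng.choice(pool) for _ in range(bars)]

segments = {"calm": ["pad_a", "pad_b", "piano_1"], "tense": ["drone_x", "stabs"]}
playlist = ambient_mix(segments, "calm", bars=4, seed=7)
```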

Weir told me that No Man’s Sky’s biggest audio challenge was the creatures. Vocalizations for the dinosaurs in “Jurassic Park” were amalgams of field recordings, too: distorted utterances of whales or terriers. (The Tyrannosaurus rex’s roar was from a baby elephant.) But Weir did not have the budget to make high-quality field recordings of exotic animals, nor did it make sense to do so, since there was no way to predict what individual animals in the game would look like, or do, or even what environment they would be in. Instead, he decided to endow each creature with a unique digital vocal tract, to simulate sounds consistent with its appearance. Rather than working against the game’s algorithmic chaos, he would embrace it. Weir knew of no other game that did such a thing, and he thought that a few programmers would be required to complete the coding on schedule. Then he reached out to a Scottish programmer and game designer named Sandy White. (In 1983, White’s game Ant Attack was among the first to use 3-D graphics.) In two months, White wrote the necessary software.

White had flown down from Edinburgh for the week to work on the game, and he was seated next to Weir. “The whole issue is: how do you synthesize creature vocals without them sounding synthetic?” he said. “Because the danger is that they will sound like Stephen Hawking.”

Our brains are very adept at detecting patterns, and the reason synthetic voices typically sound artificial is that they are carried on sound waves that have a regular frequency: unvarying up-down-up-down modulations that are unmistakably inorganic. White suspected that if he built digital vocal cords (stimulated by columns of mathematically simulated air), the system would achieve naturalism. “The first results were a bit like the squeaker out of a dog toy,” he said, which wasn’t surprising: blow through the mouthpiece of a clarinet without the instrument, and the effect is similar. White then added a digital version of the pharynx, which sits behind the mouth and nasal cavity; it served as a resonator, amplifying sounds produced by the vocal cords, but also altering their texture. The squeaks became elongated. He called this the system’s “trumpety-chicken-duck-whale-car-horn phase.” By the end of January, several weeks after he had started programming, he added a digital mouth—the final component necessary for a rudimentary virtual vocal tract. Then he set about giving his creation a voice.
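The architecture White describes is, in essence, a source-filter model: a buzzy, reed-like source (the cords) passed through a resonator (the pharynx) that amplifies and colors it. A minimal sketch, assuming a sawtooth pulse train as the source and a standard two-pole resonator as the filter (this is a textbook construction, not White’s actual code):

```python
import math

def glottal_source(f0, duration, sr=16000):
    """Crude 'vocal cord' source: a sawtooth pulse train at pitch f0.

    On its own this sounds like the squeaker in a dog toy, a buzz
    with no body behind it.
    """
    n = int(duration * sr)
    return [2.0 * ((t * f0 / sr) % 1.0) - 1.0 for t in range(n)]

def resonator(signal, freq, bandwidth, sr=16000):
    """Two-pole resonator standing in for the pharynx.

    It amplifies energy near `freq` and colors everything else,
    elongating the squeak into something throatier.
    """
    r = math.exp(-math.pi * bandwidth / sr)
    a1 = -2.0 * r * math.cos(2.0 * math.pi * freq / sr)
    a2 = r * r
    out, y1, y2 = [], 0.0, 0.0
    for x in signal:
        y = x - a1 * y1 - a2 * y2
        out.append(y)
        y1, y2 = y, y1
    return out

# A 110 Hz "cord" buzz shaped by a 600 Hz pharyngeal resonance.
voiced = resonator(glottal_source(110, 0.5), freq=600, bandwidth=120)
```

Stacking more resonators in series, for the mouth and nasal cavity, is what turns this from a car horn into something approaching a voice.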

Every vowel is defined by narrow bands of frequencies, known as formants, which are created by the vocal tract as a whole—the way sound resonates throughout all its parts. White found a paper from 1962, titled “A Study of Formants of the Pure Vowels of British English.” The paper, based on recordings of twenty-five male subjects, contained a table of the relevant data. Late one night, alone in his Edinburgh studio, he copied the values for a vowel labeled “/a/ hard” and plugged them into his system. The digital resonance that White had created—with vocal cords, pharynx, and mouth all affecting each other—caused the utterance to take on human character, and the result was a blood-curdling scream. The voice broke, twisted, and grew hoarse during moments of high intensity. White gave me an MP3 of it, and I later played it for two people without telling them what it was. Both thought it came from an animal; one wondered if it was a person being tortured, and the other wondered if it was a goat. White recalled, “Two o’clock in the morning, headsets in, and the thing went ‘Aaaaahhhhh.’ I was sweating because it was so scary. But I was also like, This is working!”
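The power of formants is that a pair of numbers is nearly enough to pin down a vowel. A small sketch of the idea, using typical textbook figures for the first two formant frequencies rather than values copied from the 1962 paper:

```python
# Approximate first two formant frequencies (Hz) for a few British
# English vowels. These are typical textbook figures, not the values
# from the 1962 study White used.
FORMANTS = {
    "/a/": (740, 1180),
    "/i/": (280, 2250),
    "/u/": (310, 940),
    "/e/": (480, 2080),
}

def nearest_vowel(f1, f2):
    """Identify the vowel whose formant pair best matches measured peaks."""
    return min(FORMANTS, key=lambda v: (FORMANTS[v][0] - f1) ** 2 +
                                       (FORMANTS[v][1] - f2) ** 2)

assert nearest_vowel(700, 1200) == "/a/"
```

Run in reverse, the same table drives synthesis: tune a vocal tract’s resonances to a vowel’s formant values, and the output starts to sound like that vowel, which is what made White’s system scream.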

From there, White set about making the system adaptable, so that it could be used for animals of myriad species and sizes. The week before I visited Guildford, he had designed a tool—using an off-the-shelf app on an iPad—to let Weir use his software to generate the creature sounds, which would then be incorporated as code into the game. Sitting in his office, Weir held the iPad in his hand. The device had a hybrid user interface: part laptop trackpad, part MIDI board—the kind you might see in a sound studio, with many sliding levers—and part theremin. “A player would never see this,” he said. Weir explained that he would use the iPad to perform vocalizations only for creature archetypes; then a set of algorithms would mutate each performance, to adapt it to the countless variations of that creature in the game. The process is analogous to the way the creatures are designed graphically: using tools like Photoshop, artists at Hello Games create archetypes that algorithms then transform.
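The archetype-and-mutation step can be sketched as seeded jitter on a parameter set: one performance, deterministically perturbed per creature. The parameter names below are illustrative (the real tool exposes roughly a hundred), and the spread value is an assumption, not Hello Games’ figure.

```python
import random

def mutate_voice(archetype_params, creature_seed, spread=0.15):
    """Derive a per-creature voice from an archetype performance.

    Each creature's seed deterministically jitters the archetype's
    parameters, so one performance yields countless related voices.
    (Parameter names and the 15% spread are illustrative assumptions.)
    """
    rng = random.Random(creature_seed)
    return {k: v * (1.0 + rng.uniform(-spread, spread))
            for k, v in archetype_params.items()}

archetype = {"body_mass": 80.0, "screechiness": 0.3, "windpipe_length": 1.2}
voice = mutate_voice(archetype, creature_seed=12345)
```

Because the mutation is keyed to the creature, every player who meets the same animal hears the same voice, yet no two species sound quite alike.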

Raising the iPad, Weir said, “It feels like an instrument.” He offered to play it. Drawing his finger across the screen, he nudged the lever bars to indicate attributes like body mass, aggressiveness, windpipe length, wetness, screechiness, harshness. (The software makes sounds based on roughly a hundred different parameters.) Then, while moving his thumbs across two graphical boxes on the iPad—one labelled “vowel map,” the other “pitch”—and simultaneously twisting the device in space, he generated a vocalization. The iPad’s physical movement determined the energy behind the utterance: the arc of the motion shaped the contour of the sound.