Pitch could be measured in hertz and loudness in decibels, but other phenomena were not so easily quantified. Human hearing can track the location and movement of a sound source with surprising accuracy. It can distinguish timbre, the difference between a clarinet and a saxophone. It can remember patterns of speech, identifying a friend's voice on the phone years after last hearing it. And a parent can effortlessly sift the sound of an infant's cry from the blare of a televised football game.

Finally there were the imponderables, things we do with our hearing simply because we can. “Everyone knows the sound of a bowling ball as it rolls down the alley,” said William M. Hartmann, a Michigan State University physicist and former president of the Acoustical Society of America. “What is it about that sound that we can identify?”

For much of the 20th century, engineers devoted themselves to developing acoustical hardware like amplifiers, speakers and recording systems. After World War II, scientists learned how to use mathematical formulas to “subtract” unwanted noise from sound signals. Then they learned how to make sound signals without any unwanted noise.
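One common way to "subtract" noise from a signal, in the spirit the article describes, is spectral subtraction: estimate the noise's frequency-domain magnitude and remove it from the signal's spectrum. The sketch below is an illustrative assumption, not the specific postwar technique; the function name and approach are my own.

```python
import numpy as np

def spectral_subtract(signal: np.ndarray, noise_profile: np.ndarray) -> np.ndarray:
    """Subtract an estimated noise magnitude spectrum from a signal.

    Both inputs are time-domain arrays of the same length; noise_profile
    is a recording of the noise alone. Magnitudes that would fall below
    zero after subtraction are clipped to zero.
    """
    spec = np.fft.rfft(signal)
    noise_mag = np.abs(np.fft.rfft(noise_profile))
    clean_mag = np.maximum(np.abs(spec) - noise_mag, 0.0)
    # Keep the original phase; replace only the magnitude.
    clean = clean_mag * np.exp(1j * np.angle(spec))
    return np.fft.irfft(clean, n=len(signal))
```

Given a tone buried in a steady hum, subtracting the hum's spectrum recovers the tone almost exactly; real-world noise is less cooperative, which is what made the problem hard.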

Next came stereo. By recording two tracks, engineers could localize sound for the listener. “Simple enough,” said Alan Kraemer, chief technology officer for SRS Labs, an audio company in Santa Ana, Calif. “If something’s louder on one side, you’ll hear it on that side.”

But stereo involved little real psychoacoustics. It created an artificial sense of space with a second track, yet it manipulated only one variable, loudness, and improved the effect simply by advising listeners to separate their speakers.
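Kraemer's "louder on one side" rule is exactly how stereo panning works. A minimal sketch, assuming the standard constant-power pan law (the function name is my own):

```python
import math

def pan_gains(position: float) -> tuple[float, float]:
    """Constant-power pan law: position -1.0 (full left) to +1.0 (full right).

    Returns (left_gain, right_gain). The squared gains always sum to 1,
    so total power stays constant as the source moves across the image.
    """
    angle = (position + 1.0) * math.pi / 4.0  # map [-1, 1] -> [0, pi/2]
    return math.cos(angle), math.sin(angle)

# A source panned halfway right is simply louder in the right channel.
left, right = pan_gains(0.5)
```

Nothing about the listener's hearing is modeled here; the loudness difference alone creates the illusion of position, which is the limitation the passage describes.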

The digital age changed all this, allowing engineers to manipulate sound in ways that had never been tried before. They could create sounds that had never existed, eliminate sounds they did not want and use constant changes in filter combinations to deliver sound to listeners with a fidelity that had never before been possible.
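The "filter combinations" mentioned above can be as simple as chained recursive filters whose coefficients change on the fly, something analog hardware could not easily do. A minimal illustrative sketch, assuming a basic one-pole low-pass filter (not any specific product's design):

```python
import numpy as np

def one_pole_lowpass(x: np.ndarray, alpha: float) -> np.ndarray:
    """One-pole low-pass filter: y[n] = alpha*x[n] + (1 - alpha)*y[n-1].

    Smaller alpha means heavier smoothing. In a digital system, alpha can
    be recomputed every sample, allowing the constant filter changes the
    passage describes.
    """
    y = np.empty(len(x), dtype=float)
    acc = 0.0
    for n, sample in enumerate(x):
        acc = alpha * sample + (1.0 - alpha) * acc
        y[n] = acc
    return y
```

Fed a step input, the output rises smoothly toward the input level rather than jumping, which is the smoothing behavior such a filter provides.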