Ever hear (no pun intended) of audio watermarking? It's the process of embedding distinctive, machine-detectable sound patterns in audio, and it's a major way web video hosts, set-top boxes, and media players spot copyrighted tracks. But watermarking schemes aren't particularly reliable in noisy environments, like when the audio in question is broadcast over a loudspeaker. The resulting noise and interference, referred to in the academic literature as the "second-screen problem," severely distort watermarks and introduce delays that detectors often struggle to reconcile.

Researchers at Amazon, though, believe they've pioneered a novel workaround, which they describe in a paper newly published on the preprint server arXiv ("Audio Watermarking over the Air with Modulated Self-Correlation") and an accompanying blog post. The team claims their method, which they'll detail at the International Conference on Acoustics, Speech, and Signal Processing in May, can detect watermarks added to about two seconds of audio with "almost perfect accuracy," even when the distance between the speaker and detector is greater than 20 feet.

Better still? Unlike traditional acoustic fingerprinting methods, which require storing a separate fingerprint for each instance and have a computational complexity proportional to the size of the fingerprint database, the researchers' approach has constant complexity, which they say makes it ideally suited for low-power devices like Bluetooth headsets.

“Our algorithm could complement the acoustic-fingerprinting technology that currently prevents Alexa from erroneously waking when she hears media mentions of her name,” wrote Yuan-yen Tai, a research scientist in Amazon’s Alexa Speech group and coauthor of the paper. “We also envision that audio watermarking could improve the performance of Alexa’s automatic-speech-recognition system. Audio content that Alexa plays — music, audiobooks, podcasts, radio broadcasts, movies — could be watermarked on the fly, so that Alexa-enabled devices can better gauge room reverberation and filter out echoes.”

So how does it work? As Tai explains, the model employs a "spread-spectrum" technique in which watermark energy is spread across time and frequency, rendering it inaudible to human ears while making it robust against postprocessing (like compression). And it generates watermarks from noise blocks of a fixed duration, each of which introduces its own distinct pattern to selected frequency components in the host audio signal.
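To make the spread-spectrum idea concrete, here is a minimal sketch of that style of embedding. The function name, block length, frequency-bin range, and strength parameter are all hypothetical illustrations, not details from the paper: a fixed pseudorandom noise block perturbs selected frequency bins of each segment of the host audio, at an amplitude far below the signal itself.

```python
import numpy as np

def embed_watermark(audio, key_seed, block_len=4096, strength=0.01):
    """Hypothetical spread-spectrum embedding: a fixed-duration
    pseudorandom noise block nudges selected frequency bins of
    each audio segment, spreading watermark energy across time
    and frequency at an inaudibly low level."""
    rng = np.random.default_rng(key_seed)
    bins = np.arange(40, 400)              # assumed mid-band bins
    noise = rng.standard_normal(bins.size)  # one fixed noise block
    out = audio.astype(float).copy()
    for start in range(0, len(audio) - block_len + 1, block_len):
        seg = out[start:start + block_len]
        spec = np.fft.rfft(seg)
        # Scale the perturbation relative to the host signal so the
        # watermark stays well below audibility.
        spec[bins] += strength * np.abs(spec[bins]).mean() * noise
        out[start:start + block_len] = np.fft.irfft(spec, n=block_len)
    return out, noise
```

Because the same noise block recurs in every segment, a detector that knows (or, as below, doesn't even need) the reference pattern can look for that repetition.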

Conventional detectors would compare the resulting sequence of noise blocks — the decoding key — with a reference copy. But Tai and colleagues take a different approach: Their algorithm embeds the noise pattern in the audio signal multiple times and compares it to itself. Because said signal passes through the same acoustic environment, Tai explains, instances of the pattern are distorted in similar ways, enabling them to be compared directly.
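The self-comparison idea can be sketched in a few lines. This is an illustrative simplification, not the paper's detector: it splits the received audio into consecutive repetitions of the (unknown, channel-distorted) pattern and correlates each repetition with the next. Watermarked audio that has passed through one acoustic channel shows high cross-block correlation; unwatermarked audio does not.

```python
import numpy as np

def self_correlation_score(received, block_len=4096, repeats=4):
    """Illustrative self-correlation detector: compare successive
    repetitions of the embedded pattern with each other, rather
    than with a stored reference. The room distorts every
    repetition the same way, so the instances still match."""
    blocks = [received[i * block_len:(i + 1) * block_len]
              for i in range(repeats)]
    scores = []
    for a, b in zip(blocks, blocks[1:]):
        # Normalized correlation between consecutive repetitions.
        num = np.dot(a - a.mean(), b - b.mean())
        den = np.linalg.norm(a - a.mean()) * np.linalg.norm(b - b.mean())
        scores.append(num / den)
    return float(np.mean(scores))
```

The key point is that no reference copy of the noise pattern is needed at detection time; the repeated instances serve as each other's reference.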

"The detector takes advantage of the distortion due to the acoustic channel, rather than combating it," he added.

It's not a perfect solution: it necessitates shorter noise patterns, which correlate with lower detection accuracy, and when the target audio includes music, the rhythms sometimes too closely mimic the repeating noise pattern. But the team says both problems can be largely mitigated by modifying the repetitions of the noise block pattern: they randomly invert some of the blocks, decreasing the amplitude of each inverted block where it would normally increase and vice versa.

The decoding key, then, becomes a sequence of binary values rather than a sequence of floating-point noise blocks, indicating whether each noise block is inverted or not. (The inversions are undone at the detector stage, at which point the blocks are compared with one another as before.) In experiments, the team says the algorithm achieved almost 100 percent detection accuracy with watermarks 1.6 seconds in length.