Rumors are swirling that Apple will buy music-identifying app Shazam, potentially spending almost $400M to do so. This acquisition will have little to do with music, and everything to do with sound.

Edit: No longer a rumor; Apple did indeed acquire Shazam, reportedly spending upwards of $400M.

With the release of ARKit, recent acquisitions in the AR space, and Tim Cook declaring AR will “change everything,” it is no secret that Apple is working on bringing augmented reality to consumers. Apple’s strategy is much larger than building headsets and displaying holograms: it involves augmenting your reality by understanding where you are in the world, what is happening around you, and how you are interacting with it. This is also known as context-aware computing, and it is what Tim Cook was really referencing when he said “the reason I’m so excited about AR is I view that it amplifies human performance.” Done right, context-aware computing will understand the world and give people a heightened sense of awareness, the equivalent of a Spidey Sense. This is why buying Shazam makes sense: it is part of Apple’s AR strategy.

Shazam has developed technology to recognize and identify music, but potentially even more impressive is its ability to work in noisy environments and across many different devices and microphones. As part of an AR experience, Apple could extend Shazam’s capability beyond recognizing music and teach it to identify all sorts of sounds around a user. That information could then be used to build context about the user’s situation or surroundings. Your AirPods could listen for the sound of the ocean to tell that you are at the beach, remind you to put on sunscreen, and queue up your summer playlist; identify the weird noise your fridge just started making and diagnose what went wrong; or warn you to move if you have wandered into the middle of the street while staring at your phone and a car is about to hit you.
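The general approach behind that noise robustness is public: Avery Wang’s 2003 paper describes reducing audio to a “constellation” of spectrogram peaks and hashing pairs of nearby peaks. Because peaks are the loudest local points in time and frequency, background noise rarely displaces them, and because the hash encodes only frequencies and a time delta, it is invariant to where in the recording you start listening. Here is a minimal illustrative sketch of that idea in Python (simplified; the function names and parameters are mine, not Shazam’s actual implementation):

```python
import numpy as np

def spectrogram(signal, win=256, hop=128):
    """Magnitude spectrogram via a windowed FFT (a bare-bones STFT)."""
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        frame = signal[start:start + win] * np.hanning(win)
        frames.append(np.abs(np.fft.rfft(frame)))
    return np.array(frames)  # shape: (time, frequency)

def peak_constellation(spec, neighborhood=4):
    """Keep time-frequency points that are local maxima -- these survive noise."""
    peaks = []
    t_max, f_max = spec.shape
    for t in range(t_max):
        for f in range(f_max):
            t0, t1 = max(0, t - neighborhood), min(t_max, t + neighborhood + 1)
            f0, f1 = max(0, f - neighborhood), min(f_max, f + neighborhood + 1)
            if spec[t, f] > 0 and spec[t, f] == spec[t0:t1, f0:f1].max():
                peaks.append((t, f))
    return peaks

def fingerprint(peaks, fan_out=3):
    """Hash pairs of nearby peaks: (freq1, freq2, time-delta) tuples.
    Using the delta (not absolute time) makes the hash offset-invariant."""
    hashes = set()
    for i, (t1, f1) in enumerate(peaks):
        for (t2, f2) in peaks[i + 1:i + 1 + fan_out]:
            hashes.add((f1, f2, t2 - t1))
    return hashes
```

In the real system the hashes are packed into compact integers and matched against a database by looking for a run of hits with a consistent time offset, which is what lets a few seconds of a song recorded on a phone in a bar identify the track.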

Buying Shazam will give Apple incredible software capabilities to understand context and augment a user’s reality. The next step is to pair that software with phenomenal hardware: in this case, really good microphone arrays and acoustic components. Recognizing that microphones won’t just go in your headphones but in everything with a voice interface, companies of all sizes are driving innovation in the acoustic components space, hoping to supply the picks and shovels for the coming voice-interface rush. In a follow-up post, I will dive into this growing landscape.

Neil Gupta

Venture Partner, Indicator Ventures