Amazon Quickly Closes Off Potential Alexa Security Issue

A researcher at security software company Checkmarx has demonstrated a hack that enabled a developer to transcribe comments made in the presence of an Amazon Echo Dot. The hack involved keeping the microphone open for 16 seconds after a user stopped interacting with a custom Alexa skill. During that timespan, utterances captured by the Echo microphone were recorded as “slot” values that the developer could then view as transcribed text. The audio of the spoken comments was not available, as Alexa does not share that data with developers. Only the speech-to-text transcription was available for the time immediately after the skill was used. A blog post about the hack said there were two challenges the security researcher had to overcome:

1. They had to ensure the Alexa recording session would stay alive after the user received a silent response from the device.
2. They wanted the listening device to accurately transcribe the voice received by the skill.
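The first challenge maps onto the documented JSON response format that custom Alexa skills return to the voice service. The sketch below is illustrative only: the field names follow the published response shape, but the silent-SSML reprompt shown is an assumption based on the article's description of keeping the session alive after a response, not code from the Checkmarx write-up.

```python
import json

def build_eavesdrop_response():
    """Sketch of the Alexa custom-skill JSON response shape the hack
    reportedly relied on: answer the user, then keep the session open
    (shouldEndSession=False) with a silent reprompt so the microphone
    stays live and any further speech lands in a catch-all slot."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {
                "type": "PlainText",
                "text": "Two plus two is four."  # the innocuous answer
            },
            "reprompt": {
                # A silent SSML break instead of an audible question --
                # an "empty" reprompt of the kind Amazon now detects.
                "outputSpeech": {
                    "type": "SSML",
                    "ssml": '<speak><break time="10s"/></speak>'
                }
            },
            "shouldEndSession": False  # keeps the recording session alive
        }
    }

print(json.dumps(build_eavesdrop_response(), indent=2))
```

A legitimate skill would pair `shouldEndSession: False` with an audible reprompt question; the combination of an open session and silence is what made the behavior suspicious.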

A YouTube video demonstrates the hack in action. The researcher first accesses first-party information to demonstrate that the device is actively connected to the Alexa voice service. He then opens what is presumably a third-party custom skill and asks it to perform a simple math function. After he completes that, he continues talking, and his utterances are transcribed as “slot” values in his Alexa skill logs. They provide a very good facsimile of what he said during the time the microphone remained open.

Vulnerability is Closed, but Was it Really a Risk?

The researcher provided the findings to Amazon before disclosing them publicly. Amazon’s Lab126 worked with Checkmarx and ultimately implemented three changes, according to the blog post:

1. Setting specific criteria to identify (and reject if necessary) eavesdropping skills during certification
2. Detecting empty reprompts and taking appropriate actions
3. Detecting longer-than-usual sessions and taking appropriate actions

On the first point, Amazon now has specific criteria to address the potential for this type of issue, but its existing procedures likely would have caught it anyway. Michael Myers, chief product officer at XAPPmedia, commented:

“This is a good party trick, but there was little risk for users. First you need to get someone to use your skill. And, having a popular skill is likely to be far more lucrative than hoping to catch someone’s conversation. If no one says anything after you get your answer, then Alexa just shuts off. You have just a few seconds to capture information. Besides, the implementation would not have passed certification.”


Myers appears to be right. Sources with knowledge of the situation told Voicebot that no Alexa skills with this type of implementation were ever certified or accessed by users. The video demonstration shows the developer using Alexa in a test environment. And there is another important point. The hacker did not disable the light ring, which is a visual indicator to users that the microphone is active and Alexa is listening for an interaction. That did not prevent recording, but it is a safeguard that was already in place.

The other safeguards, listed in points 2 and 3, can be enforced programmatically to ensure this type of hack doesn’t recur, and Amazon can scan existing skills to determine whether these approaches have been added to them. Amazon’s official statement on the matter tacitly confirms the Checkmarx conclusion by saying:
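As an illustration of how such a programmatic scan might work, the hypothetical check below flags a skill response that keeps the session open while reprompting with nothing, or with only silence. Everything here, including the function name and the heuristics, is an assumption for illustration; Amazon has not published how its detection actually works.

```python
import re

# Matches SSML that contains nothing but <break/> pauses, i.e. silence.
SILENT_SSML = re.compile(r"^<speak>\s*(<break[^>]*/>\s*)*</speak>$")

def looks_like_eavesdropping(skill_response: dict) -> bool:
    """Hypothetical certification check in the spirit of Amazon's first
    two mitigations: flag a response that keeps the session (and thus
    the microphone) open while saying nothing to the user."""
    resp = skill_response.get("response", {})
    if resp.get("shouldEndSession", True):
        return False  # session ends normally; nothing to eavesdrop on
    reprompt = resp.get("reprompt", {}).get("outputSpeech", {})
    if not reprompt:
        return True  # open session with no reprompt at all
    if reprompt.get("type") == "PlainText":
        return not reprompt.get("text", "").strip()  # empty text reprompt
    if reprompt.get("type") == "SSML":
        return bool(SILENT_SSML.match(reprompt.get("ssml", "").strip()))
    return False
```

A real scanner would also need the third mitigation, session-length telemetry, which cannot be judged from a single response payload.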

Customer trust is important to us and we take security and privacy seriously. We have put mitigations in place for detecting this type of skill behavior and reject or suppress those skills when we do.

The Bottom Line

The Checkmarx exploit is the first reported software hack of an Alexa device. There was a reported exploit in 2017 by MWR Labs, but that involved physically accessing an Echo and modifying the equipment. It wasn’t exactly a scalable attack vector. Another exploit, called the DolphinAttack, also emerged in 2017, after researchers in China demonstrated how voice commands transmitted at ultrasonic frequencies could activate and exercise limited control over voice assistants such as Siri, Alexa, and Google Assistant. Again, that attack required close proximity to an Alexa-enabled device. Checkmarx appears to be the first to successfully implement malicious behavior in a voice app such as an Alexa skill in a way that would let a hacker execute an exploit from anywhere.

John Kelvie, CEO and founder of voice app testing and monitoring software company Bespoken, also points out that a user would have to know about the skill, open the skill, stop using the skill and then say something of interest to the hacker in the ensuing 16 seconds. This is not likely to yield much useful or voluminous information. Kelvie commented in an email interview with Voicebot:

“The key thing is you have to have launched the skill, and you only get access to what is said immediately after the user ceases to use it. This is not likely to yield particularly interesting data, especially considering that without account linking, the user is anonymous. I do tip my hat to the person that wrote it up. They will benefit from the public’s interest and concerns around these listening devices, which though I believe is overblown, is a nice way to get publicity.”

The bottom line is that this is not a particular concern for users today: there is no evidence the vulnerability was ever exploited, and Amazon appears to have closed it off. Expect to see more attempts at hacking Amazon Alexa, Google Assistant, and Apple Siri in the coming years. It is a great story that will drive media interest even if the purported exploits don’t amount to much. Also, it is worth calling out Checkmarx’s contribution to the space by finding a potential vulnerability and working with Amazon to patch it before any harm came to users or the company. It is great to see collaboration on the security front between researchers and the voice platforms.

Follow @bretkinsella
