Security researchers with SRLabs have disclosed a new vulnerability affecting both Google and Amazon smart speakers that could allow hackers to eavesdrop on or even phish unsuspecting users. By uploading a malicious piece of software disguised as an innocuous Alexa skill or Google action, the researchers showed how you can get the smart speakers to silently record users, or even ask them for the password to their Google account.

The vulnerability is a good reminder to keep a close eye on the third-party software that you use with your voice assistants, and to delete any that you’re unlikely to use again where possible. There’s no evidence that this vulnerability has been exploited in the real world, however, and SRLabs disclosed their findings to both Amazon and Google before making them public.

In a series of videos, the team at SRLabs has shown off how the hacks work. One, an action for Google Home, allows the user to ask for a random number to be generated. The action does exactly this, but the software then continues listening long after performing its initial command. Another, a seemingly innocuous horoscope skill for Alexa, manages to ignore a ‘stop’ command given by the user and to continue silently listening. Two more videos show how both speakers can be manipulated into giving fake error messages, only to pipe up a minute later with another fake message to ask for the user’s password.

In all cases, the team was able to exploit a flaw in both voice assistants which allowed them to keep listening for much longer than usual. They did this by feeding the assistants a series of characters which they can’t pronounce, which means that they don’t say anything, and yet continue to listen for further commands. Anything the user says is then automatically transcribed and sent directly to the hacker.

Third-party software for either smart speaker has to be vetted and approved by Google or Amazon before it can be used with their smart speakers. However, ZDNet notes that the companies don’t check updates to existing apps, which allowed the researchers to sneak malicious code into their software that’s then accessible to users.

In a statement provided to Ars Technica, Amazon said it has put new mitigations in place to prevent and detect skills from being able to do this kind of thing in the future. It said that it takes down skills whenever this kind of behavior is identified. Google also told Ars that it has review processes to detect this kind of behavior, and has removed the actions created by the security researchers. A spokesperson also confirmed to the publication that the company is conducting an internal review of all third-party actions, and has temporarily disabled some actions while this is taking place.

As ZDNet notes, this isn’t the first time we’ve seen Alexa or Google Home devices turned into phishing and eavesdropping tools by security researchers, but it’s worrying that new vulnerabilities continue to be discovered, especially as the security and privacy aspects of both devices are coming under increased scrutiny. For now, it’s best to treat third-party voice assistant software with the same caution that you should use with browser extensions, and only engage with software from companies that you trust enough to let into your home.