A team of researchers has revealed a potentially dangerous vulnerability in Amazon’s Alexa virtual assistant.

Skill-squatting, according to Ars Technica, is when a developer creates an application — called a “skill” by Amazon — whose invocation name sounds similar to that of a popular existing skill, so that users who ask the assistant, for example, for “cat facts” may instead get the developer-created “cat fax,” which could turn out to be a malicious application.

“Developers are already giving their applications names that are similar to those of popular applications. Some of these — such as ‘Fish Facts’ (a skill that returns random facts about fish, the aquatic vertebrates) and ‘Phish Facts’ (a skill that returns facts about the Vermont-based jam band) — are accidental, but others such as ‘Cat Fax’ (which mimics ‘Cat Facts’) are obviously intentional,” Ars Technica reported. “Thanks to the way Alexa handles requests for new ‘skills’ — the cloud applications that register with Amazon — it’s possible to create malicious skills that are named with homophones for existing legitimate applications… This sort of thing offers all kinds of potential for malicious developers.”

This “skill-squatting” of the Alexa system resembles typosquatting on the web, where mistyping an address takes the user to a fake — and potentially malicious — version of the site they intended to visit.
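The routing problem described above can be sketched in a few lines. This is a hypothetical simulation, not Amazon's actual skill-resolution API: it assumes the assistant dispatches on the literal text the speech recognizer produces, so an attacker who registers a homophone of a popular invocation name captures some of its traffic.

```python
# Hypothetical sketch of invocation-name squatting. The registry,
# skill names, and route() function are illustrative assumptions,
# not Amazon's real implementation.

skill_registry = {
    "cat facts": "legitimate Cat Facts skill",
    "cat fax": "malicious Cat Fax skill",  # homophone registered by an attacker
}

def route(transcription: str) -> str:
    # Dispatch on the literal transcribed text: whichever spelling the
    # speech recognizer happens to emit determines which skill runs.
    return skill_registry.get(transcription, "no skill found")

# The user says /kaet faeks/ aloud; the recognizer may emit either spelling.
print(route("cat facts"))  # -> legitimate Cat Facts skill
print(route("cat fax"))    # -> malicious Cat Fax skill
```

Because the two spellings are indistinguishable in speech, the user has no way to tell which skill actually answered.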

Ars Technica explained that malicious developers “could build skills that intercept requests for legitimate skills in order to drive user interactions that steal personal and financial information.”

Last year, it was also reported that Alexa devices could be controlled with commands played at frequencies outside the range of human hearing, and in May, a family’s Alexa device secretly recorded one of their conversations and sent it to a random person on their contacts list.

Amazon, which announced a version of Alexa for small children in April, has also partnered with Marriott hotels to put the virtual assistants in rooms.