Amazon has created and open-sourced a dataset used to train an AI model to identify names across languages and scripts, so that Alexa can, for example, understand the name of a Japanese artist or person when it is pronounced by an English speaker, or vice versa.

Called a multilingual named-entity transliteration system, the tool recognizes a name across different languages. It is based on an AI model trained on a dataset Amazon built from Wikidata, the structured knowledge base used to populate Wikipedia. In all, the dataset contains nearly 400,000 names in languages including Arabic, English, Hebrew, Japanese Katakana, and Russian.
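To give a sense of what such a dataset pairs together, here is a minimal sketch of name records spanning scripts. The entries and field layout below are illustrative assumptions for this article, not actual records or the actual format of Amazon's Wikidata-derived dataset.

```python
# Hypothetical records: (source script, name, target script, transliteration).
# These example pairings are illustrative only, not real dataset entries.
PAIRS = [
    ("en", "Tokyo",  "ja_katakana", "トウキョウ"),
    ("en", "Moscow", "ru",          "Москва"),
    ("en", "David",  "he",          "דוד"),
]

def transliterations(name, target_script):
    """Return known transliterations of `name` into `target_script`."""
    return [t for src, n, tgt, t in PAIRS
            if n == name and tgt == target_script]

print(transliterations("Moscow", "ru"))  # prints ['Москва']
```

A model trained on many such pairs can learn character-level correspondences between scripts rather than relying on a fixed lookup table like the one above.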

The results of the study have been published on arXiv and will be shared at the International Conference on Computational Linguistics, being held later this month in Santa Fe, New Mexico.

The performance of Amazon’s AI in recognizing names varies greatly by language pair. For example, English-to-Russian transliteration was easier than English-to-Hebrew because, though distinct, the English and Russian alphabets resemble each other more closely than the English and Hebrew alphabets do, according to an Amazon blog post.

Advances in Amazon’s language understanding are being touted at the same time as Amazon announced plans to bring Echo smart speakers to Mexico, the first Spanish-speaking Echo speakers for Latin America. The Alexa Skills Kit and Alexa Voice Service, for bringing Alexa into third-party devices in Mexico, were also announced today.

As competition over smart speaker sales and AI assistant adoption heats up in international markets, the shortcomings of each AI assistant are becoming more apparent. While Alexa currently speaks six languages, Siri speaks more than 20, and Google said earlier this year that it plans to make its Assistant available in more than 30 languages by the end of the year.

To improve Alexa’s understanding of new languages, Amazon engineers last year created Cleo, a gamified Alexa skill made to gather voice samples from countries around the world.