To do this, speech recognition technology processes spoken words by separating them into individual soundbites. Using machine learning, it then cross-references your pronunciation with the pronunciation it expects. For example, if you’re practicing how to say “asterisk,” the speech recognition technology analyzes how you said the word and recognizes that the last soundbite was pronounced “rict” instead of “uhsk.” Based on this, you’ll receive feedback on how you can improve next time.
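The comparison described above can be sketched as a simple segment-by-segment check. This is only an illustrative toy, not the actual system: the sound-segment labels, the `pronunciation_feedback` function, and the feedback wording are all assumptions for the sake of the example.

```python
# Hypothetical sketch of soundbite-level pronunciation feedback.
# The segment labels and comparison logic are illustrative assumptions,
# not the real machine-learning model.

def pronunciation_feedback(expected, actual):
    """Compare expected vs. recognized sound segments and report
    the first mismatch, if any."""
    for i, (exp, act) in enumerate(zip(expected, actual)):
        if exp != act:
            return (f"Segment {i + 1}: you said '{act}' "
                    f"but it should sound like '{exp}'.")
    if len(expected) != len(actual):
        return "The word length didn't match the expected pronunciation."
    return "Great job! Your pronunciation matched."

# "asterisk" split into simplified sound segments
expected = ["as", "ter", "uhsk"]
actual = ["as", "ter", "rict"]  # learner's recognized attempt

print(pronunciation_feedback(expected, actual))
# → Segment 3: you said 'rict' but it should sound like 'uhsk'.
```

In practice the recognized audio would be mapped to phonemes by an acoustic model before any comparison like this; the sketch only shows the final matching step.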

Visuals help explain a word’s meaning

Visuals are a helpful way to explain what a word means, and can even help you retain it.

Starting to roll out today, when you look up the translation of a word or its definition, you’ll start seeing images that give you additional context. This can be useful with words that have multiple meanings, like “seal,” or words like “avocado” that aren’t commonly used in all languages or regions. Since not all words are easily described with an image, we’re starting with nouns and plan to expand from there. Images in the dictionary feature will be available in English today and across all language translations.



