It used to be the case when I traveled abroad that I would take a little pocket dictionary that provided translations for commonly used phrases and words. If I wanted to construct a sentence, I would thumb through the dictionary for five minutes to develop a clunky expression with unconjugated verbs and my best approximation of the correct noun. Today I take out my phone and type the phrase into Google Translate, which returns a translation as fast as my Internet connection can provide it, in any of 90 languages.

Machine translation is leaps and bounds faster and more effective than my old dictionary method, but it still falls short in accuracy, functionality and delivery. That won't be the case for long. A decade from now, I predict, everyone reading this article will be able to converse in dozens of foreign languages, eliminating the very concept of a language barrier.

Today's translation tools were developed by computing more than a billion translations a day for over 200 million people. With the exponential growth in data, that many translations will soon be made in an afternoon, then in an hour. The machines will grow exponentially more accurate and be able to parse the smallest detail. Whenever a machine translation gets something wrong, users can flag the error, and that data, too, will be incorporated into future attempts. It is just a matter of more data, more computing power and better software. These will come with the passage of time and will fill in the communication gaps in areas such as pronunciation and interpreting a spoken response.

The most interesting innovations will come with hardware developed for the human interface. In 10 years, a small earpiece will whisper what is being said to you in your native language almost as soon as it is spoken in the foreign one. The lag time will be the speed of sound.

Nor will the voice in your ear be a computer voice, à la Siri. Thanks to advances in bioacoustic engineering that measure the frequency, wavelength, sound intensity and other properties of a voice, the cloud software connected to your earpiece will re-create the voice of the speaker, but speaking your native language. When you respond, your words will be translated into the language of your counterpart and delivered either through his or her own earpiece or amplified by a speaker on your phone, watch or whatever the personal device of 2025 is.