But in the mid-1990s, researchers began favoring a so-called statistical approach. They found that if they fed the computer thousands or millions of passages and their human-generated translations, it could learn to make accurate guesses about how to translate new texts.
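To make that idea concrete: a statistical system treats translation as a counting and probability problem over paired texts. The toy Python sketch below uses made-up example sentences and the classic IBM Model 1 estimation procedure, not anything Google has disclosed about its own engine, to show how word-translation probabilities can be learned from nothing more than aligned sentence pairs.

```python
# A deliberately tiny sketch of the statistical idea (IBM Model 1
# word-translation probabilities estimated with EM). The "corpus"
# below is invented for illustration only.
from collections import defaultdict
from itertools import product

# A miniature "parallel corpus": English sentences with human translations.
corpus = [
    ("the house".split(), "la casa".split()),
    ("the book".split(),  "el libro".split()),
    ("a house".split(),   "una casa".split()),
]

english_vocab = {e for es, _ in corpus for e in es}
foreign_vocab = {f for _, fs in corpus for f in fs}

# Start with uniform translation probabilities t(f | e).
t = {(f, e): 1.0 / len(foreign_vocab)
     for f, e in product(foreign_vocab, english_vocab)}

# Expectation-Maximization: collect fractional co-occurrence counts,
# then re-estimate the translation probabilities from those counts.
for _ in range(10):
    count = defaultdict(float)   # expected count of each (f, e) pairing
    total = defaultdict(float)   # expected count of each English word e
    for es, fs in corpus:
        for f in fs:
            norm = sum(t[(f, e)] for e in es)
            for e in es:
                frac = t[(f, e)] / norm
                count[(f, e)] += frac
                total[e] += frac
    t = {(f, e): c / total[e] for (f, e), c in count.items()}

# After training, "casa" emerges as the most probable translation of "house".
print(max(foreign_vocab, key=lambda f: t.get((f, "house"), 0.0)))  # casa
```

Real systems apply the same principle at vastly larger scale, layering phrase-level statistics and models of word order on top of these word-level counts.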

It turns out that this technique, which requires huge amounts of data and lots of computing horsepower, is right up Google’s alley.

“Our infrastructure is very well-suited to this,” Vic Gundotra, a vice president for engineering at Google, said. “We can take approaches that others can’t even dream of.”

Automated translation systems are far from perfect, and even Google’s will not put human translators out of a job anytime soon. Experts say it is exceedingly difficult for a computer to break a sentence into parts, then translate and reassemble them.

But Google’s service is good enough to convey the essence of a news article, and it has become a quick source of translations for millions of people. “If you need a rough-and-ready translation, it’s the place to go,” said Philip Resnik, a machine translation expert and associate professor of linguistics at the University of Maryland, College Park.

Like its rivals in the field, most notably Microsoft and I.B.M., Google has fed its translation engine with transcripts of United Nations proceedings, which are translated by humans into six languages, and those of the European Parliament, which are translated into 23. This raw material is used to train systems for the most common languages.

But Google has scoured the text of the Web, as well as data from its book scanning project and other sources, to move beyond those languages. For more obscure languages, it has released a “tool kit” that helps users with translations and then adds those texts to its database.