Google today announced what it considers to be “one of the biggest leaps forward in the history of Search.” By applying a neural network-based technique known as BERT, the search engine will get better at understanding questions that are asked in a natural, everyday manner.

The company introduced the change by acknowledging how Search’s language understanding capabilities can still fail with complex or conversational queries.

In fact, that’s one of the reasons why people often use “keyword-ese,” typing strings of words they think the search engine will understand, but that aren’t how they’d naturally ask a question.

To allow people to ask questions naturally, Google is applying a “neural network-based technique for natural language processing (NLP) pre-training called Bidirectional Encoder Representations from Transformers.”

Also known as BERT, this model analyzes words in relation to the entire search query, rather than one word after another. Search will get better at understanding nuance and subtlety, and at working out what the user is really asking; in essence, it grasps the full context of a query. These improvements will surface more relevant links and Featured Snippets.
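Google’s announcement doesn’t include any code, but the “entire search term” behavior it describes comes from the Transformer’s self-attention mechanism, in which every word scores every other word in the query, to its left and its right. As a rough, hypothetical sketch (toy two-dimensional vectors standing in for real learned embeddings, not actual BERT weights):

```python
import math

def softmax(xs):
    """Turn raw scores into a probability distribution."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(embeddings):
    """Each token attends to EVERY token in the query (bidirectional),
    unlike a left-to-right model that only sees preceding words."""
    d = len(embeddings[0])
    out = []
    for q in embeddings:
        # Score this token against every token in the query, both directions.
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in embeddings]
        weights = softmax(scores)
        # The new representation is a weighted blend of ALL tokens.
        out.append([sum(w * v[i] for w, v in zip(weights, embeddings))
                    for i in range(d)])
    return out

# Toy embeddings for a three-word query (hypothetical values).
tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
contextual = self_attention(tokens)
```

Because each output vector mixes in information from every position, a small word like a preposition can shift the representation of the whole query, which is exactly the class of query Google says BERT improves.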

In quantifiable terms, BERT will help Google better understand one in ten queries. It’s currently only available for U.S. English, but will expand to more countries and languages over time. Behind the scenes, the models are so computationally complex that Google is leveraging its machine learning-focused Cloud TPUs to serve Search.

Particularly for longer, more conversational queries, or searches where prepositions like “for” and “to” matter a lot to the meaning, Search will be able to understand the context of the words in your query. You can search in a way that feels natural for you.

Google provided some example queries that improved with BERT.
