Google recently published a new patent application that was originally filed by Google back on August 10, 2018. The patent is titled "Website Representation Vector to Generate Search Results and Classify Website." Bill Slawski was the first to cover this new patent and did an outstanding job of dumbing it down for us.

The abstract. The abstract, although technical, explains how Google can sort a website into one of several classifications. Here is the abstract:

“Methods, systems, and apparatus, including computer programs encoded on computer storage media, for using website representations to generate, store, or both, search results. One of the methods includes receiving data representing each website in a first plurality of websites associated with a first knowledge domain of a plurality of knowledge domains and having a first classification; receiving data representing each website in a second plurality of websites associated with the first knowledge domain and having a second classification; generating a first composite-representation of the first plurality of websites; generating a second composite-representation of the second plurality of websites; receiving a representation of a third website; determining a first difference measure between the first composite-representation and the representation; determining a second difference measure between the second composite-representation and the representation; and based on the first difference measure and the second difference measure, classifying the third website.”

What does this mean? Well, Slawski says, “The patent application uses Neural Networks to understand patterns and features behind websites to classify those sites. This website classification system refers to “a composite-representation, e.g., vector, for a website classification within a particular knowledge domain.” Those knowledge domains can be topics such as health, finance, and others. Sites classified in specific knowledge domains can have an advantage in using that classification to return search results as they respond to receiving a search query.”
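To make the abstract's method a bit more concrete, here is a minimal sketch of that classification flow in Python. The patent does not specify how the composite representation or the difference measure is computed, so this sketch assumes mean-pooled vectors for the composite and cosine distance as the difference measure; the vectors, class labels and function names are all illustrative, not from the patent:

```python
import math

def cosine_distance(a, b):
    # One plausible "difference measure" between two website vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

def composite_representation(site_vectors):
    # Assumed: average the per-site vectors into one composite per classification.
    n = len(site_vectors)
    return [sum(dims) / n for dims in zip(*site_vectors)]

def classify(site_vector, class_composites):
    # Assign the new site to the classification whose composite it is closest to.
    return min(class_composites,
               key=lambda label: cosine_distance(site_vector, class_composites[label]))

# Toy vectors standing in for learned website representations in one knowledge domain.
expert_sites = [[0.9, 0.1], [0.8, 0.2]]
layperson_sites = [[0.1, 0.9], [0.2, 0.8]]
composites = {
    "expert": composite_representation(expert_sites),
    "layperson": composite_representation(layperson_sites),
}
print(classify([0.85, 0.15], composites))  # → expert
```

In other words, a third website's vector is compared against each classification's composite vector, and the smaller difference measure wins, which mirrors the "first difference measure" and "second difference measure" steps in the abstract.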

In short, it seems Google can determine the category of a site and thus understand whether the site needs to carry a level of authority. Is the site written by experts, and does it have authority? The patent reads, “For instance, the website classifications may include a first category of websites authored by experts in the knowledge domain, e.g., doctors, a second category of websites authored by apprentices in the knowledge domain, e.g., medical students, and a third category of websites authored by laypersons in the knowledge domain.”

E-A-T. Yes, this brings in the concept of E-A-T, expertise, authoritativeness and trustworthiness, that we get from Google’s search quality raters guidelines. But this is a patent application that technically could be used to create ranking signals and algorithms.

Timing of this. The timing of this patent application is interesting as well. It was filed in August 2018, shortly after the August 1st core update was released, which I nicknamed the Medic Update because it seemed, to me, to be highly slanted toward health and medical sites. There are over a dozen references in this patent application to health, medical, doctors and related terms.

More. Remember, a year ago we covered a white paper released by Google that said Google may use different ranking weights for YMYL-type queries? This patent aligns with how Google may do this. If Google can classify different sites as being part of the YMYL, your money or your life, areas, then it can say expertise, authoritativeness and trustworthiness are more important for them. And this patent application describes a way to do that classification.

Why we care. SEOs love to debate, and this recently published patent may shed more light on the theories around the August 1st update, if Google can measure E-A-T or classify YMYL sites. But keep in mind, Google has said numerous times that just because it holds a patent doesn’t mean the patent is in use.