The Google algorithm uses a mix of direct factors (things the Googlebot crawler can read directly, such as sub-headlines, word count, and images) and indirect factors (such as click-through rate) to determine how well a page should rank in the search engine.

There is abundant evidence suggesting that the March 2019 core algorithm update increased the weight of indirect factors that fall within the category of 'user experience'.

In layman's terms, this means Google is putting more emphasis on how humans react to content rather than on the raw data returned by the Googlebot crawler.

I believe this is a positive change, as it allows for more creative freedom and doesn't pigeonhole webmasters into following strict templates in order to rank on Google. The idea is that if you have an appealing title, a well-crafted page, and you deliver the information promised, you'll likely have a high-ranking website.

While I would love to tell users, "Do what you want, and as long as you make good content you'll rank", we're not quite there yet. In fact, we seem to be at a point where:

You must deliver content that follows Google's best practices AND provide a good user experience in order to rank.

Let's break down the "user experience" part a little...

While Google has access to an abundance of data (seriously, they know where I am currently typing this from), they have been secretive about which parts of that data they actually use for search.

1. One measure we know for sure is click-through rate (CTR). That is, how often users click on your website per impression on Google.

For example, if your website shows up as the #1 result on Google but most users click on the #2 result instead (perhaps because its title or meta description is more appealing), your ranking is likely to suffer, and you may quickly drop from the first position. Brand recognition can play a role here, as users are more likely to click on a site they like and recognize.
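To make the metric concrete, here is a minimal sketch of the CTR calculation described above. This is purely illustrative: Google does not publish how it computes or applies CTR, and the figures below are hypothetical.

```python
def click_through_rate(clicks: int, impressions: int) -> float:
    """CTR = clicks per impression on the search results page."""
    return clicks / impressions if impressions else 0.0

# Hypothetical data: the #2 result out-clicks the #1 result.
results = [
    {"position": 1, "clicks": 120, "impressions": 10_000},
    {"position": 2, "clicks": 450, "impressions": 10_000},
]

for r in results:
    r["ctr"] = click_through_rate(r["clicks"], r["impressions"])

# A #1 result being out-clicked by #2 suggests its title or meta
# description is less appealing than its ranking position implies.
underperforming = results[0]["ctr"] < results[1]["ctr"]
print(underperforming)  # True in this made-up example
```

In this sketch, the #1 result converts only 1.2% of impressions while #2 converts 4.5%, the kind of discrepancy the paragraph above describes.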

2. Another metric we can be confident Google uses is the "bounce back to Google". That is, how many users click the "back" button in their browser to return to Google after clicking on a search result.

For example, when user A clicks on the #1 result in Google's search, he is sent to a certain webpage. If user A stays on the website for only a few seconds and then clicks the back button to return to the Google search listings, this "bounce back to Google" signals that he did not find what he was looking for on the website and is returning to Google to try another result.

If user A then clicks on the #2 search result, finds the information he was seeking, and closes his browser, Google can assume that the informational query was satisfied by that webpage.

This is likely complicated by the fact that modern web users open up multiple tabs, perform multiple searches for similar keywords and have diverse objectives.
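The bounce-back signal described above could be sketched from a clickstream log roughly as follows. Again, this is an assumption-laden illustration, not Google's implementation: the `Visit` structure, the 10-second dwell threshold, and the log itself are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Visit:
    url: str
    dwell_seconds: float       # time on the page before leaving it
    returned_to_results: bool  # did the user hit "back" to Google?

# Hypothetical cutoff for a "few seconds" visit.
BOUNCE_THRESHOLD = 10.0

def is_bounce_back(visit: Visit) -> bool:
    """A short visit followed by a return to the results page
    suggests the page did not satisfy the query."""
    return visit.returned_to_results and visit.dwell_seconds < BOUNCE_THRESHOLD

# User A's session from the example above: he bounces off the #1
# result, then is satisfied by the #2 result and closes his browser.
session = [
    Visit("example-result-1.com", dwell_seconds=4.0, returned_to_results=True),
    Visit("example-result-2.com", dwell_seconds=180.0, returned_to_results=False),
]

print([is_bounce_back(v) for v in session])  # [True, False]
```

Even this toy version shows why the signal is noisy: a user who opens results in multiple tabs never "returns" to the listings in the way this logic assumes.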

While Google won't tell us exactly how they measure user experience (likely for privacy and spam-prevention reasons), we can safely assume that there are more metrics and accumulated data contributing to the overall quality score of a website.

To name a couple of examples: Chrome warns users that it collects anonymous browsing data, and Google PageSpeed Insights presents comparative load times for different websites.

The bottom line is that if you're producing enough traffic volume for Google to accurately measure user behavior, and that data tells Google that users are NOT enjoying your website, Google is likely to assign the entire site a lower score.