# Google Webmaster Hangouts Notes – 10 May 2019

Welcome to MarketingSyrup! This post is part of my Google Webmaster Hangouts Notes. I cover them regularly to save you time.

Here are the notes from May 10th. This is Part 2; you can find the 1st part here. The timestamps of the answers are in brackets.

HowTo and FAQ reports are coming to GSC (3:00)

But you need to have the markup to see these reports in your Google Search Console.
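As a reference point, this is roughly what the FAQ markup looks like. The snippet below is a minimal sketch of schema.org's FAQPage type in JSON-LD; the question and answer text are placeholders, not anything from the hangout.

```html
<!-- Minimal FAQPage structured data (schema.org); values are placeholders -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "What is crawl budget?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "The number of URLs Googlebot will fetch from a website in a day."
    }
  }]
}
</script>
```

Pages carrying valid markup like this are what would surface in the new GSC reports.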

There are new features coming to Google Search Console (3:30)

The opt-in for large images

If you have large images on your website and want them to be shown in the search, you'll be able to use this feature.

Duplex on the web settings

This is a way to streamline your checkout flow so that people will be able to buy something using Google Assistant. The setting there is mostly a test account: you'll be able to specify a username and password for that test account, and the machine learning system will use it to learn your checkout flow.

Evergreen Googlebot is live but some tools still need to be updated (7:16)

Google has recently announced the evergreen Googlebot, which will be better at rendering content, especially JavaScript-based content.

This new Googlebot is 100% live (though it’s still using the old name – Chrome 41 in the logs). But testing tools haven’t been updated yet, e.g. mobile-friendly test, URL inspection tool, structured data testing tool.


There shouldn’t be any fluctuations in rankings due to the switch to new Googlebot (8:05)

Those websites that Google could index before shouldn’t see any changes. The update is aimed at the websites using modern features and content which could have been missed by the previous Googlebot.

Googlebot doesn’t use HTTP/2 for crawling and indexing (9:20)

HTTP/2 makes a lot of sense for browsers when you have multiple streams of content that need to be rendered. But it’s not really needed for Google as it caches a lot of the content and uses it when needed.

If you’re using JS, make sure to also provide static content if you need your pages to be indexed quickly (10:50)

When it comes to indexing of JavaScript, Google first picks up the HTML content and can index it right away. But rendering takes a little bit longer (up to a few days). That’s why websites that need their content rendered as quickly as possible (for example, news publishers), should have some kind of static content. This guarantees that its indexing won’t be delayed by rendering.
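The difference can be sketched in plain HTML. In this hypothetical example, the headline is served in the static response and can be indexed on the first pass, while the comments are fetched client-side and only become visible to Google after rendering (the `/api/comments` endpoint and `render` function are made up for illustration):

```html
<!-- Served in the static HTML: indexable immediately, before rendering -->
<article>
  <h1>Headline available without JavaScript</h1>
  <p>Key story text included in the initial HTML response.</p>
</article>

<!-- Injected client-side: waits in the rendering queue, possibly for days -->
<div id="comments"></div>
<script>
  // Hypothetical client-side fetch; Google only sees the result after rendering
  fetch('/api/comments')
    .then(response => response.json())
    .then(render);
</script>
```

For time-sensitive content, the rule of thumb is to put everything that must be indexed quickly into the first, static part.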

It’s better to have a single page for your product instead of many pages which are variations of the same item (12:40)

Make your product pages stand on their own: each should represent a unique product, not a variation of the same item. So if you have a product in different sizes and/or colors, it might make sense to have a single page for it.

You can use ‘noindex’ to handle duplicate/thin content (16:43)

Noindexing a page is a good way to handle duplicate or thin content. It works particularly well when you still want users to be able to access the page but don't want it to appear in Google's search results.
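In practice this is a one-line robots meta tag in the page's `<head>` (for non-HTML files, the equivalent is the `X-Robots-Tag` HTTP header):

```html
<!-- Users can still open the page, but Google drops it from the index -->
<meta name="robots" content="noindex">
```

Note that Google has to be able to crawl the page to see this tag, so the URL should not also be blocked in robots.txt.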

Google recognizes and ignores spammy websites copying content from legitimate websites or linking to them (18:13)

There are many situations where spammers copy content from high-quality websites, spin it, and link back to those websites. Google is pretty good at recognizing this type of spam and ignores it, which means that websites whose content has been copied, or which have been linked to from spammy websites, should not worry about it.

Crawl budget is not about crawl depth but about the volume of requests Google can make (29:25)

For Google, crawl budget means how many URLs it will fetch from a website in a day. Crawl budget might be an issue for really large websites, while small and medium ones are safe here.

Usually, the hard part with crawl budget is not finding the limit but balancing between indexing the new content and updating index of the existing content.

Reducing the size of your page won’t increase crawl budget (31:27)

What would help, though, is a quick server response time. Otherwise, Google will slow down crawling and get to fewer pages than it potentially could.

3rd party resources don't influence the crawl budget of your website (32:49)

Google looks at content on a server level. So if you host content on a CDN, for example, crawling that content counts against the CDN's crawl budget, not yours.