# Google Webmaster Hangouts Notes – 2 October 2019

Welcome to MarketingSyrup! And first of all, I want to announce something I’m really excited about! This is the SEO Challenge for those who want to learn SEO. Check it out!

Now when you’re back, here are the notes from the recent Google Webmaster Hangouts session. I cover those regularly to save you time 🙂

## Put links in the content to provide Google with context, as opposed to having a separate block of links (1:24)

When Google finds a link within the content of a page, it's much easier for it to understand what the link is about because there's surrounding context.

In comparison, if all of your links sit in the footnotes (a block of links grouped together without any additional text), it's much harder for Google to understand what those links are about.

The main point here is to give Google as much context as possible when you link to other pages. So it's better to have links in the text than in a separate block that lists only links.
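The difference can be illustrated with a small HTML sketch (the URLs and anchor text here are made up for illustration):

```html
<!-- Contextual link: the surrounding sentence tells Google
     what the target page is about -->
<p>
  Our <a href="/technical-seo-checklist">technical SEO checklist</a>
  covers crawling, indexing and rendering issues.
</p>

<!-- Harder for Google to interpret: a bare block of links
     with no surrounding text -->
<div class="related-links">
  <a href="/page-1">Read more</a>
  <a href="/page-2">Click here</a>
</div>
```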

## The max-snippet robots tag applies to all normal search results (3:54)

The new robots meta tag – max-snippet – tells Google the maximum length, in characters, of a normal web snippet, and this applies regardless of where the snippet is shown.

This means max-snippet applies to both normal search results and featured snippets. Note that if you set a very low character limit (e.g. 1 character), your page might not be chosen for a featured snippet, as there's not enough text to display.

Also, max-snippet won't be applied if you're using structured data to trigger a rich result.
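For reference, the tag goes in the page's `<head>`; the 160-character limit below is just an example value:

```html
<!-- Limit text snippets for this page to at most 160 characters -->
<meta name="robots" content="max-snippet:160">

<!-- 0 means no text snippet at all; -1 means no limit -->
<meta name="robots" content="max-snippet:-1">
```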

## There are a few reasons why Google can rewrite your title tags (7:52)

The length of the title tag displayed on Google can change over time.

Moreover, Google can automatically rewrite your title tags. This usually happens in 2 situations:

1. Google tries to make the title tag more relevant to the query by rewriting it. This happens algorithmically.
2. The title tags look spammy.

You don't need to constantly change all your titles to make sure Google won't rewrite or truncate them.

What you can do is let your titles settle down, then check for which pages and queries your titles get rewritten, and change those titles to improve them.

## There are no guidelines around the amount of text per page (15:35)

Google doesn't have any word limit or guidelines regarding how many words per page you need to rank.

There's no magic number, so don't aim for length – aim for relevance.

Kristina’s note: I’ve recently had an interesting discussion on Twitter about that. And here’s the main point I made:

> The point is: you can rank #1 with 300 words of good content or #101 with 3000 of bs content.
>
> Now it's /theend 😀
>
> — Kristina Azarenko (@azarchick) September 25, 2019

## Content translated by a person is treated as unique content (29:30)

If you translate content from one language to another, it's a completely new version of the content, not a variation of the previous one.

The only tricky thing here is that adapting, for example, a UK version of the content into a US version is not a translation, as both are in English.

Kristina's note: I asked John this question as I've seen a lot of confusion in the community. Many people asked whether they needed to noindex the translated content, add canonical tags to it, or do something else.

The thing is, when you manually translate text from one language to another, you basically create something new. Even 2 different translators would produce 2 different texts from the same original (I'm a translator by education haha).

So if you translate a text from one language to another, don't worry about duplicate or spun content. Just be nice and add a link back to the original.
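If the translation lives on your own site, the standard way to tell Google that two URLs are language versions of each other is hreflang, not noindex or canonical tags. A minimal sketch (the URLs are made up; each language version carries the same full set of annotations, including one pointing to itself):

```html
<!-- In the <head> of the English page -->
<link rel="alternate" hreflang="en" href="https://example.com/en/guide/">
<link rel="alternate" hreflang="fr" href="https://example.com/fr/guide/">

<!-- The French translation repeats the same two annotations -->
```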

## Google treats different types of markup separately, so if one of them is invalid, the others will still be shown (31:15)

Google has moved to a more granular approach. If you have multiple types of structured data (e.g. Article for posts, Product for products, Organization, etc.) and one of them has errors or is spammy, Google will ignore that one and display the types that are valid.

It's a little different, though, if you have multiple products with Product markup and only some of those products have invalid or spammy structured data. In this case, Google might ignore the Product markup for all of the product pages.
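The granular approach can be pictured as two independent JSON-LD blocks on the same page, using schema.org types like the ones mentioned above (the values are illustrative):

```html
<!-- Article markup: evaluated on its own -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Google Webmaster Hangouts Notes"
}
</script>

<!-- Organization markup: still eligible for rich results
     even if the Article block above has errors -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "MarketingSyrup"
}
</script>
```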

## The Google Discover feed is separate from normal organic results (43:28)

Google Discover is a completely organic feature. But how Google decides which content to show in the Discover feed is different from normal search results, as there's no query.

That’s it for today! Need more of these? Subscribe to the newsletter and get the notes delivered to your inbox!