Technical optimization lays the foundation of your overall SEO strategy, supporting both visibility and ranking. Done properly, it helps you make the most of your crawl budget and ensures that Google can discover and index your important content. It also provides a good user experience for your website visitors, which search engines value as well.

In the following paragraphs, I’ve collected and selected 35 of the most valuable and important answers from John Mueller, Senior Webmaster Trends Analyst at Google, to technical questions asked in the regular Webmaster Central Hangout livestreams. These are intended to help you correct technical mistakes, or to confirm your suspicions about best practices for technical optimization.

1. 404 errors do not affect the ranking of your other pages

At one point, it was rumored that 404 errors had a huge impact on the ranking of pages throughout the site and were a sign of a negative reputation in general. John confirmed that this is not the case.

In one of my recent conversations with John, he even said that the presence of these pages is completely normal, and that their number does not negatively affect a website’s rankings.



2. Google assumes two domains are the same if they use the same URL parameters and shared hosting

If you have domains on shared hosting, try to use different parameters in their URLs. Only then will Google treat them differently and differentiate between them.



3. The final URL of a 301 redirect may not be indexed if your signals point elsewhere

To ensure indexing for pages that are the targets of a 301 redirect, make sure your canonical tags, internal links, and hreflang tags (if used) all point to the final page, rather than to the page that has been redirected. Otherwise, you send Google conflicting signals, which can significantly confuse the search engine and prevent it from indexing the right pages.
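
As a sketch (the example.com URLs and page names are placeholders): if /old-page now 301-redirects to /new-page, every on-page signal should reference the final URL:

```html
<!-- /old-page 301-redirects to /new-page, so all signals point at /new-page -->
<link rel="canonical" href="https://www.example.com/new-page" />
<link rel="alternate" hreflang="en" href="https://www.example.com/new-page" />
<link rel="alternate" hreflang="de" href="https://www.example.com/de/new-page" />

<!-- Internal links should also use the final URL, not the redirected one -->
<a href="https://www.example.com/new-page">New page</a>
```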



4. Google uses load time and render time to assess page speed

In John Mueller’s words, TTFB (time-to-first-byte) optimization alone is not an indicator of good user experience. For this reason, Google looks at the overall picture of loading a site rather than at separate metrics for intermediate stages. The loading and rendering time of the page are the particularly relevant factors for search engines; the time it takes to crawl a page is not taken into account.



5. Google does not use CSS images

When using CSS images, such as backgrounds, note that Google does not take them into account. In order for your image to be shown in Image Search, it must be in an <img> tag with a src="https://www.example.com/my_image_URL" attribute.
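
To illustrate the difference (the image URL is the placeholder from above):

```html
<!-- Indexable for Image Search: a real <img> tag with a src attribute -->
<img src="https://www.example.com/my_image_URL" alt="Description of the image" />

<!-- Not indexable for Image Search: the same image as a CSS background -->
<div style="background-image: url('https://www.example.com/my_image_URL')"></div>
```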



6. It does not matter where your site’s internal links are positioned

John once again confirmed that the presence of internal links and their relevance to the user are much more important than where they are positioned on the page. It is very important to create efficient website navigation: Googlebot needs it to crawl your site properly.

7. Be careful when changing your hosting provider – Google temporarily reduces crawl rate

Sometimes we have to change our hosting provider for a number of reasons: poor support, slow servers, or discontinued services. When changing providers, keep in mind that Google will temporarily reduce the crawl rate of your site because it does not know what load the new server can withstand.



8. If you are using Javascript Frameworks, test them with the Rich Snippets feature in Google Search Console

Javascript frameworks are a very trendy way to build your website in an interactive and stylish way. However, they are quite often the cause of many SEO issues, especially when it comes to crawling and indexing. Because of this, John advises us to use the GSC rich snippets feature to make sure that a website is properly crawled and rendered.



9. HTTPS does not directly affect ranking, but can be used as a tiebreaker

When competing pages are otherwise equal across the ranking factors Google evaluates, HTTPS can act as a tiebreaker: if, for example, the other sites do not have an SSL certificate installed, Google may give your site the edge.



10. Structured Data is useful and important for your site, but it requires clean, error-free HTML

Take care to validate the HTML code of your site and eliminate errors as much as possible: otherwise your Schema.org implementation may be “broken” and fail to validate in Google’s Structured Data Testing Tool.
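
As a hypothetical example (the article values are placeholders): JSON-LD markup lives inside a single script tag, so it is less likely to be broken by errors in the surrounding HTML than markup spread across many attributes:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Example article title",
  "datePublished": "2019-01-01"
}
</script>
```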



11. The quality of a site is viewed by Google on the basis of indexed pages only

Quite often, questions are asked about how pages that are blocked from being indexed or crawled can affect the overall quality and evaluation of a site. John has confirmed that Google assesses only the pages indexed by the search engine to determine the quality of the content on your site.



12. A status of 410 deletes the page from the index faster than a 404 status

In the medium to long term, a 404 is almost the same as a 410, as in both cases the pages will be removed from the index. Keep in mind that a page with a 410 status may be dropped from the index more quickly, usually within days.
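
Assuming an Apache server (the page path is a placeholder), a 410 for a permanently removed page can be returned with a one-line directive:

```apache
# Serve "410 Gone" for a page that has been permanently removed
Redirect gone /discontinued-product.html

# Unknown URLs return 404 by default; no configuration is needed for that
```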



13. Adding noindex via Javascript is not recommended

If you add noindex via Javascript, it will be considered and applied with a delay, because Google only picks it up during its second wave of rendering and indexing: the Javascript rendering may occur at a later date, and the page may be indexed in the interim. To reliably keep a page out of the index from the outset, use static HTML to prevent indexing.
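
A minimal sketch of the static approach: the robots meta tag sits in the HTML served by the server, so it is seen on the very first crawl, before any Javascript runs:

```html
<!-- In the static HTML <head>, visible to Google before rendering -->
<meta name="robots" content="noindex" />
```

The same effect can also be achieved server-side with an `X-Robots-Tag: noindex` HTTP response header.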



14. Javascript and HTML should not send mixed signals to Google

Be careful about the signals you send to Google in Javascript and HTML. If you set a link to nofollow in Javascript and follow in HTML, that link will be followed during the first indexing of the page, because Google only captures Javascript signals during the second wave of rendering and indexing. The same applies if you use Javascript to block a page from being indexed, or to keep its links from passing weight.

The same goes for duplicate signals: do not give the same signals in both HTML and Javascript. For example, if you use Javascript to modify canonical tags or robots meta tags, do not duplicate these tags in HTML.
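
To illustrate the mixed-signal problem (URLs and the element id are hypothetical):

```html
<!-- Consistent: nofollow is in the static HTML, so both the first (HTML)
     and second (rendered) indexing waves see the same signal -->
<a href="https://www.example.com/partner" rel="nofollow">Partner</a>

<!-- Risky: the rel attribute is only added later via Javascript, so the
     link is treated as followed during the first indexing wave -->
<a href="https://www.example.com/partner" id="partner-link">Partner</a>
<script>
  document.getElementById('partner-link').rel = 'nofollow';
</script>
```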



15. If you change the content of a page more often, Google will also crawl it at a higher frequency

Google attempts to capture how often a page’s content changes, and modifies the GoogleBot crawl frequency accordingly. If this page changes its content very often and regularly, Google will also increase its crawl rate.



16. Google does not index content from URLs that contain a hash (#)

Be careful with the exact URL location of your content. If it only appears behind a hash in a URL, such as http://www.example.com/office.html#mycontent, it will not be indexed by Google.
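
A quick sketch using the example URL above (the alternative path is hypothetical):

```html
<!-- Not indexed as separate content: the fragment after # is ignored -->
<a href="http://www.example.com/office.html#mycontent">My content</a>

<!-- Indexable: give the content its own URL instead -->
<a href="http://www.example.com/office/mycontent.html">My content</a>
```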



17. Errors of type 4xx do not result in a loss of your crawl budget

When you see Googlebot crawling pages of this type, it does not necessarily mean that your crawl budget for the entire site is being wasted. Google re-crawls these pages to make sure there is nothing to index, and this sends positive signals to crawl more pages.



18. If you want to index your site quickly, a sitemap and the crawl rate are key to achieving this goal

Each server has a different capacity. If you are familiar with your server’s technical specifications, or if you have confirmed with your hosting provider’s support that the server has more capacity, you can contact Google’s webmaster help center and ask for a higher crawl rate.

Do not forget to submit a sitemap with your new URLs and their last modified dates to Google Search Console. This will help Google crawl and index your pages as quickly as possible.
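
A minimal sketch of such a sitemap (the URL and date are placeholders); the lastmod element is what tells Google when a page last changed:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/new-page</loc>
    <lastmod>2019-03-01</lastmod>
  </url>
</urlset>
```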



19. Crawl frequency has nothing to do with ranking

The phrase “crawl rate” is often the subject of speculation, and many people have come to associate it with higher rankings. According to John, a high crawl rate does not mean a high ranking, and vice versa. Crawling and ranking are different processes.



20. Shorter URLs are not given higher priority by Google

While it is claimed that shorter URLs are preferred, Google does not treat them preferentially. To a large extent, URL length plays a role in user experience rather than in how Google treats addresses.



21. Googlebot correctly recognizes and handles separate navigation for desktop and mobile (responsive)

If you decide to create one navigation for your site’s desktop version and another for the mobile responsive version, both coded in HTML, this will not cause any issues with Google.



22. Google successfully differentiates similar domains, even with similar link profiles

If we have two similar domains, Google will be able to differentiate them even if they have very similar link profiles. This greatly relieves webmasters and business owners whose competitors aggressively copy their domain names and link strategies. Despite this practice, competing sites will not become more preferred by Google.



23. Confirmed: Anchor texts help Google to understand the topic of a page

Anchor texts help Google to define the theme of a page in a broader sense. This, of course, does not mean that you have to overuse certain anchor texts for higher ranking.



24. Google does not use IP addresses for geo-targeting and local SEO

In the past, Google used the IP address of the hosting server. Nowadays, geo-targeting and local SEO are most influenced by the ccTLD, generic TLD, hreflang, Google My Business settings and Google Search Console.



25. HSTS is not used as a ranking signal

Google does not use HSTS as a ranking signal, but John advises implementing it once site ranking fluctuations due to migration to HTTPS have stabilized and when the migration is fully successful.
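
Assuming an Apache server with mod_headers enabled, a typical HSTS header looks like this (the max-age value, one year in seconds, is a common choice rather than a requirement):

```apache
# Send HSTS only after the HTTPS migration has fully stabilized
Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"
```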



26. Click depth determines the importance of a page much more than the structure of its URL.

In other words, what matters to Google when determining a page’s importance is how many clicks away it is from the site’s homepage. This matters more than the level or the structure of the URL.



27. Make sure scripts placed in <head> do not close it prematurely

In John’s words, there are sites that place scripts in the <head> that close it prematurely, or elements that should not be part of the <head> of a page’s HTML. In these cases, Google won’t pay attention to the hreflang tag, for example, because it considers that the <head> is already closed. John recommends using the “View Code” function of Google’s Rich Results tool to check for this issue.
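
An illustrative (hypothetical) example of the problem:

```html
<head>
  <title>Example</title>
  <!-- Invalid inside <head>: parsers implicitly close the head here... -->
  <div class="tracking-widget"></div>
  <!-- ...so this hreflang is treated as part of <body> and ignored -->
  <link rel="alternate" hreflang="de" href="https://www.example.com/de/" />
</head>
```

Moving such elements into the <body>, and keeping hreflang and other link tags near the top of the <head>, avoids the issue.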



28. Googlebot successfully recognizes faceted navigation and may delay crawling

Googlebot manages to recognize URL structures, including faceted navigation. When it detects where the main content is and where URLs deviate from it, it will slow the crawl. This can also be greatly influenced by the parameter settings in GSC. John also emphasizes that defining parameters in the console is a much stronger signal than canonicalization.



29. After moving your site to mobile-first indexing, it is possible that the cached pages of your site on Google will return 404

In John’s words, this is perfectly normal and should not be an issue for crawling and indexing. The main reason is that when a site switches to mobile-first indexing, its pages do not have a cached version on the search engine.



30. Average response times greater than 1,000ms may limit site crawling.

John recommends that the average response time be around 100ms. Otherwise, Google won’t crawl as many pages as it otherwise would have.



31. Redirect URLs can appear as soft 404 if many URLs are redirected to one.

URLs that redirect to other pages should not appear as soft 404 errors unless many pages have been deleted and redirected to a single page.



32. Incorrect or incomplete HTTPS migrations cause bigger ranking fluctuations

If you have migrated from HTTP to HTTPS and did not redirect all HTTP addresses to HTTPS with clear 301 redirects, or if you deleted a lot of pages or blocked bots using robots.txt, you should expect higher fluctuations in your site’s ranking.

Don’t forget to use 301 redirects: any other type, such as a 302 or 303, will make Google reprocess your URLs.
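
Assuming an Apache server with mod_rewrite enabled, a blanket HTTP-to-HTTPS 301 redirect can be sketched like this:

```apache
# Force a permanent (301) redirect from every HTTP URL to its HTTPS equivalent
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
```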



33. Lazy-loading images can be placed in the page code using noscript tags and Structured Data

For this type of image loading, it is important for Google to be able to see the image’s src. This can be provided via a noscript tag or via Structured Data. That way, even if the image cannot be rendered properly, whether partially or entirely, Google will still associate it with the page.
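
A sketch of the noscript approach (the data-src attribute and lazyload class are placeholders for whatever lazy-loading script you use):

```html
<!-- Lazy-loaded image with a <noscript> fallback, so Google can always
     see the image's src even if the script never runs -->
<img data-src="https://www.example.com/my_image_URL" class="lazyload" alt="Example" />
<noscript>
  <img src="https://www.example.com/my_image_URL" alt="Example" />
</noscript>
```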




34. Do not use a 503 status over several days to keep a website in the search index

The main reason for this is that if your site returns a 503 status over multiple consecutive days, Google may assume that the site will not come back up, instead of treating it as temporarily unavailable.

Note that 503 errors also reduce the crawl rate, and crawling stops entirely if the request for the robots.txt file also returns a 503.
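
For short, planned downtime, an illustrative 503 response looks like this (the Retry-After value, in seconds, hints to crawlers when to come back):

```http
HTTP/1.1 503 Service Unavailable
Retry-After: 3600
Content-Type: text/html

<p>Down for maintenance. We expect to be back within the hour.</p>
```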




35. Google uses multiple signals to determine the canonical address from among a group of URLs

Google uses canonical tags, redirects, URL parameters, internal linking and sitemaps to determine which page is canonical from among a set of pages.

