Infinite scrolling seems to be the latest trend among webmasters. It became especially popular with social media sites such as Twitter, Pinterest, and Facebook, but it has now made its way onto regular websites, including news sites.

However, infinite scrolling can cause trouble for Googlebot. You need to make sure bots can properly crawl your infinite scrolling website so that your search rankings aren't negatively impacted.

The issue with infinite scrolling websites was actually brought up by Google's Matt Cutts at his Pubcon keynote in October. He gave the example of Twitter specifically, saying that while Google tries to do a good job with infinite scrolling, other search engines don't necessarily crawl such pages as well, especially since a bot might not stick around for a page to endlessly load before moving on to another page. So last year he suggested that sites also have static links in a pagination structure so that bots can crawl all of the pages correctly.

The problem is that many of these websites don't have a pagination structure in place that lets a crawler discover all the content, or the structure falls apart completely once JavaScript is disabled, with no non-JavaScript alternative in place.
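As a rough illustration of the static-fallback idea, here is a minimal TypeScript sketch of progressive enhancement. The `/articles?page=` URL scheme and the helper name are assumptions for illustration, not anything Google prescribes:

```typescript
// Hypothetical helper: builds the static pagination links a crawler
// (or a visitor with JavaScript disabled) can follow as plain anchors.
function buildPaginationLinks(currentPage: number, totalPages: number): string[] {
  const links: string[] = [];
  if (currentPage > 1) {
    links.push(`<a href="/articles?page=${currentPage - 1}">Previous</a>`);
  }
  if (currentPage < totalPages) {
    links.push(`<a href="/articles?page=${currentPage + 1}">Next</a>`);
  }
  return links;
}

// With JavaScript enabled, a script can intercept the "Next" link and
// append the next page's items inline instead of navigating, so users
// still get infinite scroll while bots see ordinary crawlable links.
```

The point of the pattern is that the anchors exist in the server-rendered HTML either way; the script only changes what happens when a human clicks them.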

Sometimes such a structure is offered as an option in a content management system (CMS). But many other sites with great-looking infinite scroll implementations lack the structure needed to make content accessible to search crawlers beyond the initial page load.

Apparently this is a growing concern for Google. A blog post on the Webmaster Central blog deals specifically with recommendations on how to ensure your infinite scroll websites are search friendly.

In the new blog post, John Mueller, a Webmaster Trends Analyst, explains that with traditional infinite scroll sites, items aren't discoverable by crawlers after the initial page load. So Google has put together its own best practices for ensuring all your content gets crawled by Googlebot and by bots in general.

The blog post covers topics such as how to structure your URLs, how pagination should work when JavaScript is disabled, and what to configure in the page's head.
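One way to tie an infinite scroll page to real URLs is to break the feed into component pages and keep the address bar in sync as the user scrolls. The sketch below assumes a `/articles?page=` URL scheme and a fixed chunk size; the visibility estimate is a deliberately crude stand-in for measuring real elements:

```typescript
const ITEMS_PER_PAGE = 10; // assumed chunk size

// Map the first visible item to its component page's URL; e.g. item 25
// falls on page 3 when pages hold 10 items each.
function componentPageUrl(firstVisibleItem: number): string {
  const page = Math.floor(firstVisibleItem / ITEMS_PER_PAGE) + 1;
  return page === 1 ? "/articles" : `/articles?page=${page}`;
}

// Crude estimate of which item is at the top of the viewport; a real
// page would measure the actual item elements instead of assuming a
// fixed ~200px item height.
function estimateFirstVisibleItem(): number {
  return Math.floor(window.scrollY / 200);
}

// In the browser, update the URL as the user scrolls so that each chunk
// of content corresponds to a shareable, crawlable address.
if (typeof window !== "undefined") {
  window.addEventListener("scroll", () => {
    history.replaceState(null, "", componentPageUrl(estimateFirstVisibleItem()));
  });
}
```

Using `replaceState` rather than `pushState` here avoids flooding the browser's back button with one history entry per scroll event; either way, the same URLs should also exist as real server-rendered pages.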

Mueller also advises making it easy for visitors to find what they're looking for when they land on a page. Don't make users scroll endlessly to locate the specific thing they searched for. From a usability perspective, the item is often no longer on the page at all because it has been pushed too far back, and the searcher will end up going to a competitor's website instead to find what they were looking for.

Google also wants webmasters to ensure there are no duplicate content issues. These are often caused by overlaps in pagination. Mueller says this is not search engine friendly, and you want to make sure pagination is structured so that the same items don't appear on multiple pages.
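The overlap problem usually comes from building each page off a moving offset (for example, "the latest ten items" recomputed on every request). A minimal sketch, assuming items arrive as an ordered array, is to anchor each page to a fixed, non-overlapping slice:

```typescript
// Split an ordered list of items into fixed, non-overlapping pages.
// Because each page starts exactly where the previous one ended, no
// item can appear on two pages, which avoids duplicate content.
function paginate<T>(items: T[], pageSize: number): T[][] {
  const pages: T[][] = [];
  for (let i = 0; i < items.length; i += pageSize) {
    pages.push(items.slice(i, i + pageSize));
  }
  return pages;
}
```

On a real site the same idea applies at the database level: page boundaries should be keyed to stable values (IDs or timestamps) rather than recomputed offsets into a feed that shifts as new items arrive.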

Mueller also pointed to an article, “Infinite Scrolling is Not for Every Website”, which discusses in more detail some of the user experience issues caused by infinite scrolling and why you shouldn't implement it on certain types of sites.