If you’ve been adversely affected by Google’s Penguin algorithm, you have likely debated whether to try to recover from Penguin or to just start over with a brand-new website. If you’re in the process of making this decision, you may find this previous SEW post of mine useful.

John Mueller from Google has previously said that almost any site that is affected by one of Google’s algorithms can escape algorithmic demotion provided they clean up the signals that caused them to be affected. If you have been affected by Penguin, then you should be able to escape Penguin by either removing or disavowing your unnatural links, and cleaning up any on-page webspam issues. If you do a good job of this cleanup, then the next time that Google refreshes or updates the algorithm you should escape the Penguin filter. If you’ve got good links supporting your site, then you really should see improvement.

But…what if you don’t? When Penguin refreshed on October 17, 2014, I saw some awesome recoveries. But I also saw several cases of websites that had done really thorough cleanups, had a good number of truly natural links, and saw zero improvement with the refresh. In some cases their rankings got even worse! I have a few theories as to what could be going on in these cases. I wonder whether Google started to roll out Penguin, saw that sites were being successfully demoted as the result of negative SEO attacks, and then stopped the rollout. If that is true, then I believe the webspam team is actively working on a solution. [Note: While I was writing this article, we started to see further changes in Penguin-hit sites. Apparently the 3.0 slow rollout is still rolling out, some six weeks later. Things are still in flux.] Let’s hope they can fix things for the next update. If you’re interested in my theories as to why some sites are not recovering from Penguin when they really should, you can read my thoughts in my article called “Something Is Wrong With Penguin.”

My point is that Penguin recovery is hard. It’s possible, but it’s not predictable. And if you don’t recover, there is no way of knowing whether it’s because you didn’t have enough good links, because you didn’t clean up fully, or because something else is going on.

If you have decided to start over, then there are a few things to consider. In this article I will discuss two important questions which I am commonly asked:

1. Can I use the same content on my new site as my old site?

2. Can I redirect my customers from my old site to my new one?

Before discussing the answers to these questions I want to make it very clear that my intention is not to find ways to trick Google and find loopholes to allow you to escape Penguin and regain your high rankings. My goal is to discuss legitimate ways to start over with a new website. This means that you will be starting over with NO links. You will be truly starting from scratch.

1. Can I Use the Same Content on My New Site as My Old Site?

You’ve made the decision to start over. Can you just buy a new domain name and put your old website directly onto it? The answer is no. I’ll share the experience of a site owner I consulted with a few months after the initial Penguin rollout of April 2012. He was a small business owner who had learned his own SEO and had previously ranked really well on the power of self-made links in directories, bookmark sites, and SEO articles. When Penguin decimated his rankings, he decided to make a fresh start on a new domain, scrap the old SEO techniques, and go completely white hat. He took his old content, put it up on the new domain, and worked on attracting links naturally. And then one day, he went to Webmaster Tools and saw that all of the old site’s links were now listed as pointing to the new site! When he clicked on any of them, he would see this:

These links were shown as “via this intermediate link: oldsite.com”

What was happening here is that Google recognized that the new site was essentially the same as the old site and applied what was effectively an invisible canonical tag. In doing so, Google was saying, “We recognize that this content is the same as this other site’s, so we’ll attribute all of that site’s links to the new site.”

Shortly after this happened, I read an interesting post by Dejan SEO on an experiment in which they were able to see a competitor’s links by putting up a page of duplicate content. The duplicate content was recognized as a copy of the original, and links to the original page showed up in their Webmaster Tools with the tag “via this intermediate link.”

Don’t worry if this subject is making your head spin. It’s hard to grasp. But the point is that when Google sees that a new site is the same as an old site, they may decide to attribute the old site’s links to the new site.

Also, John Mueller said in a hangout once that if you simply move a penalized domain to a new URL, Google will likely recognize that it’s the same site and penalize it as well. The same principles would likely apply to a site that is demoted by Penguin.

Ideally, if you are starting over with a fresh domain, it is best to write brand-new content so that Google can see this as a brand-new site. But, there may be another solution. With the site that I mentioned above, rather than rewriting all of the site’s content, we did the following:

1. We added a meta noindex, nofollow tag to all pages of the old site.

2. We used the URL removal tool in Webmaster Tools to ask Google to remove each and every page of the site. (Note: you can use the tool to remove an entire directory in one request. However, this will not remove the site from Google’s cache. We thought it was safest to get each URL removed from both the index and the cache, and the only way to do that is to enter each URL into the removal tool one at a time.)

3. Every day we ran a site:oldsite.com search to see whether there were still pages in the index. It took a few days for the site: search to show that all of the pages were gone.

4. Once there were no pages of the old site in the index or the cache, we launched the new site.
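For reference, step one is a single tag added to the head of every page of the old site. It is a standard meta robots tag, nothing site-specific (oldsite.com is a placeholder):

```html
<!-- In the <head> of every page on oldsite.com -->
<meta name="robots" content="noindex, nofollow">
```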

This technique worked for this site. Google did not apply the old site’s spammy links to the new site. The new site has gone through several Penguin refreshes and has not been harmed. The site owner has earned some natural links and is now ranking this new site at number one to number three for most of his keywords.

2. Can I Redirect My Customers From My Old Site to My New One?

Most businesses have a lot invested in their domain name. If your site is BrandName.com and you have decided to abandon that domain and start again at TheBrandNameStore.com, how do you deal with customers who land on BrandName.com? There is a lot of misinformation out there regarding how to do this. What I have tried to do in this article is discuss each of the methods I have seen used and give my opinion on whether or not each is safe.

Option 1: 301 Redirect

Do not do this. A 301 is a permanent redirect, typically implemented by altering your .htaccess file so that users who land on oldsite.com are sent to newsite.com. However, a 301 redirect passes the vast majority of the link signals. If oldsite.com has a Penguin problem, then you will be passing this Penguin problem to newsite.com. You may not see problems right away, as in some cases the new site doesn’t get affected by Penguin until the next rerun of the algorithm. But you are asking for trouble if you take this route.
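For reference only, here is what such a 301 looks like in an Apache .htaccess file (newsite.com is a placeholder). Again: do not do this with a Penguin-hit site, because this is exactly the form that forwards the bad link signals:

```apache
# .htaccess on oldsite.com -- permanent redirect; passes link signals, Penguin included
Redirect 301 / http://www.newsite.com/
```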

Option 2: 302 Redirect

This option will not work, either. A 302 redirect is a temporary redirect. An example of where you might use this is if you have a product that is temporarily out of stock and you want to redirect customers to a more appropriate page. Initially, a 302 redirect probably does not pass the link signals to the new page. However, if a 302 redirect is in place for long enough, Google will start to treat it as a 301 redirect which means that Penguin will follow.
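For comparison, a 302 in an Apache .htaccess file might look like this (the paths and domain are hypothetical, to illustrate the out-of-stock example above):

```apache
# .htaccess -- temporary redirect; may be treated as a 301 if left in place long enough
Redirect 302 /out-of-stock-product http://www.example.com/similar-product
```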

In a Google Webmaster Hangout, John Mueller was asked whether 302 redirects pass PageRank and he said, “…If we see these redirects happening consistently over time then we’ll probably tend to treat them automatically in a special way…If we see a redirect consistently happening from one URL to the other one then that starts looking like a permanent redirect to us.”

In other words, a 302 redirect will eventually be treated like a 301 redirect and as such, is not a permanent way to redirect users and avoid passing Penguin signals.

Option 3: A Meta Refresh

At first I thought this was a valid option…but it turns out that this is not a recommended technique to safely redirect users to a new site and not pass on Penguin problems. Have you ever been on a page that said, “If you don’t get redirected in 5 seconds, click this link….”? That’s a meta refresh. It’s an old-school way to redirect users after a certain period of time. You can do a meta refresh of zero seconds, which looks to the user like a 301 redirect. Or you can do one that redirects after a certain amount of time like five seconds. So, does a meta refresh redirect pass link signals and PageRank? The answer is not completely clear, but it looks like some meta refreshes can pass PageRank. Also, Google has recommended against using this technique, saying “This kind of redirect is not supported by all browsers and can be confusing to the user.”
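For reference, a meta refresh is a single tag in the page’s head (newsite.com is a placeholder). The number is the delay in seconds, and setting it to 0 makes it behave like an instant redirect:

```html
<!-- Sends the visitor to the new site after 5 seconds -->
<meta http-equiv="refresh" content="5; url=http://www.newsite.com/">
```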

In this Google Webmaster Help Forum question John Mueller says, “In general, we recommend not using meta-refresh type redirects, as this can cause confusion with users (and search engine crawlers, who might mistake that for an attempted redirect).”

The general consensus is that a short meta refresh gets treated just like a 301 redirect, and a longer meta refresh may not pass link signals. But, the problem is that we don’t know this for sure. I think that attempting to redirect users via a meta refresh is too risky for a Penguin hit site.

Option 4: JavaScript Redirect

There is a small chance that this could be a safe way to redirect users, but given that Google is getting better at parsing JavaScript, this is probably not safe. The idea would be to use JavaScript and do something like this:

<script type="text/javascript">
<!--
window.location = "http://www.newsite.com/";
//-->
</script>

In the past, this was believed to be a safe way to redirect a page without passing link equity, because Google had a hard time crawling and executing JavaScript. But Google is getting better at interpreting JavaScript now, and there is a decent chance that it will recognize this as the equivalent of a 301 redirect, which would, of course, point all of your bad links at your new site.

One thing that may possibly work is to do a JavaScript redirect with a nofollow attribute. In this video from 2009, Matt Cutts answers a question about advertising links that are JavaScript links. He talks about using a nofollow attribute within the JavaScript or using a robots.txt block. (We’ll talk about that soon.)

I am not an expert in JavaScript and unfortunately I could not find a good reference on how to add a nofollow tag to a JavaScript redirect when you are redirecting an entire page or site (as opposed to just a single link). If you’re reading this and know how this can be done, please leave a comment below.

Option 5: A Splash Page With a Nofollowed Link

In my opinion, this is probably the safest option. However, users will not be directly redirected to your new site. Instead, they will see a page on your old URL that says, “We have moved! You can find us at www.newsite.com.” I would then link to your new site with a nofollowed link. It’s not the prettiest option, but I think it’s the safest option.
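A bare-bones sketch of such a splash page (newsite.com is a placeholder); the rel="nofollow" attribute on the link is the important part:

```html
<!-- Splash page served at oldsite.com -->
<p>We have moved! You can find us at
  <a href="http://www.newsite.com/" rel="nofollow">www.newsite.com</a>.</p>
```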

Option 6: Redirect Through a Robots.txt Blocked Page

This technique is one that is probably safe to use to redirect users to your new site without passing on bad link signals. This option is a bit complicated. The idea is that you redirect users through an intermediate page that is blocked to search engines via robots.txt and then redirect them from that page to your new site. This will cause PageRank to be trapped in the intermediate page. As a result, users will be directed to the new page, but no link signals will be passed.

In a Webmaster Help Hangout, John Mueller was asked whether a URL 301 redirected through a page blocked by robots.txt would pass PageRank and he said, “It wouldn’t pass PageRank because with the robots.txt we wouldn’t see the redirect at all. If it’s blocked by robots.txt, then the URL that’s blocked collects the PageRank but the redirect doesn’t forward any of that.”

There are two ways to do this. The first is to use a random domain as the intermediate page:

Oldsite.com is the site that has unnatural links pointing at it. You would 301 redirect this domain to intermediate.com, which really can be any domain name. You may want to purchase something like “yourbrand2.com.” In the robots.txt file for intermediate.com, you want to block search engines from crawling it by including the following:

User-agent: *
Disallow: /

What this does is allow the search engines to see the redirect to intermediate.com, but then the link signals pointing to it via the redirect get trapped in this page. You would then do a 301 redirect from intermediate.com to newsite.com. The result is that users who go to your old Penguin-hit site will get forwarded to the new site automatically, but the new site will have none of the old site’s links pointing to it. You will be starting over on the new site with no links.
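Putting the pieces together, here is a minimal sketch of the intermediate-domain setup, assuming Apache servers (all domain names are placeholders). One detail worth noting: the redirect on intermediate.com must leave its robots.txt fetchable, or Google will never be able to see the block:

```
# --- .htaccess on oldsite.com: send everything to the intermediate domain ---
Redirect 301 / http://www.intermediate.com/

# --- robots.txt on intermediate.com: block all crawlers so link signals are trapped here ---
User-agent: *
Disallow: /

# --- .htaccess on intermediate.com: forward users on, but keep robots.txt fetchable ---
RewriteEngine On
RewriteCond %{REQUEST_URI} !^/robots\.txt$
RewriteRule ^(.*)$ http://www.newsite.com/$1 [R=301,L]
```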

There is also another way to do this that doesn’t involve an intermediate page. The intermediate page is the way that Google recommends, but you can probably accomplish the same thing by blocking your old domain with robots.txt and trapping the PageRank there, while still redirecting users to the new site.
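A sketch of this simpler version, again assuming Apache and placeholder domains: block crawling of the old domain itself, then redirect everything except robots.txt so the block stays visible to Google:

```
# --- robots.txt on oldsite.com: block all crawlers ---
User-agent: *
Disallow: /

# --- .htaccess on oldsite.com: redirect users, but not robots.txt itself ---
RewriteEngine On
RewriteCond %{REQUEST_URI} !^/robots\.txt$
RewriteRule ^(.*)$ http://www.newsite.com/$1 [R=301,L]
```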

To me this seems much simpler and it really should work to successfully trap PageRank in your old domain and forward users, but not link signals.

I looked back into my notes from all of the Webmaster Central help hangouts that I have watched and found a couple of places where John Mueller talked about doing this type of redirect. In this hangout at 19:23, John is asked about redirecting advertising links via a page that is blocked to search engines by robots.txt. It’s not exactly the same situation, but the question is whether this type of block would stop link signals from being transferred to the redirected page. Here was John’s response:

“Theoretically you can do that. I’d recommend doing that maybe on a separate domain so that those links really don’t pass any PageRank to your main domains. If you can’t disavow those links and those external links aren’t with a nofollow then redirecting them through a robot blocked domain is something you could do there as well. That will block the passing of PageRank to your final pages as well.”

In this hangout, at the 29-minute mark, John was asked whether a redirect to a new domain would pass on Penguin issues. He responded that it would, but here was an option that he gave:

“One thing you could do in a case like this where you see that there is a problem with links to your site is to perhaps do a site move in a way that doesn’t pass PageRank. That could be creating a new website, maybe robots.txt’ing your old website so the links don’t get forwarded there and let users get to the new website with a redirect. Googlebot would be blocked by going to the new site and you could start on that new site essentially from fresh without all of those old signals attached. But, in most cases I’d really just recommend working to clean up those problems as much as possible so that you don’t have to think about changing to a new domain but rather work on a site that consistently builds up value over time.”

This is huge! Here, John is saying that it should be perfectly safe to block your Penguin-hit website from being crawled by search engines and then redirect it to your new site. This should allow you to start over fresh with a new site, redirect users to it, and not pass on Penguin issues. Remember, as stated above, that you also need to have new content or take steps to make sure that the old content is removed from the Web so that Google doesn’t see this as a carbon copy of a penalized site.

However, I haven’t tested this method. I have heard anecdotal reports of it working. In this article, Dale Rodgers talks about using a similar method. However, in his case, he is doing some tricky things to try to pass link equity from his good links rather than just starting over fresh. I have a few concerns about doing this because I have seen a number of Penguin-hit sites where I think it is impossible to find all of the unnatural links to disavow them. I think that this might be one of the reasons why some Penguin-hit sites are having trouble recovering despite doing as thorough a cleanup as possible. I would not recommend trying to trick Google and keep some of your links. If you are going to start over, the best way to avoid Penguin is to completely start over with a perfectly clean slate when it comes to links.

I have had some great Twitter conversations about the topic of how to redirect a Penguin-hit site with some SEOs who do a lot of penalty work: here, here, and here. We were discussing ways to redirect sites. I’d highly recommend reading the Twitter threads as there are some good thoughts in there. I mentioned the idea of redirecting via a robots.txt blocked page and here’s what Sha Menz said:

@Marie_Haynes @glenngabe I specifically asked Gary Ilyes at SMX East about redirecting via robots blocked page and was told not to do it — Sha Menz (@ShahMenz) November 26, 2014

Gah.

This is why I said earlier that the robots.txt block is probably OK, but I can’t guarantee it 100 percent.

Unfortunately Google has not communicated well with webmasters when it comes to Penguin. I can understand that Google does not want to give out all of its ranking signals and give spammers an unfair advantage. But, there are so many legitimate business owners who have been trapped in Penguin because they have hired SEO companies that did poor work. Penguin is extremely confusing even to me, and my entire mission in life right now is to understand Penguin. So how is a small business owner supposed to know what to do when they are demoted because of bad SEO?

Summary

This was a long article, and unfortunately I can’t give you absolute concrete advice. My purpose in writing this post was to get people talking and sharing their ideas on how to start over when you have been severely hit by Penguin and have decided that starting over is a better idea than trying to clean up. I want to emphasize again that I am not looking for ideas on how to trick Google, but rather, truly safe ways to wipe the slate clean and start over fresh.

In my opinion, the safest way to do this is to start a new site with new content and on your old site, put up a splash page that says “We have moved!,” with a nofollowed link to your new site. I do think that either of the robots.txt blocking options will work as well, but at this point there is still a little bit of risk.

I would highly encourage you to comment on this article. If you have used a method to redirect users from a Penguin-hit site to a fresh start, then share with us how you did it. It would also be helpful to mention when you were hit with Penguin, how long ago you started your new site and how things are going now.