There are a lot of things flying around the SEO echo chamber that are simply untrue. Let’s talk about three of them.

It goes without saying that the SEO community is the most valuable asset we have for figuring out how to do our jobs. In fact, aside from the few bones that Google throws us, almost everything we know about SEO comes from people figuring something out and sharing it.

However, I often imagine Google employees sitting on Android couches or racing their Segways, giggling about how they’ve injected misinformation into our world and watched it bounce around until everyone believes it. They probably even built a version of Ripples just to track the spread of misinformation (for the record, this is hyperbole; I don’t want to see it in your next conference deck).

What I’ve noticed lately in my timeline is a lot of this misinformation. There are a few things that I feel can be empirically disproven, so it seems like a good time to set the record a little closer to straight.

Shall we?

Author Rank is Affecting Rankings Right Now

It was late 2011 and I had a lot of time to dive into patents and kick around new ideas on Twitter. I was one of the first people to postulate that Author Rank was coming (trust me, at this point I’m definitely not bragging). Simon Penson also made a strong case for it in his guest post on SEOGadget, “Life After Link Trust.” Then Bill Slawski went patent diving and brought the concept of AgentRank back to life. I say back to life because he talked about it in 2007 as well.

The unfortunate outcome, however, is that people are touting Author Rank as though it is something that is alive and affecting rankings right now. Granted, Searchmetrics did a study last year showing that 17% of SERPs include the rel-author tag, but [insert witty correlation-is-not-causation one-liner here].

Naturally everyone is now citing the quote from Eric Schmidt’s “The New Digital Age.”

Within search results, information tied to verified online profiles will be ranked higher than content without such verification, which will result in most users naturally clicking on the top (verified) results. The true cost of remaining anonymous, then, might be irrelevance.



To be clear, I’m excited by that quote; in fact, I’m looking forward to seeing it actually happen. I’m also telling my clients to create Google+ accounts and ensuring that they have rel-author and rel-publisher markup in place on relevant pages.
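For reference, the markup itself is lightweight. A hypothetical article page might tie authorship back to a Google+ profile something like this (the names and profile URLs below are placeholders, not a real implementation):

```html
<!-- On the article page: link the byline to the author's Google+ profile -->
<a href="https://plus.google.com/112345678901234567890" rel="author">Jane Doe</a>

<!-- Site-wide, typically in the <head>: tie the site to its Google+ brand page -->
<link rel="publisher" href="https://plus.google.com/109876543210987654321" />
```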

However, therein lies another point that needs to be addressed: Google has enlisted SEO companies as its influencers to push rel-author and Google+. I think we already know how good we are at getting clients to implement something that doesn’t have some definitive value (Schema.org, anyone?). I personally don’t see social media agencies pushing for rel-author, nor do I see creative agencies pushing for it.

I asked Duane Forrester once if Bing would consider using Open Graph tags as their version of rel-author citing what I believed to be high adoption across the web. He explained to me that the adoption rate of Open Graph is nowhere near as high as I believe and that the validity of any signal is absolutely determined by how much of the web has adopted it.
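For comparison, Open Graph markup lives in a page’s head as meta tags; a minimal set looks something like this (the values are placeholders for illustration):

```html
<meta property="og:title" content="Life After Link Trust" />
<meta property="og:type" content="article" />
<meta property="og:url" content="https://example.com/life-after-link-trust" />
<meta property="og:image" content="https://example.com/cover.jpg" />
```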

Hmm…valid point Duane…valid point.

Fortunately, we can get a general sense of the adoption rate of rel-author across the web using underdog search engine Blekko. Blekko does a web-scale grep to extract features from billions of web pages. I’ve read that the sample size of each web grep is based on their cache, which as of September 15, 2012 was 4 billion pages. In December of 2012 they ran a web grep searching for URLs that had implemented rel-author.

Of the 4 billion pages they crawled, they identified 222 million pages across nearly 1 million domains with rel-author. Just to save you the mental math, that’s about 5.6%. If only 5.6% of pages have rel-author, there’s not enough of a signal to base large-scale web page classification on.
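The arithmetic above is easy to sanity-check, and the kind of feature extraction Blekko does can be sketched in a few lines. This is a toy illustration only, not Blekko’s actual pipeline: a regex pass that flags pages containing a rel-author link, plus the adoption-rate math from the numbers above.

```python
import re

# Toy stand-in for a web-scale grep: flag pages whose HTML contains
# a rel="author" link or anchor. (Not Blekko's actual pipeline.)
REL_AUTHOR = re.compile(r'<(?:a|link)\b[^>]*\brel=["\']author["\']', re.IGNORECASE)

def has_rel_author(html: str) -> bool:
    return bool(REL_AUTHOR.search(html))

# The adoption-rate arithmetic from Blekko's December 2012 grep:
pages_with_rel_author = 222_000_000
pages_crawled = 4_000_000_000
adoption = pages_with_rel_author / pages_crawled
print(f"{adoption:.2%}")  # prints 5.55%, i.e. roughly 5.6%
```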

In other words, Author Rank is on the way, but it’s not likely that it has rolled out widely as a ranking factor, unless you’re in a news vertical, where adoption appears to be quite high.

A Change to Google’s Support Docs is their Admission that Negative SEO is Real

I don’t disagree with the statement that negative SEO is a real problem, but there are SEOs who believe that a doc on Google’s support site, updated in May of 2012, acts as their admission of negative SEO. According to those SEOs, this doc, located at http://support.google.com/webmasters/bin/answer.py?hl=en&answer=34449, once read “There’s nothing a competitor can do to harm your ranking” and in May of 2012 was updated to say “Google works hard to prevent other webmasters from being able to harm your ranking or have your site removed from our index.”

Deep breath. Big sigh.

According to the Wayback Machine, that original text was never on the page in question, even going as far back as April 2, 2012:

In fact, the copy that people say it was changed to is located at http://support.google.com/webmasters/bin/answer.py?hl=en&answer=34444, where it has been since at least 2010:

I don’t know who made that story up, but people have accepted it site/sight unseen. Things that make you go hmm…

Paid Links Are Easier to Get than Organic Links

This one used to get on my nerves the most when I first started at iAcquire. I continually extolled the virtues of Organic links, only to be met by people who didn’t believe in their ability to drive rankings; many also believed it was more difficult to build links without padding people’s pockets.

Our shift from using paid linking as a tactic to solely building links organically left us with a ton of great data. Most importantly, with that shift happening at the 6-month mark of 2012, we have 6 months of links built with Paid tactics to compare against 6 months of links built with Organic tactics.

And here it is…

What this histogram illustrates is the volume of Organic links we were able to build in one 6-month period vs. the volume of Paid links we were able to build in the previous 6-month period. The 0–6 series on the right is the MozRank of those links. This data shows that not only is it possible to build more links Organically in the same time frame, but also that those links can be of higher quality.

I spoke on this at length in my Quantifying Outreach 2012 webinar at SEMPO and we’ll be releasing the white paper soon.

Everybody Should Test Everything

What I used to find positively intimidating when I’d watch talks and read posts from old-school SEOs was their tests and the data they’d share. I’d wonder to myself how they came up with those ideas. I’d also be incredibly grateful to them for sharing their awesome secrets.

When I started speaking and sharing my own ideas in blog posts, I wanted to contribute to the evolution of the groupthink. I wanted to open up the conversation and push more open-source thinking, often sharing my methodologies, code, and tools so you can do it yourself and bring back your own insights. I’ve always felt that at the end of the day my results don’t matter; it’s about what the rest of the SEO community can replicate and build on to make the idea grow.

Is there anything you’d like to see us test, and what have you heard in the echo chamber that you believe is blatantly untrue?