Does Google favor its own sites in search results, as many critics have claimed? Not necessarily. New research suggests claims that Google is “biased” are overblown, and that Google’s primary competitor, Microsoft’s Bing, may actually be serving Microsoft-related results “far more” often than Google links to its own services in search results.

In an analysis of a large, random sample of search queries, the study from Josh Wright, Professor of Law and Economics at George Mason University, found that Bing generally favors Microsoft content more frequently, and far more prominently, than Google favors its own content. According to the findings, Google references its own content in its first results position in just 6.7% of queries, while Bing provides search result links to Microsoft content more than twice as often (14.3%).

The results from the new study by Wright, sponsored by the International Center for Law & Economics (ICLE), are important, especially given the challenges Google has recently faced from an FTC inquiry into its business practices, antitrust complaints and Senate hearings looking into its alleged anti-competitive behavior.

The findings of the new study are in stark contrast with a study on search engine “bias” released earlier this year. That study, conducted by Harvard professor Ben Edelman, concluded that “by comparing results across multiple search engines, we provide prima facie evidence of bias; especially in light of the anomalous click-through rates we describe above, we can only conclude that Google intentionally places its results first.”

How can the conclusions from two prominent scholars be so different? And, perhaps more importantly, given recent interest (and potential oversight) by lawmakers and regulators in search engine activities, what’s going on behind the scenes here?

A Tale Of Two Studies

First, some background. Harvard’s Ben Edelman has serious chops when it comes to search. He’s done thoughtful research into many important issues involving the dark side of the internet, including deceptive advertising, spyware, and so on. But: He’s also been a longtime paid consultant to Microsoft.

The new research from professor Wright, who has a deep interest in antitrust law and economics, was sponsored by the International Center for Law & Economics, with a mission “to create the academic underpinnings for a regulatory environment that ensures the protection of property rights from inefficient interference by government agencies and private parties in high priority markets.”

So, Microsoft has a paid consultant in its court. Did Google sponsor the new research, or influence its outcome? Google continues to increase spending on lobbying U.S. government officials to advocate its interests and ward off the attacks from competitors—was this another, more subtle approach to fend off critics?

Apparently not.

Although ICLE, which sponsored the research, has received financial support from several companies, organizations and individuals (including Google), Geoffrey A. Manne, the organization’s Executive Director, responded to my inquiry with this statement:

“The study was not done at Google’s request, and they had no involvement in the design, methodology or conclusions. Rather, the idea for the study and its execution were entirely Josh’s. It was undertaken independently and supported, as all of our affiliates’ supported work is, with an unrestricted grant from ICLE.”

Harvard’s Edelman released the results of his study in January of this year. As noted above, Edelman concluded that he found “prima facie evidence of bias”: that Google was promoting itself when it “shouldn’t” have been, based on alternative search engines’ results.

Search Engine Land’s editor-in-chief Danny Sullivan skewered Edelman’s results, writing that “statistics can easily be turned to whatever you want them to be. I feel like Edelman is turning his study into the most negative view possible. I’m just looking to provide some balance to that.”

Wright’s study had two missions: first, replicate the Edelman study to test its findings; second, expand on it to eliminate perceived problems with its methodology and conclusions. For the first part, replicating the Edelman study, Wright found that Google references its own content more favorably than rival search engines for only a small fraction of terms, whereas Bing is more likely to do so. “For example, in our replication of Edelman & Lockwood, Google refers to its own content in its first page of results when its rivals do not for only 7.9% of the queries, whereas Bing does so nearly twice as often (13.2%),” the report said.

For the second part of the study, Wright employed a much larger, random sample of search queries, rather than the small set (32 different searches) that Edelman used. Wright’s expanded study found that Bing generally favors Microsoft content more frequently—and far more prominently—than Google favors its own content. Google references its own content in its first results position when no other engine does in just 6.7% of queries, while Bing does so more than twice as often (14.3%).

So, what conclusions should we draw? Wright says that his “analysis finds that own-content bias is a relatively infrequent phenomenon”—meaning that although Microsoft appears to favor its own sites more often than Google does, it’s not really a major issue, at least in terms of the “bias” or “fairness” of the search results the engines present. A reasonable conclusion: Google (and Bing, though less so) really is trying to deliver the best results possible, regardless of whether they come from its own services (local search, product search, etc.) or not.

The Bigger Issue: Is “Search Neutrality” A Good Thing?

The study also looked at whether search engines should be “neutral”—being “fair” to websites in terms of ranking rather than attempting to ferret out the best results for searchers.

It’s both a tradition and a blood sport in the U.S. for traditional media to transform the popular image of scrappy startups into “evil empires” when they’ve grown successfully into a dominant player. Microsoft has been through this; Google is now on the cusp of being the new creepy champ in the minds of many.

A large part of the media and regulatory oversight is now focused on whether Google is “anti-competitive”—typically shorthand for favoring its own content over that of other search engines. The Wright study concludes that the complainants are simply wrong, saying “many of these complaints ignore the fact that search engine users self-select into different engines or use multiple sources for different types of searches when considering the competitive implications of search rankings.” Well said, and for most sophisticated internet users, probably true.

Why? Consider this scenario: You are going out to dinner in a new town tonight. To get restaurant recommendations, are you going to Google that, or tweet a request for suggestions, or look up reviews on Yelp or OpenTable or Chowhound or… My guess is that most people are going somewhere other than Google for this type of information. For other types of information, they’re going to WebMD or SeatGuru or IMDb or Wikipedia or countless other specialized sites (or mobile apps like Alfred or Ness) when they want more nuanced results than Google typically delivers. No question: Google has a lock on basic queries, and it’s really, really good for those. But the web is huge, and asking Google is more often than not like asking the gas station attendant how to get somewhere when your GPS has died. In other words, internet users are smarter than critics give them credit for. We have options, and many of us are increasingly aware of our non-Google options (hello, Siri?).

But just because a company has grown into a dominant position doesn’t mean it’s doing anything wrong, or that governments should intervene and force changes that may or may not be “beneficial” to users or customers. I’m not going to rant about this. But in light of the findings of these studies, the pedigree of the researchers and the starkly contrasting opinions they offer, I’d encourage you all to read these analyses, form your own opinion, and contribute to the dialog in the comments below.

Link to the full study from Professor Wright: Defining and Measuring Search Bias: Some Preliminary Evidence (pdf).

More reading (thanks to Gary Price for the links):

Opinions expressed in this article are those of the guest author and not necessarily Search Engine Land. Staff authors are listed here.