Not so fast

Are consumers dissatisfied with their netbooks? Despite the form factor's explosive growth in popularity and seemingly bright future, a market analysis firm with a novel means of collecting and sorting information says they are. Biz360's recent aggregation and analysis of online netbook reviews and forum conversations around specific models indicates that, on the whole, users aren't too happy with netbooks in the critical areas of performance, display size, and features.

According to Biz360's analysis, which dredged more than 20,000 reviews from all corners of the Internet, "consumer advocacy for the netbook category lags behind consumer advocacy for all laptops." To be specific, "Net Advocacy" (more on this term in a moment) was roughly 40 percent lower: a mere 27.72 across the six brands surveyed, versus 46.1 for notebooks.

But 2008 was the year of the netbook, right? So what's the genesis of this apparent paradox in consumer satisfaction? Let's take a look at Biz360's report and methods.

The "advocacy" algorithm

Biz360's "Opinion Insights" analysis method works by collecting reviews and commentary on consumer products from review and retailer websites, parsing them for content, ranking them by their sentiment and the prominence of their venue in web rankings, and finally aggregating them into a consensus level of satisfaction with different aspects of the product, called "Net Advocacy." One might expect that Net Advocacy is found by subtracting LDL Advocacy from HDL Advocacy, and this is essentially what happens: negative buzz is subtracted from positive, and the result is divided by the total volume of noise on the subject.

The end result is each product and attribute's Net Advocacy ranked on a scale from -100 to 100, with -100 signaling unanimous execration of a product's attributes, and 100 representing universal panegyric enthusiasm.
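The arithmetic described above can be sketched in a few lines. To be clear, Biz360's actual weighting scheme (sentiment parsing, venue prominence) is proprietary and not modeled here; the function name, parameters, and mention counts below are invented for illustration, assuming only the subtract-and-normalize step the report describes.

```python
def net_advocacy(positive: int, negative: int, neutral: int = 0) -> float:
    """Hypothetical Net Advocacy: (positive - negative) mentions,
    divided by total volume, scaled to the -100..100 range."""
    total = positive + negative + neutral
    if total == 0:
        return 0.0  # no buzz at all: neither advocacy nor execration
    return 100.0 * (positive - negative) / total

# An invented product with 550 positive, 300 negative, 150 neutral mentions:
print(net_advocacy(550, 300, 150))  # 25.0
```

Under this simple model, unanimous praise yields 100, unanimous condemnation yields -100, and a large pool of neutral chatter drags the score toward zero even when positive mentions outnumber negative ones.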

Picking apart the results

Biz360 says the method delivers "critical information that [their customers] can use to uncover the truth about brand preferences, specific product features and attributes that matter most, emerging trends, price elasticity, competitive vulnerabilities and much more." Certainly, this kind of analysis can offer up-to-the-moment information on online reputation, although selection bias probably reduces its value. Not only do consumers select whether to comment based on their impressions, and on their degree of tech-savviness, but retailers sometimes censor user reviews.

Biz360 says this selection bias isn't a problem, because the online reputation itself has value. Indeed, conventional wisdom and consumer surveys have indicated that shoppers base purchasing decisions on online reviews. But this argument has problems of its own.

While it's possible to imagine a computer algorithm that could extract sentiments like "screen is too small" or "trackpad sucks," it's difficult to picture any algorithm extracting all the subtle cues that readers sense when weighing reviews. The same sentiments might be completely dismissed based on subtle grammar errors, a telling technical mistake, or an unprofessional username. Moreover, some sentiments may appear frequently, yet fail to move purchasers at all. Even a simple measure, like the influence of the host website, may be easy to perceive (Newegg is more influential than Dell.com) but hard to quantify (10% more? 50%? 3 times?).