Even in the modern world of abundant pay TV, there are few, if any, sources of professional video that consumers can rely on to carry little or no advertising.

Pay TV operators such as Comcast, DirecTV and Time Warner Cable in the US – and Sky and Liberty Global in Europe – all carry the advertising sold into the channels they distribute, and they layer their own advertising on top in the VoD, EPG and other portions of their content delivery.

Adverts are not only here to stay, they are proliferating, and they form a bedrock of revenue that spares consumers the full financial cost of content creation.

And for as long as TV is watched every day within around 1.5 billion households globally, there will need to be ways of measuring the audience for programmes that are watched – and those which are barely tuned into – in order to put a value on that advertising.

This week, Time Warner Cable (TWC) put out a paper that attempts to make sense of new methods coming to market which try to put a different spin on the way TV audiences are measured. Nielsen is the big bad boss of the audience measurement industry, and at Faultline we have had no problem criticising some of its frequent statements – mostly because the company continues to purvey the myth that TV viewing is growing. While we would not argue that video watching overall is on the rise, we can see that straightforward linear TV viewing is dying on its feet.

We know of nobody who watches linear TV if it can be avoided, and yet Nielsen continues to put forward the idea that this is on the rise. This is, in part, because there is no room for a dissenting voice and Nielsen, almost a monopoly, can say what it likes.

Each year, sometimes each quarter, Nielsen changes its methodology slightly in order to make its numbers add up. TWC, along with the authors of the paper – titled "Programme Value in the Evolving Television Audience Marketplace" – has explored the idea that simply counting how many hours a TV is switched on in a household is no longer enough, and that asking people what they watched adds little more. Instead, some measurement of "engagement" needs to emerge.

But while this paper is shrewd and intelligent on some levels, it entirely misses the point: it analyses purely what's going on in audience measurement today, rather than putting forward any theories of what else might be tried. You can download the paper here (PDF).

Why scraping comments from social media is just not accurate

The issue is that a number of measurement companies, perhaps as many as a dozen, are using screen-scraping and text-analysis tools to take comments off the public sections of social network and film criticism sites, and attempting to parse them to establish how many positive and negative comments are made about each TV programme. They then use this as a measure of audience engagement, to complement Nielsen's People Meter samples, which record which programmes were watched.

The paper cites operations such as General Sentiment, Radian6, Crimson Hexagon, Bluefin and Trendrr.tv, as well as opt-in versions such as GetGlue, Miso, IntoNow, Wikia and SocialGuide – all of which want to be the next Nielsen.

Right now US networks are all subscribing to multiple audience measurement services, and with good reason. When a programme has a low level of absolute views, the only way to argue up the price of advertising on those networks is to claim – and perhaps prove – that its viewers are more "engaged," which might mean they are more loyal, or that they actually pay attention when they watch the programme instead of also doing their homework.

In fact, perversely, talking to friends on Facebook about a TV programme may indicate greater involvement – or not. It may be that a) splitting your attention between Twitter or Facebook and the TV means you take your eyes off the screen more often, and b) the people you are talking to may bring up other subjects and take your mind OFF what you are watching, leading to a lower level of engagement.

Another issue is that screen scraping of social media does not distinguish between comments made while watching the TV programme and those made in response to a question brought up after the programme ran. For instance, a dialogue on Facebook the morning after a popular show might go: "Did you see Twilight last night?" and the answer may be: "No, don't you think Taylor Lautner is mental?" "Yeah he's sick."

A piece of software expected to analyse these words might conclude that Lautner is not popular. But of course that language is high praise in the vernacular of a 14-year-old girl. And beyond that, even if a comment about an actor is a poor one, it might be because he plays a very well-acted villain – and surely that means a high level of engagement with the programme, not a low one? The whole idea of using social media has come up with these companies not just because it's "possible," but because it is relatively easy to scrape comments from the public walls of Facebook and from Twitter feeds.
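To see why the slang trips these tools up, consider a hypothetical lexicon-based scorer of the kind such scrapers rely on – a minimal sketch, not any vendor's actual pipeline, with word lists invented for illustration:

```python
# Hypothetical lexicon-based sentiment scorer (illustration only --
# not any measurement vendor's actual pipeline). Word lists invented.

POSITIVE = {"great", "amazing", "love", "brilliant"}
NEGATIVE = {"mental", "sick", "awful", "boring"}

def naive_sentiment(comment: str) -> int:
    """Score +1 per positive word, -1 per negative word."""
    score = 0
    for word in comment.lower().split():
        word = word.strip("?!.,'\"")  # drop surrounding punctuation
        if word in POSITIVE:
            score += 1
        elif word in NEGATIVE:
            score -= 1
    return score

# Teen slang defeats the lexicon: "mental" and "sick" are praise
# here, yet both score as negative, so engaged fans read as critics.
print(naive_sentiment("Don't you think Taylor Lautner is mental?"))  # -1
print(naive_sentiment("Yeah he's sick"))  # -1
```

Nothing in the comment text itself tells the scorer which dictionary the speaker is using, which is exactly the problem the Twilight exchange above illustrates.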

No one has said that this is the BEST way to measure audience engagement, but it's one that is available to lots of players. The paper makes this and other points, and talks about how problematic coming up with a semantic scoring system for comments is. Surely measuring the volume of comments about a show relative to comments about other shows tells us more than trying to work out who likes the show and who doesn't.
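That volume-relative idea can be sketched in a few lines – a hypothetical illustration which assumes per-show comment counts have already been collected, and which sidesteps polarity entirely:

```python
# Hypothetical sketch: compare shows by share of total chatter,
# ignoring whether individual comments are positive or negative.
# The counts below are invented for illustration.

def chatter_share(comment_counts: dict[str, int]) -> dict[str, float]:
    """Return each show's fraction of all scraped comments."""
    total = sum(comment_counts.values())
    return {show: count / total for show, count in comment_counts.items()}

counts = {"Show A": 600, "Show B": 300, "Show C": 100}
print(chatter_share(counts))  # {'Show A': 0.6, 'Show B': 0.3, 'Show C': 0.1}
```

A show whose praise is couched entirely in sarcasm or slang still registers the same as one praised in plain English, because only the count matters.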

While Facebook does not play in this market, it actually has the edge, in that it could peep inside every Facebook message, even the private ones. And let's face it, most social media comments made during TV shows are made in Facebook's private messages, which are the least open to mass distortion by programme makers – who might otherwise create tons of fake Facebook accounts and fill them with commentary about a TV programme. (Or, if you created a million fake Facebook accounts, could they be made to talk non-stop about TV programmes automatically?)