It's now been two weeks since the first major polling failure in a generation. Pollsters’ internal investigations are still ongoing, and few official statements have been made about the findings. However, NCP can reveal that a number of pollsters have found anomalies in their data pointing not to something technical like response bias or mispredicted likelihood to vote, but to the classic shy Tory effect – people were simply lying about their voting intentions.

Most of the specifics of the investigations were provided off the record, but suffice it to say the evidence seen by NCP is very solid. These findings don’t necessarily rule out other causes, and there may be overlap. Nor is it yet clear exactly what proportion of the 6.5-point polling error (in terms of the lead) can be explained by fibbing. But its impact doesn’t seem trivial.

The problem is also affecting recall – in other words, how people say they voted at this month’s election, not just how they would vote at a new one. And it’s actually getting worse as time passes – shy Tories aren’t coming out of the closet, but if anything going back into it. That might explain why the exit poll – whose fieldwork took place literally seconds after voters had marked their cross – was so accurate, yet the on-the-day recall polls – hours later – showed very little evidence of movement.

A case of representative but dishonest samples would also explain why leader ratings were, yet again, a highly accurate predictor of the result. It would also fit with Glen O’Hara’s theory – that people wouldn’t even admit to themselves that they were voting Conservative – which in turn might explain why people were seemingly as willing to lie to a computer screen as to a human interviewer.

Where does it leave the pollsters? If people were (and still are) actually lying rather than changing their minds, then it becomes a huge problem. And it’s already a problem for the next election too. Most voting intention polls are politically weighted – if the weighting is based on inaccurate recall, then future polls that rely on it run the risk of being wrong. Even if that problem can be addressed, it immediately raises another – if voters’ “recollections” of how they voted in 2015 change, how will pollsters know whether that represents people becoming more or less honest, or whether it’s just common-or-garden recall error? Some kind of judgement call would be needed.
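To see why inaccurate recall poisons political weighting, here is a minimal sketch with entirely hypothetical numbers. Each recalled-2015-vote group is reweighted so its share of the sample matches the actual result; if some Conservative voters misreport their 2015 vote, the same set of respondents produces a different weighted estimate.

```python
# Hypothetical sketch of past-vote ("political") weighting.
# Each recalled-2015-vote group gets weight = actual vote share / sample share.

ACTUAL_2015 = {"CON": 0.38, "LAB": 0.31, "OTH": 0.31}  # approximate GB shares

def weighted_con_estimate(sample):
    """sample maps recalled 2015 vote -> (respondent count, current CON intention rate)."""
    total = sum(n for n, _ in sample.values())
    est = 0.0
    for party, (n, con_rate) in sample.items():
        weight = ACTUAL_2015[party] / (n / total)  # up/down-weight the recall group
        est += (n / total) * weight * con_rate
    return est

# Honest recall: sample shares already match the 2015 result, so all weights are 1.
honest = {"CON": (380, 0.85), "LAB": (310, 0.10), "OTH": (310, 0.20)}

# "Shy" recall: the same respondents, but 80 Conservative voters now claim they
# voted for someone else; their *current* intentions are unchanged.
shy = {"CON": (300, 0.85), "LAB": (310, 0.10), "OTH": (390, 130 / 390)}

print(round(weighted_con_estimate(honest), 3))  # 0.416
print(round(weighted_con_estimate(shy), 3))     # 0.457 - identical people, shifted estimate
```

The unweighted Conservative intention is identical in both samples; only the recall answers differ. Yet the weighted estimates diverge by about four points, which is the kind of judgement problem described above.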

YouGov/Times (2015 LAB): Cam doing well as PM 19 Govt m'ging econ well 25 Poll error was equivalent to just 10%… https://t.co/HOBPWJGHzH — NumbrCrunchrPolitics (@NCPoliticsUK) May 17, 2015

Not saying that these people are lying. But clearly something all pollsters need to look at — NumbrCrunchrPolitics (@NCPoliticsUK) May 17, 2015

But what about the broader problem of detecting people lying? We could be moving away from sampling and towards psychology. What next? Voice stress analysis of phone polls? The pollsters have their work cut out.