So: can we rely on the opinion polls to get the British general election right after all? After the debacle of 2015 – an election in which the polls pointed firmly to a hung parliament when in fact the Tories won at a canter – the results of the French election will have cheered the pollsters. They all got pretty close to the right vote shares for Emmanuel Macron and Marine Le Pen. And this follows the American election where, on average, they at least predicted Hillary Clinton’s lead in the popular vote.

In fact neither France nor the US offers much succour. If you take the second round in France, Macron’s predicted share of the vote ranges from 60.5% to 67%, so his predicted lead is somewhere between 21 and 34 percentage points. Were the French election at all close, the polls would tell us next to nothing about the likely outcome. As for the US, anyone who followed the polls would have had a seesaw ride – Clinton leading by double digits in some, trailing Trump in others. And to predict the wrong winner is, after all, no small error.

There are two reasons to be extremely cautious about the British polls this time. The first, subject to much chat wherever poll geeks gather, is the margin of error. But the second could in principle be even more important. The pollsters have next to no idea whether they are interviewing a representative sample of voters or not.

We often read that a poll carries a statistical margin of error of plus or minus 2 or 3 percentage points. But what we are rarely reminded of is that this error applies to each party’s vote separately. So if a poll shows the Tories on 40% and Labour on 34%, the real situation could be Tory 43%, Labour 31% – a 12-point lead. Or it could be that both Tory and Labour are on 37%, neck and neck.
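The arithmetic above can be sketched in a few lines: because the margin of error applies to each party's share independently, the uncertainty on the *lead* is up to twice the per-party figure. The headline shares and the ±3-point margin are those used in the example.

```python
tory, labour = 40, 34   # headline poll shares, per cent
moe = 3                 # assumed per-party margin of error, in points

# Widest plausible lead: Tories at the top of their range,
# Labour at the bottom of theirs.
widest_lead = (tory + moe) - (labour - moe)      # 43 - 31 = 12

# Narrowest plausible lead: the reverse.
narrowest_lead = (tory - moe) - (labour + moe)   # 37 - 37 = 0

print(widest_lead, narrowest_lead)  # 12 0
```

So a headline 6-point lead is, on the poll's own terms, consistent with anything from a dead heat to a 12-point landslide.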

But the statistical margin of error is only part of the problem. It is a purely mathematical construct: it tells you how large the error might be (strictly, the range within which the estimates from 95% of samples would fall) even if the pollsters were interviewing a representative sample of the population at large.

The pollsters cannot know that they are interviewing such a sample. The reason is simple: most voters when approached by a pollster refuse to answer. The pollster has very little idea whether these non-respondents are or are not differently inclined from those who respond. In the trade, this is referred to as polling’s “dirty little secret”.

This problem is getting worse. There used to be clear demographic factors that predicted how someone would vote. Working-class people were much more likely to vote Labour, for example. So if you got the right proportion of working-class people in your sample, you would probably get a more or less representative result. But the link between social class and voting has gradually weakened: Labour had more middle-class voters at the last election than working-class ones. And so the simple demographic adjustments no longer work. Professor Patrick Sturgis, in his magisterial report on the polls’ 2015 failure, identified improved sampling as the key to better poll performance. Indeed: but how is it to be achieved? Anyone who has the holy grail to hand can expect a warm welcome from nervous pollsters.

The polls’ recent record in British general elections has not been impressive. They were rightish (in the sense of picking the right winner) in 1997, 2001, 2005 and 2010. They were catastrophically wrong in 1992 and 2015. Since a pin would pick the right winner by chance one time in two, an actual success rate of 67% (four elections out of six) against the pin’s 50% is not impressive.

So can we totally disregard what the polls are now saying? Might we wake up on the morning of 9 June to find Jeremy Corbyn taking possession of 10 Downing Street? I don’t like to come between people and their dreams, but I think not. The polls can be wrong; the polls often are wrong; but they are not likely to be that wrong.