By The Polling Observatory (Robert Ford, Will Jennings, Mark Pickup and Christopher Wlezien). You can read more posts by The Polling Observatory here.

This post is part of a long-running series (dating to before the 2010 election) that reports on the state of the parties as measured by vote intention polls. By pooling together all the available polling evidence we can reduce the impact of the random variation each individual survey inevitably produces. Most of the short-term advances and setbacks in party polling fortunes are nothing more than random noise; the underlying trends – in which we are interested and which best assess the parties’ standings – are relatively stable and little influenced by day-to-day events. Further details of the method we use to build our estimates can be found here and here.
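The noise-reduction logic behind pooling can be illustrated with a small simulation. This is not the Polling Observatory's model – it simply demonstrates that averaging several independent polls shrinks random sampling error by roughly the square root of the number of polls pooled. All figures here (a 40% true share, 1,000 respondents per poll) are hypothetical.

```python
import random
import statistics

random.seed(42)

TRUE_SHARE = 0.40    # hypothetical "true" vote share for one party
SAMPLE_SIZE = 1000   # respondents per poll
N_POLLS = 20         # number of polls pooled into one estimate

def one_poll():
    """Simulate a single poll: draw respondents, return the observed share."""
    hits = sum(random.random() < TRUE_SHARE for _ in range(SAMPLE_SIZE))
    return hits / SAMPLE_SIZE

# Compare the spread of single-poll estimates with the spread of
# pooled (averaged-over-N_POLLS) estimates, over 500 repetitions each.
single_sd = statistics.stdev(one_poll() for _ in range(500))
pooled_sd = statistics.stdev(
    statistics.mean(one_poll() for _ in range(N_POLLS)) for _ in range(500)
)

print(f"single-poll spread: {single_sd:.4f}")   # near sqrt(p(1-p)/n) ~= 0.0155
print(f"pooled spread:      {pooled_sd:.4f}")   # smaller by roughly sqrt(20)
```

A single 1,000-respondent poll has a standard error of about 1.5 points on a 40% share, which is why a two-point "surge" in one poll is usually just noise; the pooled estimate wobbles far less.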

It has been almost 18 months since the Polling Observatory’s last investigation of the Westminster polls, though the intervening period has seen dramatic political events – Britain’s vote to leave the EU, a change in Prime Minister, and much more besides.

The surprise result of the 2015 general election prompted much reflection on the reliability of polling methodologies – most notably in the report of the official inquiry into the pre-election polls – as did the outcome of the 2016 referendum on Britain’s membership of the EU. The vanquishing of the polls, and election forecasts, has added fuel to the bonfire of the experts. To populists, the unpredictability of voters may serve to further undercut the authority of elites.

While the events of 2015 and 2016 provided a valuable reminder that a dose of caution is needed when digesting the latest polls, the polls remain the best way of assessing relative shifts in public opinion.

As regular readers will know, we pool all the information that we have from current polling to estimate the underlying trend in public opinion, controlling for random noise in the polls. Our method controls for systematic differences between polling ‘houses’ – the propensity for some pollsters to produce estimates that are higher or lower on average for a particular party than other pollsters. While we can estimate how one pollster systematically differs from another, we have no way of assessing which is closer to the truth (i.e. whether the estimates are ‘biased’). This was where our election forecast came unstuck in 2015, as the final polls systematically over-estimated support for Labour and under-estimated support for the Conservatives.

Because most pollsters have made methodological adjustments since May 2015 – designed to address this over-estimation of Labour support – it is inappropriate to ‘anchor’ our estimates on their record at previous elections. Instead, we anchor our estimates on the average pollster. This means the results presented here are those of a hypothetical pollster that, on average, falls in the middle of the pack. It also means that while our method accounts for the uncertainty due to random fluctuation in the polls and for differences between polling houses, we cannot be sure that there is no systematic bias in the average polling house (i.e., the industry as a whole could be wrong once again).
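The two steps described above – estimating each house's systematic deviation, then anchoring on the average pollster by constraining the house effects to sum to zero – can be sketched in miniature. This is only an illustrative simplification with hypothetical pollsters and figures: the actual Polling Observatory method estimates the underlying trend and the house effects jointly over dated polls, rather than from a simple cross-sectional average.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical polls: (pollster, party share in %). For simplicity the
# underlying trend is treated as flat, so deviations from the overall
# mean are attributed to house effects plus noise.
polls = [
    ("HouseA", 42.0), ("HouseA", 43.5), ("HouseA", 42.5),  # runs high
    ("HouseB", 40.0), ("HouseB", 39.5), ("HouseB", 40.5),  # runs low
    ("HouseC", 41.0), ("HouseC", 41.5), ("HouseC", 40.5),
]

overall = mean(share for _, share in polls)

# Step 1: house effect = each pollster's average deviation from the pool.
by_house = defaultdict(list)
for house, share in polls:
    by_house[house].append(share)
house_effect = {h: mean(v) - overall for h, v in by_house.items()}

# Step 2: re-centre so the effects sum to zero. The estimate is thereby
# anchored on the *average* pollster – we can say HouseA reads higher
# than HouseB, but not which (if either) is closer to the truth.
shift = mean(house_effect.values())
house_effect = {h: e - shift for h, e in house_effect.items()}

# Pooled estimate after stripping out house effects.
adjusted = [share - house_effect[h] for h, share in polls]
pooled_estimate = mean(adjusted)
print(f"house effects:   {house_effect}")
print(f"pooled estimate: {pooled_estimate:.1f}")
```

The zero-sum constraint is exactly why a uniform industry-wide bias – every house too high on one party, as in 2015 – is invisible to this kind of adjustment: it cancels out of the relative house effects and survives into the pooled estimate.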

Our latest analyses are based on polls up to April 18th, the day of the announcement of the general election to be held on June 8th. Since then, a number of polls have suggested an even larger Conservative lead – and it will be interesting to see whether this is sustained in the coming weeks of the campaign. The Polling Observatory’s headline figures currently put the Conservatives on 43%, far ahead of Labour on 25.7%. The Liberal Democrats, at 10.5%, have overtaken UKIP, at 9.8%, for the first time since December 2012. Meanwhile the Greens are lagging well behind at 4.3%.

Our estimates also provide insights into the trends in support for the parties since May 2015. Under David Cameron, support for the Conservatives had been slipping, especially in early 2016. It was only immediately after the EU referendum vote, around the time that Theresa May took over as Prime Minister, that the party enjoyed a sharp rise in support. In contrast, Labour’s support has been declining steadily since April 2016 – roughly the start of the EU referendum campaign. This is well before ‘the coup’ that some have blamed for Labour’s poor polling; we find no evidence here to support those claims.

While UKIP support rose steadily in the year following the 2015 general election, it slumped after the Brexit vote and has continued to decline since. It is too soon to write off UKIP for good, but it is clear that the party faces an uncertain future, threatened by an emboldened Conservative Party plotting Britain’s course out of the EU. By contrast, Brexit has given a renewed purpose to the Liberal Democrats, whose support has been increasing steadily since June 2016 – though hardly at a dramatic rate. The largely static support for the Greens highlights that Britain’s ‘progressive’ parties face an uphill battle to win back voters.

The trends since Brexit specifically point towards two gradual shifts: UKIP voters switching to the now more pro-Brexit Conservatives (with the blue and purple lines mirroring each other quite closely above), and the Liberal Democrats slowly recovering, seemingly at the expense of Labour who are slowly declining. The parties that appear to have benefited from Brexit are those now seen as the natural issue ‘owners’ of Leave and Remain.

So the two mainstream parties with clear Brexit positions are rising in the polls, while the one without a clear position (Labour) is declining steadily.

During the election campaign we will provide updates on the state of support for the parties. We will also be undertaking analyses of what ‘the fundamentals’ – such as party leader ratings and the state of the economy – tell us about the likely election result. Our aim will be to provide an assessment of election forecasts generated using different methods and data. After the experience of 2015, where the polling miss threw many forecasts off, we believe that this approach of triangulation may bolster confidence in expectations about the likely result – and also illuminate how different modelling choices and assumptions matter.

Robert Ford, Will Jennings, Mark Pickup and Christopher Wlezien