At this stage in the coronavirus crisis, the government seems to have made error after error. The UK was slow to enforce a lockdown that seemed inevitable, shortages of personal protective equipment contributed to unnecessary deaths, and it seems doubtful the government will meet its self-imposed target of 100,000 tests per day by the end of this week. While other European leaders have laid out roadmaps for how they plan to lift the lockdown, explaining the science informing their respective approaches, the UK’s own exit strategy remains unclear. Until the Guardian revealed the identities of 23 members of the Scientific Advisory Group for Emergencies (Sage), the group’s membership was shrouded in secrecy.

Yet at every turn, the government has told us that it is “following the science”. Its strategy, we are told, is informed by the “best science available”. Though scientific evidence can be a sound justification for government action (or inaction), the relationship between science, politics and society is far more complex than the government would have us believe.

To begin with, there is no such thing as the “best science available”. Scientists regularly disagree about different issues, from theoretical approaches to methodologies and findings, and decisions about what kind of scientific advice is taken into account are highly political. The individuals, disciplines and institutions that are invited to the table reflect the distribution of research funds, prestige and influence, as well as the values and objectives of politicians and policymakers. When it came to austerity, for example, the former coalition government ignored the warnings of many macroeconomists in favour of evidence that supported its worldview. If there is no “magic money tree”, there is certainly no magic “best science” tree either.

The purpose of scientific advisory committees such as Sage is to distil existing scientific research so it can inform policy. But the remit of such advice is limited by the questions that politicians ask in the first place. In emergency situations, these questions sound less like “What is the best science on [X]?” and more like “What kind of intervention is likely to prevent [Y]?” What policymakers choose to prioritise at these moments is a matter of political judgment. Is it the lives of the elderly and the ill? Is it the economy? Or is it political approval ratings? These decisions matter. It also matters what questions politicians don’t ask, such as whether coronavirus will disproportionately affect people from black and ethnic minority communities, or whether the effects of lockdown will be worse for women.

Politicians tend to favour the kind of science that aligns with their existing preferences. In the worst case, this can lead to cherrypicking data, pejoratively called “policy-based evidence”. But it needn’t be that extreme. For instance, studies suggesting a very high transmission rate for Covid-19 have been coming out of China since January, and Neil Ferguson, whose team was behind the study cited as precipitating Britain’s change of course in managing the pandemic, first sent his report to a Cobra meeting on 24 January. In February, studies suggesting that a substantial proportion of Covid-19 cases may be asymptomatic appeared in scientific journals. Evidence supporting general social distancing was already there. It took a political change of direction for this kind of data to be presented as “the science”.

This tells us something important about the social nature of scientific knowledge. Scientific models are estimates, not oracles. Scientists can tell politicians the conditions under which their models are likely to work, but they are not responsible for creating those conditions. Blaming epidemiologists for the consequences of the government’s Covid-19 strategy is like blaming climate scientists for not preventing the climate crisis. Scientists can provide evidence, but acting on that evidence requires political will.

When it comes to policymaking, economic and political considerations tend to take precedence. Britain’s delay in enforcing a lockdown was at least in part due to the desire to postpone, if not avoid, an economic recession. The government’s decision to keep schools open was motivated by its desire to allow people to keep working. The refusal to join the EU PPE procurement scheme was probably intended to favour local suppliers, as well as to avoid being seen as not “delivering” Brexit.

The one kind of “science” whose role in the government’s response isn’t clear is the science of surveying and shaping public opinion. The evidence list of SPI-B, the Sage subcommittee on behavioural and social interventions, includes 16 polls and surveys conducted between January and March, tracking risk awareness and perception, and public approval of different kinds of governmental intervention. Of course, it is possible to argue that public approval is necessary for measures to be effective: if people disagree with certain measures they are more likely to evade them. But this ignores one important aspect of the relationship between science, politics and public opinion: people’s opinions about science are shaped by how facts are presented in official guidelines and in the media.

In this sense, public health advice – which, throughout most of March, focused on handwashing and the isolation of symptomatic cases – might have led to a self-fulfilling prophecy: if people believed the official guidelines, it is not surprising that they were reluctant to support a stricter lockdown and social distancing. It was only after a number of independent experts, including the editor of the Lancet, Richard Horton, began openly questioning the government’s strategy that public opinion shifted overwhelmingly in favour of a lockdown.

As long as both the results of these surveys and the Sage and Cobra minutes remain confidential, there is no way to tell exactly how the perception of public approval for different kinds of measures shaped the government’s strategy. But focusing attention on only one element in this chain – “the science” – sidesteps questions of political responsibility. How science is turned into policy depends on political and economic calculations, as well as on the moral and ideological commitments of politicians, political parties and policy advisers. It’s rarely, if ever, just about “the science”.

• Jana Bacevic is a sociologist at the University of Cambridge

