WHAT is the collective noun for a group of economists? Options include a gloom, a regression or even an assumption. In January, when PhD students jostle for jobs at the annual meeting of the American Economic Association, a “market” might seem the mot juste. Or perhaps, judging by the tendency of those writing economic papers to follow the latest fashion, a “herd” would be best. This year the hot technique is machine learning, using big data; Imran Rasul, an economics professor at University College London, is expecting to read a pile of papers using this voguish technique.

Economists are prone to methodological crazes. Mr Rasul recalls past paper-piles using the regression-discontinuity technique, which compared similar people either side of a sharp cut-off to gauge a policy’s effect. An analysis by The Economist of the key words in working-paper abstracts published by the National Bureau of Economic Research, a think-tank (see chart), shows tides of enthusiasm for laboratory experiments, randomised control trials (RCTs) and the difference-in-differences approach (ie, comparing trends over time between different groups).
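For readers who like to see the arithmetic, the difference-in-differences comparison can be sketched in a few lines of Python. The groups, the shared trend and the “true” policy effect of 2.0 below are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated outcomes: both groups share a common time trend, and the
# treated group also gets a true policy effect of 2.0 after the policy
# starts (all numbers are made up for illustration).
true_effect = 2.0
treat_pre  = rng.normal(10.0, 1.0, 500)
treat_post = rng.normal(11.0 + true_effect, 1.0, 500)  # trend + effect
ctrl_pre   = rng.normal(8.0, 1.0, 500)
ctrl_post  = rng.normal(9.0, 1.0, 500)                 # trend only

# Difference-in-differences: subtract the control group's change (the
# trend) from the treated group's change, leaving the policy effect.
did = (treat_post.mean() - treat_pre.mean()) - (
       ctrl_post.mean() - ctrl_pre.mean())
```

Because the control group’s change stands in for the shared trend, subtracting it isolates the policy effect — provided the two groups really would have moved in parallel without the policy.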

When a hot new tool arrives on the scene, it should extend the frontiers of economics and pull previously unanswerable questions within reach. What might seem faddish could in fact be economists piling in to help shed light on the discipline’s darkest corners. Some economists, however, argue that new methods also bring new dangers; rather than pushing economics forward, crazes can lead it astray, especially in their infancy.

In 1976 James Heckman developed a simple way of correcting for a specific type of sample-selection bias. Economists had struggled, for example, to estimate the effect of education on women’s wages, because pay could be measured only for women who chose to work, and those women were particularly likely to enjoy high returns. Mr Heckman’s fix, which involved modelling the choice to enter work, took the social sciences by storm. But its seductive simplicity led to its misuse.
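The bias itself is easy to reproduce. The sketch below simulates the selection problem (not Mr Heckman’s estimator): wages depend on education, but only women whose latent pay is high enough are observed working, so a naive regression on workers alone understates the true return. Every coefficient and threshold here is invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

educ = rng.normal(12, 2, n)        # years of education
err  = rng.normal(0, 1, n)         # unobserved wage factors
wage = 1.0 + 0.5 * educ + err      # true return: 0.5 per year of education

# Selection: a woman is observed working only if her wage plus a taste
# shock clears a threshold -- so less-educated workers are an unusually
# high-"err" sample, flattening the observed education-wage slope.
works = (wage + rng.normal(0, 1, n)) > 7.5

def ols_slope(x, y):
    # One-regressor OLS slope: cov(x, y) / var(x).
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

full  = ols_slope(educ, wage)                 # everyone (unobservable in practice)
naive = ols_slope(educ[works], wage[works])   # workers only: biased downwards
```

Mr Heckman’s two-step correction brings an estimate of the selection equation (here, the chance of working) back into the wage regression, removing the bias.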

A paper by Angus Deaton, a Nobel laureate and expert data digger, and Nancy Cartwright, a philosopher at Durham University, argues that RCTs, a current darling of the discipline, enjoy misplaced enthusiasm. RCTs involve randomly assigning a policy to some people and not to others, so that researchers can be sure any differences are caused by the policy. Analysis is then a simple comparison of average outcomes between the two groups. Mr Deaton and Ms Cartwright have a statistical gripe: they complain that researchers are not careful enough when calculating whether two results are significantly different from one another. As a consequence, they suspect that a sizeable portion of published results in development and health economics that rely on RCTs are “unreliable”.
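The gripe is concrete enough to show in arithmetic. With two hypothetical effect estimates (the numbers below are invented), one “significant” and one not, the difference between them can still be statistically indistinguishable from zero — it must be tested directly, with its own standard error:

```python
import math

# Two hypothetical RCT effect estimates and standard errors (invented).
effect_a, se_a = 25.0, 10.0   # z = 2.5: "significant"
effect_b, se_b = 10.0, 10.0   # z = 1.0: "not significant"

z_a = effect_a / se_a
z_b = effect_b / se_b

# The wrong comparison: noting that one result clears the significance
# bar and the other does not. The right comparison: test the difference
# itself, whose standard error (for independent estimates) combines both.
se_diff = math.sqrt(se_a ** 2 + se_b ** 2)
z_diff = (effect_a - effect_b) / se_diff   # about 1.06: not significant
```

Declaring the two trials “different” because one was significant and the other was not is exactly the sloppiness the authors have in mind.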

With time, economists should learn when to use their shiny new tools. But there is a deeper concern: that fashions and fads are distorting economics, by nudging the profession towards asking particular questions and hiding bigger ones from view. Mr Deaton and Ms Cartwright’s fear is that RCTs yield results while appearing to sidestep theory, and that “without knowing why things happen and why people do things, we run the risk of worthless causal (‘fairy story’) theorising, and we have given up on one of the central tasks of economics.” Another fundamental worry is that, because RCTs offer alluringly simple ways of evaluating certain policies, economists lose sight of questions that are not easily testable with them, such as the effects of institutions, monetary policy or social norms.

Elsewhere in economics one methodology has on occasion crowded others out. An excess of consensus among macroeconomists in the run-up to the financial crisis has haunted them. In August, Olivier Blanchard, a heavyweight macroeconomist, wrote a plea to colleagues to be less “imperialistic” about their use of dynamic stochastic general equilibrium models, adding that, for forecasting, their theoretical purity might be “more of a hindrance than a strength”. He issued a reminder that “different model types are needed for different tasks.”

Still crazy after all these years

Machine learning is still new enough for the backlash to be largely restricted to academic eye-rolling. But some familiar themes are emerging in this latest craze. In principle, these new techniques should protect economists from their own sloppy theorising. Before, economists would try to predict things using only a few inputs. With machine learning, the data speak for themselves; the machine learns which inputs generate the most accurate predictions.
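How the machine “learns which inputs” can be sketched with a toy version of the idea: greedy forward selection on simulated data, in which only two of ten candidate inputs truly matter. Real applications use richer methods (lasso penalties, tree ensembles), and every number below is invented:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2_000

# Ten candidate inputs, but by construction only columns 0 and 4 matter.
X = rng.normal(size=(n, 10))
y = 3.0 * X[:, 0] - 2.0 * X[:, 4] + rng.normal(0, 1, n)

train, valid = slice(0, 1_000), slice(1_000, None)

def val_mse(cols):
    # Least-squares fit on the training half, error on the held-out half.
    beta, *_ = np.linalg.lstsq(X[train][:, cols], y[train], rcond=None)
    resid = y[valid] - X[valid][:, cols] @ beta
    return np.mean(resid ** 2)

# Greedy forward selection: let the data pick which inputs to keep,
# adding whichever column most improves held-out prediction, and
# stopping when no addition helps.
chosen, remaining, best = [], list(range(10)), np.inf
while remaining:
    j, mse = min(((j, val_mse(chosen + [j])) for j in remaining),
                 key=lambda kv: kv[1])
    if mse >= best:
        break
    chosen.append(j)
    remaining.remove(j)
    best = mse
```

The procedure recovers the genuinely predictive inputs without being told which they are — which is the appeal, and, as the next paragraph suggests, also the trap.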

This powerful method appears to have improved the accuracy of economists’ predictions. For example, researchers have started to use big data to predict whether a criminal suspect is likely to come back to court for a trial, influencing bail decisions. But, as with RCTs, a powerful algorithm might seduce its users into ignoring underlying causal factors. In her new book, “Weapons of Math Destruction”, Cathy O’Neil, a data scientist, points out that some factors, such as race or coming from a high-crime neighbourhood, might be excellent predictors of recidivism. But they could reflect racism in law enforcement or zero-tolerance “broken windows” policies that lead to high recorded crime rates in poor or minority neighbourhoods. If so, those predictions risk punishing people for factors beyond their control.

Mr Rasul is not very worried by the “little bit of overshooting” that excitement at new methods engenders. Over time, their merits and limitations are better appreciated and they join the toolkit alongside older methods. But the critics of faddishness have one thing right. Good economics is about asking the right questions. Of all the tools at the discipline’s disposal, its practitioners’ scepticism is the most timeless.