In 1964, an American computer scientist named John McCarthy set up a research centre at California’s Stanford University to explore an exciting new discipline: artificial intelligence.

McCarthy had helped coin the term several years earlier, and interest in the field was growing fast. By then, the first computer programs that could beat humans at chess had been developed, and thanks to plentiful government grants at the height of the Cold War, AI researchers were making rapid progress in other areas such as algebra and language translation.

When he set up his laboratory, McCarthy told the paymasters funding it that a fully intelligent machine could be built within a decade. Things did not pan out that way. Nine years after McCarthy's promise, and after millions more had been ploughed into research around the world, the UK government asked the British mathematician Sir James Lighthill to assess whether it had all been worth it.

Lighthill’s conclusion, published in 1973, was damning. “In no part of the field have the discoveries made so far produced the major impact that was then promised,” his report said. “Most workers in AI research and in related fields confess to a pronounced feeling of disappointment.”

Academics criticised Lighthill for his scepticism, but the report triggered a collapse in government funding in the UK and elsewhere. It was seen as the catalyst for what became known as the first "AI winter", a period of disillusionment and funding shortages in the field.