Robots controlled by remote supercomputers. Self-driving cars on narrow, winding streets. Board-game players of unimaginable skill. These successes of artificial intelligence (AI) rely on neural networks: algorithms that churn through data using a structure loosely based on the human brain, and calculate functions too complex for humans to write. The use of such networks is a signature of firms in Silicon Valley. But they were largely invented not in California but in Canada.

How did this breakthrough emerge from the land of moose and maple syrup? Canada cannot compete with America in research funding. Instead, it has made a virtue of limited resources, developing an alternative model of innovation based on openness to unorthodox ideas.

The roots of Canada’s contributions to AI reach back decades. In 1982 Fraser Mustard, a doctor, founded the Canadian Institute for Advanced Research (CIFAR). He envisioned it as a “university without walls”, in which researchers could work across disciplines. Funded by the Canadian government, CIFAR encouraged its fellows to share their best ideas rather than guarding them jealously.

Five years later Geoffrey Hinton, an English polymath, joined CIFAR and began work in the then-nascent field of neural networks. After a long hiatus away from the institute, he returned in 2003 to set up a CIFAR group dedicated to neural networks, called Neural Computation and Adaptive Perception (NCAP).

NCAP, now called Learning in Machines and Brains, funds researchers from all over the world, not just Canada. It does not require them to work in the same place. But by providing them with modest financing and a framework for collaboration, it has created a breeding ground for out-of-the-box ideas. Mel Silverman, who ran NCAP for CIFAR, remembers a young member scrawling equations on a whiteboard that were intended to be a mathematical description of consciousness.

Before long, NCAP began to produce a stream of the most cited research in AI, and its members spread to leading firms. In 2006 Mr Hinton published a paper with Ruslan Salakhutdinov, who now leads Apple’s AI efforts, showing that neural networks can compress high-dimensional data down to just a few descriptive variables. Six years later, two of Mr Hinton’s students used neural networks to win an image-recognition contest with a system twice as accurate as the runner-up. Google hired both of them.
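The compression idea can be illustrated with an "autoencoder": a network squeezed through a narrow bottleneck, trained to reconstruct its own input. What follows is a minimal linear sketch, not the architecture from the 2006 paper; the data sizes, learning rate and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 200 four-dimensional points that secretly have
# only two degrees of freedom (they lie on a 2-D plane).
latent = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 4))
data = latent @ mixing  # shape (200, 4)

# A 4 -> 2 -> 4 bottleneck: encode to two variables, then decode.
W_enc = rng.normal(scale=0.1, size=(4, 2))
W_dec = rng.normal(scale=0.1, size=(2, 4))

def loss(X, W_enc, W_dec):
    """Mean squared error between the input and its reconstruction."""
    recon = X @ W_enc @ W_dec
    return np.mean((recon - X) ** 2)

initial = loss(data, W_enc, W_dec)

lr = 0.05
for _ in range(2000):
    code = data @ W_enc        # compress each point to 2 variables
    recon = code @ W_dec       # reconstruct the original 4 values
    err = recon - data
    # Gradients of the mean squared error w.r.t. each weight matrix.
    grad_dec = code.T @ err / len(data)
    grad_enc = data.T @ (err @ W_dec.T) / len(data)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

final = loss(data, W_enc, W_dec)
```

Because the points genuinely occupy a two-dimensional plane, the reconstruction error falls sharply as training proceeds: two variables suffice to describe four. The 2006 work showed that deep, non-linear versions of this idea could do the same for far messier data, such as images of handwritten digits.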

Canada’s open-information strategy makes it hard for the government to measure the return on its investment. Nonetheless, the soft benefits are clear. In June a new research lab called Element AI raised $102m. In recent months Google has set up an office in Toronto, and Facebook and Samsung have opened AI labs in Montreal.

CIFAR’s model has worked for topics other than AI, though not to the same extent. One project is exploring how interactions between genes and the environment affect human development; another focuses on the network of bacteria in the human gut, which is important for health. Neither has yet yielded an industry-changing technology like neural networks, but both have that potential. The coming years will show whether Canada got lucky with NCAP, or whether its unique approach to innovation will continue to bear fruit.

Correction (November 16th 2017): The original version of this story suggested that Sebastian Thrun and Jeff Hawkins were members of the Neural Computation and Adaptive Perception group, which studied neural networks. That was incorrect.