To most New Yorkers, navigating the subway is second nature; the city's tunnels carry 4.3 million riders every day. But it took me some time to learn the difference between uptown and downtown, let alone to figure out how to make it past the turnstile.

How do we turn ourselves into creatures of habit, able to commit the stops of the 7 or the A train to memory and to navigate a city from beneath its surface, without landmarks as a guide, taking a matter of minutes for what could be an hours-long ordeal for out-of-towners? Sure, there are always plenty of maps on the wall, but sometimes those make things even more complicated. If you still can't pick the right side of the track on the first try, modern science may have help on the way.

Neuroscientists at DeepMind, a London-based, Google-owned lab that studies artificial intelligence, joined forces with researchers at Oxford University and University College London to investigate precisely how the human brain adapts itself to navigating an elaborate network of underground trains. Unsurprisingly, the team took on the London Underground, one of the largest subway networks in the world, with its 270 stations and 250 miles of track. The study, published in the journal Neuron, recruited 22 test subjects with varying degrees of familiarity with the Underground and had them plan journeys through a virtual subway system.

The study’s participants were given a starting point and a destination and asked to plan a successful route while the researchers scanned their brains in an MRI machine. The scans revealed activity in the brain regions we use for making plans and deciding between choices. The team quickly realized that a task like finding the most direct route to Baker Street is actually broken down by the brain into a number of subtasks, handled by several different regions. When a traveler had to change lines at different stops, portions of the medial prefrontal cortex (used in retrieving long-term memories) became active, as did the premotor cortex (key to executing tasks, both real and imagined).

The experiment functioned like a video game, with each subway stop as another step. The stops were connected by intersecting lines that the brain treated as a hierarchy. Reviewing the data, the researchers found that, on average, brain activity increased whenever participants had to change lines to reach their destination. A greater number of stops along a single line, by comparison, made less of a difference.

This means the brain immediately recognized that finding the right train, one moving along the right line, was more crucial than determining the stop itself; it prioritized these as separate tasks. Participants also took longer to plan journeys that involved several line changes, just as the average commuter would probably pay closer attention to the listed stops when needing to transfer off the 1 train at 171st Street than when merely taking the Q train home to Brighton Beach. That's one reason it's not uncommon to sleep past your stop.
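The line-first, stop-second strategy the scans suggest can be sketched in code. The toy network below is entirely invented — the line names, stops, and the `plan_lines` helper are illustrative assumptions, not the study's actual model. A breadth-first search over whole lines settles "which trains?" (fewest transfers) before any individual stop is considered, mirroring the hierarchy the participants' brains appeared to use.

```python
from collections import deque

# Hypothetical toy network: line name -> ordered stops (all names invented)
LINES = {
    "red":   ["A", "B", "C", "D"],
    "blue":  ["C", "E", "F"],
    "green": ["F", "G", "H"],
}

def lines_serving(stop):
    """All lines that call at a given stop."""
    return {name for name, stops in LINES.items() if stop in stops}

def plan_lines(start, goal):
    """Top level of the hierarchy: breadth-first search over lines,
    so the first answer found uses the fewest line changes."""
    frontier = deque((line, [line]) for line in lines_serving(start))
    seen = set(lines_serving(start))
    while frontier:
        line, path = frontier.popleft()
        if goal in LINES[line]:
            return path                      # sequence of lines to ride
        for stop in LINES[line]:             # any shared stop is a transfer
            for nxt in lines_serving(stop) - seen:
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None                              # goal unreachable

print(plan_lines("A", "H"))  # ['red', 'blue', 'green']
```

Only after this top-level plan exists would a planner (or a commuter) descend to the lower level and count off individual stops along each chosen line — which is why adding stops to a straight run costs so much less effort than adding a transfer.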

Both the hippocampus, central to forming memories and navigating space, and the ventromedial prefrontal cortex, which we use to weigh risks and rewards, became even more active as the test subjects moved closer to their final destination on the map. Planning a route activates memories of past journeys, so it may take a while to get over your difficulty finding the American Museum of Natural History if your last subway trip somehow brought you to Queens.

So why is Google so interested in subways these days? Studies of the human brain like the one conducted by DeepMind could help companies make important advances in artificial intelligence — it’s just another step in making computers seem more human.

At present, artificial intelligence algorithms can estimate an array of potential consequences that a single action may produce. By shedding light on the many decision processes we run through while carrying out a plan, research like this may help them break new barriers, letting us better understand creative thought, in which two unlikely variables are brought together for a new and innovative approach to problem-solving. We once assumed creative thinking was reserved for a few geniuses, but with the advent of storytelling and drawing algorithms, the line between human and machine continues to blur.

“We’re interested in trying to find machine-learning solutions to difficult tasks and real-life problems,” said Jan Balaguer, one of the scientists at DeepMind, who is working on his doctorate at Oxford University. “Quite often it can be useful to draw inspiration from neuroscience.” While a machine may see only a series of steps to take, people tend to work from a series of mentally constructed layers as they develop and carry out a plan, something the authors took note of in their paper.

Other studies have been done on this type of processing, but for Balaguer, few have been as direct about how the brain breaks down bits of information into hierarchies before going about completing a task. “We want to see how the human brain implements things like hierarchical structures in order to design more-clever algorithms,” he added. “In machine learning, having a hierarchical representation for decision-making might be helpful or harmful depending on whether you choose the right hierarchy to implement in the first place.”

DeepMind has already developed an artificial intelligence agent called AlphaGo, which beat the world’s greatest player at the ancient Chinese board game Go in late 2016. But the company has loftier goals for the near future: looking to apply its artificial intelligence breakthroughs to health care, DeepMind announced a partnership with England’s National Health Service in early 2017.

For those of you who have struggled to read subway maps amid heavy foot traffic and rattling cars, help may be on the way. A Massachusetts Institute of Technology study from several years ago looked at exactly what we process when trying to read maps, from representations of local landmarks to the brightly colored lines that trace subway paths. The researchers fed subway maps of both New York City and Boston into a computer model that imitates the brain’s ability to absorb information from a passing glance. The study showed that more abstract maps, showing only colored lines with dots for stops, were easier to process at a glance and could be readily interpreted by peripheral vision, whereas more geographically accurate maps, with streets and landmarks, were much easier to get lost in. Too much detail can not only tangle the information we need but also completely misrepresent what we are looking at: some New York maps make the city appear four times larger than its actual scale, since on them subway stations are represented by disproportionately small dots.

This article was originally published in the Summer 2017 issue of Brain World Magazine.
