Next, at another lab down the hall, lengths of tape containing several brain slices each are mounted on silicon wafers and placed inside what looks like a large industrial refrigerator. The device is an electron microscope: it uses 61 electron beams to scan 61 patches of brain tissue simultaneously at a resolution of four nanometers.

Each wafer takes about 26 hours to scan. Monitors next to the microscope show the resulting images as they build up in awe-inspiring detail—cell membranes, mitochondria, neurotransmitter-filled vesicles crowding at the synapses. It’s like zooming in on a fractal: the closer you look, the more complexity you see.

Slicing is hardly the end of the story. Even as the scans come pouring out of the microscope—“You’re sort of making a movie where each slice is deeper,” says Lichtman—they are forwarded to a team led by Harvard computer scientist Hanspeter Pfister. “Our role is to take the images and extract as much information as we can,” says Pfister.

That means reconstructing all those three-dimensional neurons—with all their organelles, synapses, and other features—from a stack of 2-D slices. Humans could do it with paper and pencil, but that would be hopelessly slow, says Pfister. So he and his team have trained artificial neural networks to trace the real neurons through the image stack. “They perform a lot better than all the other methods we’ve used,” he says.

Each neuron, no matter its size, puts out a forest of tendrils known as dendrites, and each has another long, thin fiber called an axon for transmitting nerve impulses over long distances—completely across the brain, in extreme cases, or even all the way down the spinal cord. But by mapping a cubic millimeter as MICrONS is doing, researchers can follow most of these fibers from beginning to end and thus see a complete neural circuit. “I think we’ll discover things,” Pfister says. “Probably structures we never suspected, and completely new insights into the wiring.”

The power of anticipation

Among the questions the MICrONS teams hope to begin answering: What are the brain’s algorithms? How do all those neural circuits actually work? And in particular, what is all that feedback doing?

Many of today’s AI applications don’t use feedback. Electronic signals in most neural networks cascade from one layer of nodes to the next, but generally not backward. (Don’t be thrown by the term “backpropagation,” which is a way to train neural networks.) That’s not a hard-and-fast rule: “recurrent” neural networks do have connections that go backward, which helps them deal with inputs that change with time. But none of them use feedback on anything like the brain’s scale. In one well-studied part of the visual cortex, says Tai Sing Lee at Carnegie Mellon, “only 5 to 10 percent of the synapses are listening to input from the eyes.” The rest are listening to feedback from higher levels in the brain.
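The distinction can be sketched in a few lines of code. This toy example—every name and weight here is illustrative, not taken from any real model—contrasts a feedforward unit, which treats each input in isolation, with a recurrent one that also listens to feedback from its own previous output:

```python
def feedforward(x, w):
    # Signal flows one way: input -> output, with no memory of the past.
    return max(0.0, w * x)  # simple ReLU-style activation

def recurrent(xs, w_in, w_back):
    # Each step also receives feedback from the previous output,
    # so the unit can respond to inputs that change over time.
    h = 0.0
    outputs = []
    for x in xs:
        h = max(0.0, w_in * x + w_back * h)
        outputs.append(h)
    return outputs

# A single input pulse keeps echoing through the feedback connection:
echo = recurrent([1.0, 0.0, 0.0], w_in=1.0, w_back=0.5)  # -> [1.0, 0.5, 0.25]
```

Even in this cartoon version, the recurrent unit carries a trace of the past forward in time—something a purely feedforward network cannot do. What the brain does with its vastly denser feedback is the open question.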


There are two broad theories about what the feedback is for, says Cox, and “one is the notion that the brain is constantly trying to predict its own inputs.” While the sensory cortex is processing this frame of the movie, so to speak, the higher levels of the brain are trying to anticipate the next frame, and passing their best guesses back down through the feedback fibers.

This is the only way the brain can deal with a fast-moving environment. “Neurons are really slow,” Cox says. “It can take up to 170 to 200 milliseconds to go from light hitting the retina through all the stages of processing up to the level of conscious perception. In that time, Serena Williams’s tennis serve travels nine meters.” So anyone who manages to return that serve must be swinging her racket on the basis of prediction.
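The arithmetic behind that example is easy to check. Assuming a serve of about 180 km/h—an ordinary speed for a professional serve, and our assumption rather than a figure from the text—and the mid-range 180 ms latency Cox cites:

```python
serve_speed_kmh = 180.0            # assumed: a typical professional serve
speed_m_per_s = serve_speed_kmh / 3.6  # convert km/h to m/s -> 50 m/s
latency_s = 0.18                   # ~180 ms, mid-range of the 170-200 ms cited
distance_m = speed_m_per_s * latency_s  # ground the ball covers before conscious perception
print(distance_m)                  # -> 9.0
```

Nine meters is roughly a third of the distance from baseline to baseline—gone before the returner has consciously seen the ball leave the racket.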

And if you’re constantly trying to predict the future, Cox says, “then when the real future arrives, you can adjust to make your next prediction better.” That meshes well with the second major theory being explored: that the brain’s feedback connections are there to guide learning. Indeed, computer simulations show that a struggle for improvement forces any system to build better and better models of the world. For example, Cox says, “you have to figure out how a face will appear if it turns.” And that, he says, may turn out to be a critical piece of the one-shot-learning puzzle.
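That predict-compare-adjust loop can be sketched minimally—real predictive-coding models are far richer than this, and everything here is illustrative. A predictor guesses the next value in a stream, and each prediction error nudges its internal model so later guesses improve:

```python
def predictive_loop(signal, lr=0.3):
    # Internal "model": a single running estimate of the next input.
    estimate = 0.0
    errors = []
    for actual in signal:
        error = actual - estimate    # compare the prediction with reality
        errors.append(abs(error))
        estimate += lr * error       # feedback: adjust to predict better next time
    return errors

# On a steady signal, prediction error shrinks toward zero as the model learns.
errs = predictive_loop([1.0] * 10)
```

The first guess is maximally wrong; by the tenth step the error has nearly vanished. The conjecture the MICrONS teams are probing is whether the cortex’s feedback wiring implements something like this loop at enormous scale.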

“When my daughter first saw a dog,” says Cox, “she didn’t have to learn about how shadows work, or how light bounces off surfaces.” She had already built up a rich reservoir of experience about such things, just from living in the world. “So when she got to something like ‘That’s a dog,’” he says, “she could add that information to a huge body of knowledge.”

If these ideas about the brain’s feedback are correct, they could show up in MICrONS’s detailed map of a brain’s form and function. The map could demonstrate what tricks the neural circuitry uses to implement prediction and learning. Eventually, new AI applications could mimic that process.

Even then, however, we will remain far from answering all the questions about the brain. Knowing neural circuitry won’t teach us everything. There are forms of cell-to-cell communication that don’t go through the synapses, including some performed by hormones and neurotransmitters floating in the spaces between the neurons. There is also the issue of scale. As big a leap as MICrONS may be, it is still just looking at a tiny piece of cortex for clues about what’s relevant to computation. And the cortex is just the thin outer layer of the brain. Critical command-and-control functions are also carried out by deep-brain structures such as the thalamus and the basal ganglia.

The good news is that MICrONS is already paving the way for future projects that map larger sections of the brain.

Much of the $100 million, Vogelstein says, is being spent on data collection technologies that won’t have to be invented again. At the same time, MICrONS teams are developing faster scanning techniques, including one that eliminates the need to slice tissue. Teams at Harvard, MIT, and the Cold Spring Harbor Laboratory have devised a way to uniquely label each neuron with a “bar-coding” scheme and then view the cells in great detail by saturating them with a special gel that very gently inflates them to dozens or hundreds of times their normal size.

“So the first cubic millimeter will be hard to collect,” Vogelstein says, “but the next will be much easier.”

M. Mitchell Waldrop is a freelance writer in Washington, D.C. He is the author of Complexity and The Dream Machine and was formerly an editor at Nature.