When you upload a photo of one of your friends to Facebook, you set into motion a complex behind-the-scenes process. An algorithm whirs away, analyzing the pixels in the photo until it spits out your friend’s name. This same cutting-edge technique enables self-driving cars to distinguish pedestrians and other vehicles from the scenery around them.

Can this technology also be used to tell a muon from an electron? Many physicists believe so. Researchers in the field are beginning to adapt it to analyze particle physics data.

Proponents hope that using deep learning will save experiments time, money and manpower, freeing physicists to do other, less tedious work. Others hope it will improve the experiments' performance, making them better able to identify particles and analyze data than any algorithm used before. And while physicists don't expect deep learning to be a cure-all, some think it could be key to warding off an impending data-processing crisis.

Neural networks

Up until now, computer scientists have often coded algorithms by hand, a task that requires countless hours of work with complex computer languages. “We still do great science,” says Gabe Perdue, a scientist at Fermi National Accelerator Laboratory. “But I think we could do better science.”

Deep learning, on the other hand, requires a different kind of human input.

One way to conduct deep learning is to use a convolutional neural network, or CNN. CNNs are modeled after human visual perception. Humans process images using a network of neurons in the brain; CNNs process images through layers of interconnected units called nodes. People train CNNs by feeding them pre-processed, labeled images. Using these inputs, an algorithm continuously tweaks the weight it places on each node and learns to identify patterns and points of interest. As the algorithm refines these weights, it becomes more and more accurate, often outperforming humans.
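The training loop described above can be sketched in miniature. The following toy example (an illustration only, not any experiment's actual code) trains a single "node": each labeled example nudges the weights toward the correct answer, which is the same weight-tweaking idea a full CNN applies across many layers.

```python
import math

def predict(weights, pixels):
    # Weighted sum of inputs, squashed to a 0..1 score (logistic function).
    z = sum(w * p for w, p in zip(weights, pixels))
    return 1.0 / (1.0 + math.exp(-z))

def train(examples, steps=1000, lr=0.5):
    # Start with all weights at zero, then repeatedly tweak them.
    weights = [0.0] * len(examples[0][0])
    for _ in range(steps):
        for pixels, label in examples:
            err = label - predict(weights, pixels)  # how far off the guess was
            for i, p in enumerate(pixels):
                weights[i] += lr * err * p          # nudge each weight
    return weights

# Toy "images": bright left half is class 1, bright right half is class 0.
examples = [([1, 1, 0, 0], 1), ([0, 0, 1, 1], 0),
            ([1, 0.8, 0.1, 0], 1), ([0.1, 0, 0.9, 1], 0)]
w = train(examples)
print(predict(w, [1, 0.9, 0, 0.1]) > 0.5)  # True: scored as class 1
```

After training, the weights for the left-hand pixels end up positive and those for the right-hand pixels negative, so a new left-bright image scores above 0.5.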

Convolutional neural networks streamline data processing by tying multiple weights together: the same small set of weights is reused at every position in an image, so far fewer elements of the algorithm have to be adjusted.

CNNs have been around since the late ’90s. But in recent years, breakthroughs have led to more affordable hardware for processing graphics, bigger data sets for training and innovations in the design of the CNNs themselves. As a result, more and more researchers are starting to use them.

The development of CNNs has led to advances in speech recognition and translation, as well as in other tasks traditionally completed by humans. A London-based company owned by Google used a CNN to create AlphaGo, a computer program that in March beat the second-ranked international player of Go, a strategy board game far more complex than chess.

CNNs have made it much more feasible to handle amounts of image-based data that were previously prohibitively large—the kind of amounts often seen in high-energy physics.

Reaching the field of physics

CNNs became practical around the year 2006 with the emergence of big data and graphics processing units, which have the necessary computing power to process large amounts of information. “There was a big jump in accuracy, and people have been innovating like wild on top of that ever since,” Perdue says.

Around a year ago, researchers at various high-energy experiments began to consider the possibility of applying CNNs to their experiments. “We’ve turned a physics problem into, ‘Can we tell a car from a bicycle?’” says SLAC National Accelerator Laboratory researcher Michael Kagan. “We’re just figuring out how to recast problems in the right way.”

For the most part, CNNs will be used for particle identification, particle classification and particle-track reconstruction. A couple of experiments are already using CNNs to analyze particle interactions, with high levels of accuracy. Researchers at the NOvA neutrino experiment, for example, have applied a CNN to their data.

“This thing was really designed for identifying pictures of dogs and cats and people, but it’s also pretty good at identifying these physics events,” says Fermilab scientist Alex Himmel. “The performance was very good—equivalent to 30 percent more data in our detector.”

Scientists on experiments at the Large Hadron Collider hope to use deep learning to make their experiments more autonomous, says CERN physicist Maurizio Pierini. “We’re trying to replace humans on a few tasks. It’s much more costly to have a person watching things than a computer.”

CNNs promise to be useful outside of detector physics as well. On the astrophysics side, some scientists are working on developing CNNs that can discover new gravitational lenses, massive celestial objects such as galaxy clusters that can distort light from distant galaxies behind them. The process of scanning the telescope data for signs of lenses is highly time-consuming, and normal pattern-recognizing programs have a hard time distinguishing their features.

“It’s fair to say we’ve only begun to scratch the surface when it comes to using these tools,” says Alex Radovic, a postdoctoral fellow at The College of William & Mary who works on the NOvA experiment at Fermilab.