Gravitational lenses have long been one of astronomy’s white whales, perplexing those who have devoted themselves to finding and studying them.

But by applying deep learning and computer vision to the abundant data generated by today’s powerful telescopes, scientists are on the verge of being able to use hundreds of thousands of gravitational lenses to expand our understanding of the universe.

Gravitational lenses occur when a galaxy, or a cluster of galaxies, sits directly in front of a more distant galaxy, and the gravity of the foreground galaxy bends the light arriving from the one behind it. Rather than simply blocking the view, this effectively turns the foreground galaxy into a sort of magnifying glass for observing the background one.

By first conclusively identifying gravitational lenses (which has proven to be a huge challenge) and then analyzing the telescope data, scientists can not only better observe those more distant galaxies but also gain insight into the nature of dark matter, an unknown form of matter that seems to permeate our universe.

“There is lots of science to be learned from gravitational lenses,” said Yashar Hezaveh, a NASA Hubble postdoctoral fellow at Stanford University’s Kavli Institute for Particle Astrophysics and Cosmology. “We can use the data to look into the distribution of dark matter, and the formation of stars and galaxies.”

Delving Into Deep Learning

Until recently, scientists used large and sophisticated computer codes to analyze images. This required very large computations on supercomputing clusters and a significant amount of human intervention. But when Hezaveh and his team of researchers decided to apply computer vision and neural networks, everything changed.

“We had no expectations of how awesome it was going to be, or if it was going to work at all,” said Laurence Perreault Levasseur, a postdoctoral fellow at Stanford University and a coauthor of a paper on the topic.

Another way to think about gravitational lenses is as funhouse mirrors, where the challenge is to remove the effect of the mirror’s distortions and recover the true image of the object in front of it. Traditional methods compare the observation against a large dataset of simulated images of that same object viewed in different distorted mirrors to find which one is most similar to the data.

But neural networks can directly process the images and find the answers without the need for comparison against many simulations. This can, in principle, speed up the calculations. But training a deep learning model that can understand how the various undulations affect the behavior of matter, not to mention our view of it, also requires enormous computing power.
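The contrast between the two approaches can be sketched in a toy example. The one-dimensional forward model, parameter, and centroid “estimator” below are all invented stand-ins, not the team’s actual pipeline; the point is only to show a search over many simulated templates versus a single direct pass from image to answer.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n=64):
    # Toy "lensed image": a 1-D Gaussian arc whose position stands in
    # for a lens parameter (a real pipeline would ray-trace a 2-D image).
    x = np.linspace(-1, 1, n)
    return np.exp(-((x - theta) ** 2) / 0.02)

# Observation: an image produced by an unknown parameter, plus noise.
true_theta = 0.37
obs = simulate(true_theta) + 0.01 * rng.normal(size=64)

# Traditional approach: compare the observation against a large bank of
# simulations and keep the parameter whose template fits best.
grid = np.linspace(-1, 1, 2001)
chi2 = [np.sum((obs - simulate(t)) ** 2) for t in grid]
best_match = grid[int(np.argmin(chi2))]

# Direct approach (stand-in for a trained network): map the image straight
# to the parameter in one pass. Here the intensity-weighted centroid
# happens to invert this toy forward model with no search at all.
x = np.linspace(-1, 1, 64)
direct = float(np.sum(x * obs) / np.sum(obs))
```

Both routes recover a value close to `true_theta`, but the direct pass touches the data once, while the template search pays the cost of every simulation in the bank.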

Once Hezaveh and his team adopted GPUs to analyze the data, they had the speed and accuracy needed to unlock new knowledge of the universe. Using Stanford’s Sherlock high performance computing cluster, which runs on a combination of NVIDIA Tesla and TITAN X GPUs, the team was able to train its models up to 100x faster than on CPUs.

The resulting understanding of gravitational lenses is expected to provide a lot of fodder for those trying to understand the universe better.

“A lot of scientific questions can be addressed with this tool,” said Perreault Levasseur.

Wanted: Gravitational Lenses

Of course, to analyze data on gravitational lenses, you first have to find them, and that’s where complementary research underway by scientists at three universities in Europe comes into play.

Researchers at the Universities of Groningen, Naples and Bonn have been using deep learning methods to identify new lenses as part of the Kilo-Degree Survey (KiDS), an astronomical survey intended to better understand dark matter and the distribution of mass in the universe.

Carlo Enrico Petrillo, coauthor of a paper detailing the deep learning effort, said as many as 2,500 gravitational lenses could be uncovered using AI in conjunction with KiDS, even though the survey is only observing a small sliver (about 4 percent) of the sky.

But there was one significant challenge to making this happen: the lack of the large labeled training dataset that deep learning applications typically require. Petrillo said his team countered this by simulating the arcs and rings that surround gravitational lenses and incorporating them into images of real galaxies.

“In this way we could simulate gravitational lenses with all the specific characteristics, such as resolution, wavelength and noise, of the images coming from the surveys,” said Petrillo.
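That augmentation step can be mocked up roughly as follows. The annulus profile, amplitudes, and synthetic cutout here are invented for illustration, but they show the idea: paint a simulated arc or ring onto a real survey cutout, so the composite inherits the survey’s resolution and noise.

```python
import numpy as np

rng = np.random.default_rng(1)

def arc_ring(n=32, radius=9.0, width=1.5):
    # Simulated Einstein-ring brightness profile: a thin annulus.
    yy, xx = np.mgrid[:n, :n] - n // 2
    r = np.hypot(xx, yy)
    return np.exp(-((r - radius) ** 2) / (2 * width ** 2))

def make_training_pair(galaxy_cutout):
    # Same real cutout twice: once with a simulated arc added (label:
    # lens) and once untouched (label: non-lens).
    lens = galaxy_cutout + 0.8 * arc_ring(galaxy_cutout.shape[0])
    return lens, galaxy_cutout

# Stand-in for a real survey cutout: a central blob plus background noise.
n = 32
yy, xx = np.mgrid[:n, :n] - n // 2
cutout = np.exp(-(xx**2 + yy**2) / 18.0) + 0.05 * rng.normal(size=(n, n))

lens_img, plain_img = make_training_pair(cutout)
```

Because the base image is a genuine observation, the simulated positives carry the same instrumental signatures the classifier will see at test time.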

In other words, the team treated the problem as one of binary classification: galaxies surrounded by arcs and rings like those in the simulations are labeled as lenses, and those without them as non-lenses. Once trained on the simulated examples, the network can score real survey images and narrow down the pool of candidates. The group’s paper notes this method initially enabled them to whittle 761 candidates down to a list of 56 suspected gravitational lenses.
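The winnowing step itself amounts to thresholding the network’s per-image scores. The sketch below draws random stand-in scores (no real classifier is trained here), so the shortlist size will not match the paper’s 761-to-56 reduction, but the mechanics are the same.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for classifier output: one probability-like score per candidate
# cutout. A real run would use the trained network's predictions instead.
n_candidates = 761
scores = rng.random(n_candidates)

# Keep only the candidates the classifier is confident about.
threshold = 0.9
shortlist = np.flatnonzero(scores > threshold)
```

Raising the threshold trades completeness for purity: a shorter list means fewer false positives for astronomers to inspect by eye, at the risk of dropping real lenses.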

NVIDIA GPUs helped make this possible by slashing the time it takes to classify a batch of images. Doing so on a CPU required 25 seconds per batch, while a GeForce GTX 1080 GPU delivered roughly a 50x speedup. (The paper details results on an older-generation GeForce GPU, but Petrillo recently upgraded to the newer one.)

“Using the CPU would have made my job hell,” he said.

Data Deluge Coming

As innovations in telescope and deep learning technology continue, the amount of data on gravitational lenses figures to increase substantially. For instance, Petrillo said the European Space Agency’s Euclid telescope is expected to produce tens of petabytes of data, while the Large Synoptic Survey Telescope in Chile will generate 30 terabytes of data each night.

That means lots of data to crunch, many gravitational lenses to be discovered and new space frontiers to be grasped — so long as scientists can keep up.

“Having a lot of lenses means building an accurate picture of the formation and evolution of galaxies, having insights on the nature of dark matter and on the structure of the space-time continuum itself,” said Petrillo. “We need efficient and fast algorithms to analyze all this data and, surely, machine learning will be common business among astronomers.”