In a 1981 lecture, the famed physicist Richard Feynman wondered whether a computer could ever simulate the entire universe. The difficulty with this task is that, on the smallest scales, the universe operates under strange rules: Particles can be here and there at the same time; objects separated by immense distances can influence each other instantaneously; the simple act of observing can change the outcome of reality.

“Nature isn’t classical, dammit,” Feynman told his audience, “and if you want to make a simulation of nature, you’d better make it quantum mechanical.”

Feynman was imagining a quantum computer, a computer with bits that acted like the particles of the quantum world. Today, nearly 40 years later, such computers are starting to become a reality, and they pose a unique opportunity for particle physicists.

Quantum computers allow for a more realistic representation of quantum processes. They take advantage of a phenomenon known as superposition, in which a particle such as an electron exists in a probabilistic state spread across multiple locations at once.

Unlike a classical computer bit, which can be either on or off, a quantum bit, or qubit, can be on, off, or a superposition of both, allowing certain computations to explore many possibilities at once rather than one at a time.

This not only speeds up computations; it makes currently impossible ones possible. A problem that could effectively trap a classical computer in a near-endless search, testing possibility after possibility, could be solved dramatically faster by a quantum computer. This processing speed could be key for particle physicists, who wade through enormous amounts of data generated by detectors.

“The systems that we deal with in particle physics are intrinsically quantum mechanical systems,” says Panagiotis Spentzouris, head of Fermilab’s Scientific Computing Division. “Classical computers cannot simulate large entangled quantum systems. You have plenty of problems that we would like to be able to solve accurately without making approximations that we hope we will be able to do on the quantum computer.”
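The superposition described above can be sketched with ordinary linear algebra. This is classical bookkeeping of amplitudes, not an actual quantum computation: a minimal sketch that puts a qubit into an equal superposition using a Hadamard gate, then squares the amplitudes to get the 50/50 measurement odds.

```python
import numpy as np

# A qubit state is a 2-component complex vector of amplitudes for |0> and |1>.
ket0 = np.array([1, 0], dtype=complex)  # the "off" state

# The Hadamard gate turns a definite state into an equal superposition.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ ket0  # equal superposition of "on" and "off"

# Measurement probabilities are the squared magnitudes of the amplitudes.
probs = np.abs(state) ** 2
print(probs)  # probabilities of measuring 0 or 1, each ~0.5
```

On a real quantum computer the superposition is carried by physical hardware; here it is just two complex numbers, which is exactly why classical simulation of many entangled qubits becomes intractable (the vector doubles in length with every added qubit).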

In the first demonstration of this potential, a team at Caltech recently used a type of quantum computer called a quantum annealer to “rediscover” the Higgs boson, the particle that, according to the Standard Model of particle physics, gives mass to other fundamental particles.

Scientists originally discovered the Higgs boson in 2012 using particle detectors at the Large Hadron Collider at CERN, the European particle physics laboratory. They created Higgs bosons by temporarily converting the energy of particle collisions into matter. Those short-lived Higgs bosons quickly decayed, converting their energy into other, more common particles, which the detectors were able to measure.

Scientists identified the mass of the Higgs boson by combining the measured energies and momenta of those less massive particles, the decay products, to reconstruct an invariant mass. But to do so, they needed to pick out which of those particles came from the decay of Higgs bosons and which came from something else. To a detector, a Higgs boson decay can look remarkably similar to other, much more common processes.
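The reconstruction step described above is an invariant-mass calculation: in natural units, m² = E² − |p|² for the summed four-momenta of the candidate decay products. Below is a minimal sketch with made-up numbers, two back-to-back photons whose energies are chosen purely for illustration so that the pair reconstructs to the measured Higgs mass of about 125 GeV.

```python
import math

def invariant_mass(particles):
    """Invariant mass of a set of (E, px, py, pz) four-vectors, in GeV (natural units)."""
    E = sum(p[0] for p in particles)
    px = sum(p[1] for p in particles)
    py = sum(p[2] for p in particles)
    pz = sum(p[3] for p in particles)
    return math.sqrt(E**2 - px**2 - py**2 - pz**2)

# Two back-to-back photons with illustrative, hand-picked energies:
photon1 = (62.5, 62.5, 0.0, 0.0)
photon2 = (62.5, -62.5, 0.0, 0.0)
print(invariant_mass([photon1, photon2]))  # 125.0 (GeV)
```

The hard part at the LHC is not this arithmetic but deciding which measured particles to feed into it, which is where the machine learning described next comes in.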

LHC scientists trained a machine learning algorithm to find the Higgs signal against the decay background—the needle in the haystack. This training process required a huge amount of simulated data.

Physicist Maria Spiropulu, who was on the team that discovered the Higgs the first time around, wanted to see if she could improve the process with quantum computing. The group she leads at Caltech used a quantum computer built by a company called D-Wave to train a similar machine learning algorithm. They found that the quantum computer could train the algorithm with significantly less data than the classical method required. In theory, this would give the algorithm a head start, like giving someone looking for the needle in the haystack expert training in spotting the glint of metal before turning their eyes to the hay.

“The machine cannot learn easily,” Spiropulu says. “It needs huge, huge data. In the quantum annealer, we have a hint that it can learn with small data, and if you learn with small data you can use it as initial conditions later.”
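A quantum annealer like D-Wave’s physically relaxes a network of qubits toward the lowest-energy configuration of a programmed problem, and training tasks like the one above are cast as exactly such an energy minimization. As a rough classical analogy only (simulated annealing on a toy Ising chain with made-up couplings, not the quantum process itself):

```python
import math
import random

random.seed(0)

# Toy Ising chain: each coupling J[i] rewards spins i and i+1 for agreeing.
J = [1.0, 1.0, 1.0, 1.0]
spins = [random.choice([-1, 1]) for _ in range(5)]

def energy(s):
    # E = -sum_i J[i] * s[i] * s[i+1]; the minimum, -4.0, has all spins aligned.
    return -sum(j * s[i] * s[i + 1] for i, j in enumerate(J))

# Simulated annealing: always accept downhill flips, sometimes accept uphill
# ones, with that chance shrinking as the "temperature" T is lowered.
T = 2.0
for _ in range(2000):
    i = random.randrange(len(spins))
    trial = spins[:]
    trial[i] *= -1
    dE = energy(trial) - energy(spins)
    if dE <= 0 or random.random() < math.exp(-dE / T):
        spins = trial
    T = max(0.01, T * 0.995)

print(spins, energy(spins))  # a low-energy spin configuration
```

A quantum annealer explores the same kind of energy landscape with quantum tunneling rather than thermal hops, but the goal, settling into the lowest-energy valley, is the same.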

Some scientists say it may take a decade or more before quantum computers are used routinely in particle physics, but in the meantime, researchers will continue developing the technology and exploring how it can enhance their research.

Quantum sensors

Quantum mechanics is also disrupting another technology used in particle physics: the sensor, the part of a particle detector that picks up the energy from a particle interaction.

In the quantum world, energy is discrete. The noun quantum means “a specific amount” and is used in physics to mean “the smallest quantity of energy.” Classical sensors generally do not make precise enough measurements to pick up individual quanta of energy, but a new type of quantum sensor can.

“A quantum sensor is one that is able to sense these individual packets of energy as they arrive,” says Aaron Chou, a scientist at Fermilab. “A non-quantum sensor would not be able to resolve the individual arrivals of each of these little packets of energy, but would instead measure a total flow of the stuff.”
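The “packets of energy” Chou describes are quanta such as photons, each carrying an energy E = hν. A quick back-of-the-envelope sketch shows just how small a single microwave photon’s energy is (the 10 GHz frequency here is illustrative, not a figure from the article):

```python
# Energy of one photon: E = h * f
h = 6.62607015e-34          # Planck constant in J*s (exact SI value)
f = 10e9                    # illustrative 10 GHz microwave frequency

E_joules = h * f
E_eV = E_joules / 1.602176634e-19   # convert joules to electronvolts

print(E_joules)  # ~6.6e-24 J per photon
print(E_eV)      # ~4.1e-5 eV: a quantum sensor must register energy this small
```

A classical sensor averages over enormous numbers of such packets; resolving them one at a time is what makes these detectors sensitive enough for the searches described next.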

Chou is taking advantage of these quantum sensors to probe the nature of dark matter. Using technology originally developed for quantum computers, Chou and his team are building ultrasensitive detectors for a type of theorized dark matter particle known as an axion.

“We’re taking one of the qubit designs that was previously created for quantum computing and we’re trying to use those to sense the presence of photons that came from the dark matter,” Chou says.

For Spiropulu, these applications of quantum computers form an elegant feedback loop in the progression of technology and science: Basic research in physics led to the first transistors, which fed the computer science revolution, which is now on the verge of transforming basic research in physics.

“You want to disrupt computing, which was initially a physics advance,” Spiropulu says. “Now we are using physics configurations and physics systems themselves to assist computer science to solve any problem, including physics problems.”