When you close your eyes, you don’t assume that the world is no longer there simply because you can’t see it. Similarly, everything we know about the age of the Earth, Sun, and moon suggests that all these celestial bodies were doing their thing long before some mostly hairless monkeys evolved to appreciate them. But what if your observation of the world was actually creating it?

It’s a trippy and counterintuitive idea, but one of the largest participatory physics experiments ever conducted just gave this hypothesis a major boost. Known as the BIG Bell Test, it involved over 100,000 people using their cell phones to contribute data to 12 quantum research institutes around the world. These volunteers, known as Bellsters, played a video game to instruct over 100 scientists on how to perform measurements on entangled particles and superconducting devices.


The experiment was meant to close the “freedom-of-choice” loophole in quantum experiments, which basically amounts to the notion that particles may influence the way researchers choose to measure them. By having these measurements dictated by a diverse group of 100,000 strangers, however, it would be impossible to predict in advance how the measurements would be made. This would, in principle, give researchers insight into whether the world exists independently of our observations or whether our observations shape the world.

The first results from the study were published today in Nature, and suggest that our observation of the world strongly influences it.

EINSTEIN’S SPOOKY ACTION

The idea that the world exists independently of our observation of it is an integral part of Einstein’s theory of local realism, which states that particles have definite values before we measure them and that any influence between them is bound by the speed of light. This last part is called the principle of locality: objects can only be influenced by other objects in their vicinity, and the link between cause and effect cannot propagate faster than the speed of light. If objects could affect one another instantaneously, this would pose a serious problem for Einstein’s theory of relativity, which depends on effects following causes.

Local realism seems intuitively true, but it was a major point of contention between Einstein and his buddy Niels Bohr, a physicist whose work was foundational in quantum mechanics. Contrary to local realism, Bohr suggested that we effectively create, or at least alter, the world by measuring it. On this view, not only do particles lack definite values before we measure them, but it is meaningless even to talk about something like the position of an atom until it is measured.


In a famous paper published in 1935, Einstein and two other physicists, Boris Podolsky and Nathan Rosen, disputed Bohr’s interpretation and argued that it resulted in a paradox where information could be shared instantaneously by two particles. This “spooky action at a distance” would violate relativity insofar as the effect on a distant, entangled particle is not the result of a past cause. Einstein and his colleagues argued that this meant Bohr’s description of reality was insufficient to describe what was really going on. Instead, they chalked the correlations up to “local hidden variables” influencing the entangled particle. In other words, it’s not that the values of a particle don’t exist prior to being measured; it’s that we haven’t yet found all the hidden variables that would allow us to know the values associated with the particle, such as its position.

The Einstein-Bohr debate simmered on for another three decades until a physicist named John Stewart Bell showed that classical physics, even allowing for local hidden variables, could never reproduce all the predictions of quantum mechanics. And he designed a test to demonstrate why this is the case.

The Bell test basically involves generating a pair of entangled particles (usually photons) and sending them to different locations, where one of their properties, such as time of arrival, color, or spin, is measured. If the measurements of the two particles turn out to be correlated more strongly than classical physics allows, it implies one of two things: either measuring one particle instantly affected the property of the other, or the measurement itself resulted in the particle having that property. If the correlations stay within the classical limit, this would support Einstein’s theory of local hidden variables influencing the particles.
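Bell’s insight can be made concrete with the CHSH form of his inequality. The short sketch below is illustrative only, not a model of any specific lab setup: it uses the quantum-mechanical prediction for entangled spins, E(a, b) = -cos(a - b), together with the conventional CHSH detector angles, and shows that the resulting correlation sum exceeds the bound of 2 that any local-hidden-variable theory must respect.

```python
import math

def correlation(a, b):
    # Quantum-mechanical correlation for spin measurements on an
    # entangled (singlet) pair at detector angles a and b, in radians.
    return -math.cos(a - b)

# Conventional CHSH angle choices (an illustrative convention,
# not settings taken from the BIG Bell Test itself).
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

# CHSH combination: S = E(a1,b1) - E(a1,b2) + E(a2,b1) + E(a2,b2).
# Local hidden variables require |S| <= 2; quantum mechanics
# predicts |S| can reach 2*sqrt(2), about 2.828.
S = (correlation(a1, b1) - correlation(a1, b2)
     + correlation(a2, b1) + correlation(a2, b2))

print(abs(S))  # prints 2.828..., violating the classical bound of 2
```

Every real Bell test is, in essence, an attempt to measure this kind of correlation sum and see whether it crosses the classical bound.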


Over the past few decades, however, dozens of Bell tests have been performed and so far all of them support quantum mechanics rather than Einstein’s theory of hidden variables.

Two ICFO researchers play the video game that generated randomness for the quantum experiment. Image: ICFO

THE BIG BELL TEST

Although each of these Bell tests has strengthened the position of the quantum mechanical view of reality, none of them can definitively settle Einstein and Bohr’s disagreement. The reason is that, so far, a perfect Bell test hasn’t been conducted. Instead, each Bell test has been subject to at least one loophole, which allows the result of the test to be interpreted in a way that is still consistent with local realism.

One of the least addressed Bell test loopholes is the so-called “freedom-of-choice” loophole, which suggests that the way a researcher chooses to measure a quantum particle could influence the results of that measurement. According to Morgan Mitchell, the lead researcher for the BIG Bell Test and a professor at the Institute of Photonic Sciences (ICFO) in Spain, in past Bell tests researchers would select some aspect of a quantum particle itself to determine how to measure the entangled particles.

“So they’re trying to test whether these particles have some kind of connection, but at the same time they’re assuming there’s no connection between the particle that decides how to measure and the particle that gets measured,” Mitchell told me on the phone.

In other words, the freedom of choice becomes the ‘local hidden variable’ that explains the results obtained during the test. This would then invalidate the results of the experiment, since it’s kind of like allowing a student to write their own test questions.


Mitchell explained the problem to me by comparing it to a doctor studying the effects of a new medicine. For the trial of the medicine to be accurate there needs to be a control group, so trial participants would be divvied up into two groups: one that will be administered the drug and one that will not receive it. Some of these trial participants may have the disease the drug is meant to cure, while the others are perfectly healthy.

When the doctor has to decide which participants to put in one group, they may end up unconsciously introducing bias into the selection, such as by placing all the sick participants in one group and all the healthy participants into the other. This would really skew the results of the experiment and result in a misleading picture of the drug’s effects. To eliminate this bias, the doctor might divide the groups using a source of randomness, such as flipping a coin or rolling dice.
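The randomization step in Mitchell’s analogy can be sketched in a few lines of Python. The participant names and the sick/healthy flags below are made up purely for illustration; the point is that a random shuffle, not the doctor’s judgment, decides who lands in which group.

```python
import random

# Hypothetical participant roster: (name, is_sick). Every third
# participant is marked sick, just to have a mix in the pool.
participants = [(f"p{i:02d}", i % 3 == 0) for i in range(20)]

random.seed(42)  # fixed seed so this sketch is reproducible
random.shuffle(participants)  # the "coin flip" for every assignment

treatment = participants[:10]  # will receive the drug
control = participants[10:]    # will not receive the drug

# Because assignment is random, sick and healthy participants end up
# spread across both groups on average, removing the doctor's bias.
```

The physicist’s problem is the same shape: the choice of measurement setting has to come from a source the experimenter cannot bias.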

The situation is similar for physicists measuring quantum systems in the sense that they may introduce bias into their measurements by choosing to measure a quantum system one way rather than another. To counter this bias, Mitchell told me, they must also introduce a source of randomness into their measurement selection. Yet unlike the doctor, the physicist can’t simply flip a coin to eliminate the freedom-of-choice loophole since that coin might physically influence the system being measured in ways not realized by the physicist. The source of randomness must be sought elsewhere.

Read More: These Researchers Took Radioactive Material From Chernobyl in a Plane to Get a Secure Source of Randomness for the Zcash Blockchain

“We want to get away entirely from physics to decide how to measure the particles,” Mitchell said. “We have to replace it with something else. The thing we think is least likely to be correlated with these particles is human beings. We don’t think some particle is determining what the person chooses to do.”

This was the idea behind the BIG Bell Test, which had over 100,000 people play a game on their phones in order to introduce randomness into the measurements of quantum systems being performed at 12 labs around the world. In this game, users would try to create random strings of bits (that is, 1s and 0s). A machine learning algorithm would take the player’s input and try to predict which bit the player would enter next, as a way to measure the randomness of the string. (You can try the game for yourself here.)
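The article doesn’t specify the predictor the game used, so the sketch below substitutes a deliberately simple stand-in: an order-3 context model that guesses whichever bit it has most often seen follow the previous three bits. The function name and details are assumptions for illustration, but the idea is the same: the more predictable a player’s stream, the higher the score.

```python
from collections import defaultdict

def predictability(bits, order=3):
    # Fraction of bits a simple order-k context predictor guesses
    # correctly. For each position, predict the bit most often seen
    # after the preceding `order` bits, then update the counts.
    counts = defaultdict(lambda: [0, 0])  # context -> [count of 0s, count of 1s]
    correct = 0
    for i in range(order, len(bits)):
        ctx = tuple(bits[i - order:i])
        guess = 0 if counts[ctx][0] >= counts[ctx][1] else 1
        if guess == bits[i]:
            correct += 1
        counts[ctx][bits[i]] += 1
    return correct / (len(bits) - order)

# A human-like repetitive pattern is easy to predict (score near 1.0),
# while a genuinely patternless stream hovers near 0.5.
print(predictability([0, 1] * 500))
```

A score near 0.5 is what the labs wanted: bits no algorithm, and presumably no particle, could anticipate.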

All of the bits generated by the players—about 90 million bits in total—were then relayed to the quantum labs which used them as inputs to decide how they were going to measure their quantum systems. Since the pattern of these bits couldn’t be guessed in advance, they were an effective source of randomness for the 12 Bell tests, which means the results of these tests weren’t subject to the freedom-of-choice loophole.

At ICFO, Mitchell and his colleagues ran two different experiments whose results were published in Nature. One experiment entangled a single photon with a cloud of millions of atoms, which acted as a “quantum memory.” The cloud of atoms stored the entangled state and later transferred it to a single photon. Each of the photons was then measured by a device whose settings were determined by the input from the people playing the game. In the second experiment, two different colored photons were entangled and measured using “electro-optic modulators” whose settings were determined by the bit strings generated in the game.

In both Bell tests, the results “clearly” contradicted Einstein’s theory of local realism once again. It was the first time a Bell test wasn’t subject to the freedom-of-choice loophole. According to Mitchell, other loopholes still need to be closed, but the size of this test has effectively shut the book on this particular loophole for good.