The paradox of Schrödinger’s cat is a thought experiment dreamed up to explore one of the great mysteries of quantum mechanics: why we don’t see its strange and puzzling behaviour in the macroscopic world.

The paradox is simple to state. It involves a cat, a flask of poison and a source of radiation, all contained within a sealed box. If a monitor in the box detects radioactivity, the flask is shattered, releasing the poison and killing the cat.

The paradox comes about because radioactive decay is a quantum process and so remains in a superposition of states until it is observed. The radioactive atom is both decayed and undecayed at the same time.

But that means the cat must also be in a superposition of alive and dead states until the box is opened and the system is observed. In other words, the cat must be both dead and alive at the same time.

Nobody knows why we don’t observe these kinds of strange superpositions in the macroscopic world. For some reason, quantum mechanics just doesn’t work on that scale. And therein lies the mystery, one of the greatest in science.

But that mystery may now be solved thanks to the extraordinary work of Arkady Bolotin at Ben-Gurion University in Israel. He says the key is to think of Schrödinger’s cat as a problem in computational complexity theory. When he does, the paradox melts away.

First some background. The equation that describes the behaviour of quantum particles is Schrödinger’s equation. It is relatively straightforward to solve for simple systems, such as a single quantum particle in a box, and predicts that these systems exist in a quantum superposition of states.
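To make that concrete, here is a minimal sketch (my illustration, not anything from Bolotin's paper) of what "solving Schrödinger's equation for a simple system" means. For a single two-level particle, the exact time evolution can be written down in closed form, and it carries the particle into a superposition of its two basis states:

```python
import math

# Toy model: a single two-level quantum system, the simplest case for which
# Schroedinger's equation can be solved exactly. With hbar = 1 and the
# Hamiltonian H = [[0, 1], [1, 0]], the exact solution is
#   psi(t) = cos(t) * psi(0) - i * sin(t) * X psi(0),
# where X swaps the two amplitudes.

def evolve(psi0, t):
    """Exact time evolution under H = [[0, 1], [1, 0]] (hbar = 1)."""
    a, b = psi0
    c, s = math.cos(t), math.sin(t)
    return (c * a - 1j * s * b, c * b - 1j * s * a)

# Start in the first basis state; by t = pi/4 the particle has evolved
# into an equal-weight superposition of both states.
psi = evolve((1.0 + 0j, 0.0 + 0j), math.pi / 4)
print([abs(amp) ** 2 for amp in psi])  # probabilities [0.5, 0.5] (up to rounding)
```

For one particle this is trivial; the whole argument below is about what happens when the particle count grows.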

In principle, it ought to be possible to use Schrödinger’s equation to describe any object regardless of its size, perhaps even the universe itself. This equation predicts that the system being modelled exists in a superposition of states, even though this is never experienced in our macroscopic world.

The problem is that the equation says nothing about how large an object needs to be before it obeys Newtonian mechanics rather than the quantum variety.

Now Bolotin thinks he knows why there is a limit and where it lies. He says there is an implicit assumption when physicists say that Schrödinger’s equation can describe macroscopic systems. This assumption is that the equations can be solved in a reasonable amount of time to produce an answer.

That’s certainly true of simple systems but physicists well know that calculating the quantum properties of more complex systems is hugely difficult. The world’s most powerful supercomputers cough and splutter when asked to handle systems consisting of more than a few thousand quantum particles.

That leads Bolotin to ask a perfectly reasonable question. What if there is no way to solve Schrödinger’s equation for macroscopic systems in a reasonable period of time? “If it were so, then quantum theoretical constructions like ‘a quantum state of a macroscopic object’ or ‘the wave function of the universe’ would be nothing more than nontestable empty abstractions,” he says.

He then goes on to prove that this is exactly the case, with one important proviso: that P ≠ NP. Here’s how he does it.

His first step is to restate Schrödinger’s equation as a problem of computational complexity. For a simple system, the equation can be solved by an ordinary computer in a reasonable time, so it falls into the class of computational problems known as NP, those whose candidate solutions can at least be checked quickly.

Bolotin then goes on to show that the general problem of solving the Schrödinger equation is at least as hard as any problem in the NP class. That puts it in the same company as many other head-scratchers, such as the travelling salesman problem. Computational complexity theorists call these problems NP-hard.

What’s interesting about NP-hard problems is that they are mathematically interlinked: an efficient solution for one automatically implies an efficient solution for every problem in NP. The biggest question in computational complexity theory (and perhaps in all of physics, if the computational complexity theorists are to be believed) is whether they can be solved in this way or not.

The class of problems that can be solved quickly and efficiently is called P. So the statement that NP-hard problems can also be solved quickly and efficiently is the famous conjecture P = NP.

But since nobody has found such a solution, the general belief is that they cannot be solved in this way. Or as computational complexity theorists put it: P ≠ NP. Nobody has yet proved this, but most theorists would bet their bottom dollar that it is true.
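The gap between checking and finding can be illustrated with a toy example (mine, not Bolotin's): subset sum, a classic NP problem. Verifying a proposed answer takes a single pass; finding one by brute force can mean trying up to 2^n subsets.

```python
import itertools

# Subset sum: given a list of numbers and a target, is there a subset
# that adds up to the target? Verifying a proposed subset (a
# "certificate") is fast; finding one by brute force may take up to
# 2**n tries for n numbers.

def verify(certificate, target):
    """Polynomial-time check of a proposed answer."""
    return sum(certificate) == target

def search(numbers, target):
    """Exponential-time brute force over all subsets."""
    for r in range(len(numbers) + 1):
        for subset in itertools.combinations(numbers, r):
            if sum(subset) == target:
                return subset
    return None

numbers = [3, 34, 4, 12, 5, 2]
certificate = search(numbers, 9)
print(certificate, verify(certificate, 9))  # (4, 5) True
```

P = NP would mean the search step can always be made as cheap as the verification step; nobody believes that, but nobody has proved otherwise.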

Schrödinger’s equation has a direct bearing on this. If the equation can be quickly and efficiently solved in all cases, including for vast macroscopic states, then it must be possible to solve all other NP-hard problems in the same way. That is equivalent to saying that P=NP.

But if P is not equal to NP, as most experts believe, then there is a limit to how big a quantum system can be. Indeed, that is exactly what physicists observe.

Bolotin goes on to flesh this out with some numbers. If P ≠ NP and there is no efficient algorithm for solving Schrödinger’s equation, then there is only one way of finding a solution, which is a brute force search.

In the travelling salesman problem of finding the shortest way of visiting a number of cities, the brute force solution involves measuring the length of all permutations of routes and then seeing which is shortest. That’s straightforward for a small number of cities but rapidly becomes difficult for large numbers of them.
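As a sketch of what that brute force looks like (a hypothetical illustration, not code from Bolotin's paper), here is the exhaustive approach for a handful of cities:

```python
import itertools
import math

# Brute-force travelling salesman: measure every ordering of the cities
# and keep the shortest round trip. The number of orderings grows
# factorially, which is why this only works for a handful of cities.

def tour_length(cities, order):
    """Total length of the round trip visiting cities in the given order."""
    return sum(
        math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
        for i in range(len(order))
    )

def shortest_tour(cities):
    """Exhaustive search over all permutations: O(n!) time."""
    n = len(cities)
    # Fix city 0 as the start, so rotations of the same tour aren't recounted.
    best = min(
        ((0,) + perm for perm in itertools.permutations(range(1, n))),
        key=lambda order: tour_length(cities, order),
    )
    return best, tour_length(cities, best)

# Four cities at the corners of a 2 x 1 rectangle: the best tour is its
# perimeter, length 6.
cities = [(0, 0), (0, 1), (2, 1), (2, 0)]
order, length = shortest_tour(cities)
print(order, length)
```

At 4 cities there are 6 tours to check; at 20 cities there are already more than 10^17.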

Exactly the same is true of Schrödinger’s equation. It’s straightforward for a small number of quantum particles but for a macroscopic system, it becomes a monster.
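To get a feel for the growth (assuming, for illustration, that each particle is a simple two-level system), the full quantum state of N such particles needs 2^N complex amplitudes just to write down:

```python
# For N two-level quantum particles, the state vector holds 2**N complex
# amplitudes. Storing each amplitude as two 8-byte floats (16 bytes),
# the memory needed doubles with every particle added.

for n in (10, 30, 50, 70):
    amplitudes = 2 ** n
    gigabytes = amplitudes * 16 / 1e9
    print(f"N = {n}: 2^{n} = {amplitudes} amplitudes, ~{gigabytes:.3g} GB")
```

Around N = 50 the state vector already outgrows the memory of the largest supercomputers, long before anything approaching macroscopic scale.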

Macroscopic systems are made up of a number of constituent particles of the order of Avogadro’s number, roughly 10^24.

So the number of elementary operations needed to solve this equation exactly would be of the order of 2^(10^24). That’s a big number!

To put it in context, Bolotin imagines a computer capable of solving it over a reasonable running time of, say, a year. Such a computer would need to execute each elementary operation on a timescale of the order of 10^(-3×10^23) seconds.

This timescale is so short that it is difficult to imagine. But to put it in context, Bolotin says there would be little difference between running such a computer for one year and for, say, one hundred billion years (roughly 3 × 10^18 seconds), which is several times longer than the age of the universe.
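The arithmetic behind these figures can be checked with base-10 logarithms (a sketch of the estimate, since numbers like 2^(10^24) are far too large to manipulate directly):

```python
import math

# Back-of-the-envelope check of the timescales, done entirely in
# base-10 exponents.

SECONDS_PER_YEAR = 3.15e7  # about one year in seconds

# Number of elementary operations: 2**(10**24). Its base-10 exponent:
log_ops = 1e24 * math.log10(2)
print(f"log10(operations) ~ {log_ops:.3g}")        # about 3e23

# Time per operation if the whole run must finish in one year:
log_dt = math.log10(SECONDS_PER_YEAR) - log_ops
print(f"time per operation ~ 10^({log_dt:.3g}) seconds")

# Stretching the run from one year to 10^18 seconds gains only about
# 10.5 orders of magnitude, negligible next to the 3e23 required.
print(f"orders of magnitude gained: {18 - math.log10(SECONDS_PER_YEAR):.1f}")
```

That is why the choice of running time, a year or the age of the universe, makes essentially no difference: the exponent 3 × 10^23 dwarfs everything else.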

What’s more, this timescale is considerably shorter than the Planck time, which is roughly 10^-43 seconds. It is simply not possible to measure or detect change on a scale shorter than this. So even if there were a device capable of this kind of calculation, there would be no way of detecting that it had done anything.

“So, unless the laws of physics (as we understand them today) were wrong, no computer would ever be able to execute [this number of] operations in any reasonable amount [of] time,” concludes Bolotin.

In other words, macroscopic systems cannot be quantum in nature. Or as Bolotin puts it: “For anyone living in the real physical world (of limited computational resources) the Schrödinger equation will turn out to be simply unsolvable for macroscopic objects.”

That’s a fascinating piece of logic in a remarkably clear and well written paper. It also raises an interesting avenue for experiment. Physicists have become increasingly skilled at creating conditions in which ever larger objects demonstrate quantum behaviour.

The largest quantum object so far—a vibrating silicon springboard—contained around a trillion atoms (10^12), significantly fewer than Avogadro’s number. But Bolotin’s work suggests a clear size limit.

So in theory, these kinds of experiments provide a way to probe the computational limits of the universe. What’s needed, of course, is a clear prediction from his theory that allows it to be tested experimentally.

There is also a puzzle. There are well-known quantum states that do contain something like Avogadro’s number of particles: superfluids, superconductors, lasers and so on. It would be interesting to see Bolotin’s treatment of these from the point of view of computational complexity.

In these situations, all the particles occupy the same ground state, which presumably significantly reduces the complexity. But by how much? Does his approach have anything to say about how big these states can become?

Beyond that, the questions come thick and fast. What of the transition between quantum and classical states—how does that happen in terms of computational complexity? What of the collapse of stars, which are definitely classical objects, into black holes, which may be quantum ones?

And how does the universe decide whether a system is going to be quantum or not? What is the mechanism by which computational complexity exerts its influence over nature? And so on…

The computational complexity theorist Scott Aaronson has long argued that the most interesting problems in physics are intricately linked with his discipline. And Bolotin’s new work shows why. It’s just possible that computational complexity theory could be quantum physics’ next big thing.

Ref: arxiv.org/abs/1403.7686 : Computational Solution to Quantum Foundational Problems