(in which I bring this blog back to the “safe, uncontroversial” territory of arguing with people who think they can solve NP-complete problems in polynomial time)

A few people have asked my opinion about “memcomputing”: a computing paradigm that’s being advertised, by its developers, as a way to solve NP-complete problems in polynomial time. According to the paper Memcomputing NP-complete problems in polynomial time using polynomial resources and collective states, memcomputing “is based on the brain-like notion that one can process and store information within the same units (memprocessors) by means of their mutual interactions.” The authors are explicit that, in their view, this idea allows the Subset Sum problem to be solved with polynomial resources, by exploring all 2^n possible subsets in parallel, and that this refutes the Extended Church-Turing Thesis. They’ve actually built ‘memcomputers’ that solve small instances of Subset Sum, and they hope to scale them up, though they mention hardware limitations that have made doing so difficult—more about that later.

A bunch of people (on Hacker News, Reddit, and elsewhere) tried to explain the problems with the Subset Sum claim when the above preprint was posted to the arXiv last year. However, an overlapping set of authors has now simply repeated the claim, unmodified, in a feature article in this month’s Scientific American. Unfortunately the SciAm article is behind a paywall, but here’s the relevant passage:

Memcomputing really shows advantages when applied to one of the most difficult types of problems we know of in computer science: calculating all the properties of a large series of integers. This is the kind of challenge a computer faces when trying to decipher complex codes. For instance, give the computer 100 integers and then ask it to find at least one subset that adds up to zero. The computer would have to check all possible subsets and then sum all numbers in each subset. It would plow through each possible combination, one by one, which is an exponentially huge increase in processing time. If checking 10 integers took one second, 100 integers would take 10^27 seconds—millions of trillions of years … [in contrast,] a memcomputer can calculate all subsets and sums in just one step, in true parallel fashion, because it does not have to shuttle them back and forth to a processor (or several processors) in a series of sequential steps. The single-step approach would take just a single second.

For those tuning in from home: in the Subset Sum problem, we’re given n integers a_1,…,a_n, and we want to know whether there exists a subset of them that sums to a target integer k. (To avoid trivializing the problem, either k should be nonzero or else the subset should be required to be nonempty; the passage quoted above, which asks for a subset summing to zero with no nonemptiness requirement, gets this wrong.)
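Just to make the brute-force baseline concrete, here’s a minimal Python sketch (my illustration, not code from the memcomputing papers) that does exactly what the SciAm passage describes, checking all ~2^n subsets one by one:

    from itertools import combinations

    def subset_sum(nums, k):
        """Return a nonempty subset of nums summing to k, or None if there is none.
        Brute force: checks all 2^n - 1 nonempty subsets, hence exponential time."""
        for r in range(1, len(nums) + 1):
            for subset in combinations(nums, r):
                if sum(subset) == k:
                    return subset
        return None

    print(subset_sum([3, 5, -7, 12], -4))   # -> (3, -7)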

To solve Subset Sum in polynomial time, the basic idea of “memcomputing” is to generate waves at frequencies that encode the sums of all possible subsets of the a_i’s, and then measure the resulting signal to see if there’s a frequency there that corresponds to k.
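As a toy numerical illustration of that encoding (my own sketch of the idea, not a model of the authors’ actual hardware; the instance, the 1 Hz spacing, and the observation window are all arbitrary illustrative choices), one can superpose one cosine per achievable subset sum and then look for a spectral peak at the frequency encoding the target:

    import numpy as np
    from itertools import combinations

    nums, k = [3, 5, -7, 12], -4            # tiny Subset Sum instance
    base = 1.0                              # 1 Hz per unit of subset sum

    # One frequency per achievable nonempty-subset sum, shifted so all frequencies are positive.
    sums = {sum(s) for r in range(1, len(nums) + 1) for s in combinations(nums, r)}
    offset = 1 + max(abs(v) for v in sums)
    freqs = [base * (v + offset) for v in sums]

    # Superpose all the waves and Fourier-transform the collective signal.
    T, rate = 4.0, 1000.0                   # observe 4 seconds, sampled at 1 kHz
    t = np.arange(0.0, T, 1.0 / rate)
    signal = sum(np.cos(2 * np.pi * f * t) for f in freqs) / len(freqs)
    spectrum = np.abs(np.fft.rfft(signal))
    fft_freqs = np.fft.rfftfreq(len(t), 1.0 / rate)

    # Is there a peak at the frequency that encodes the target k?
    target_bin = np.argmin(np.abs(fft_freqs - base * (k + offset)))
    print("subset summing to k exists?", spectrum[target_bin] > 0.5 * spectrum.max())

For four numbers this works fine; the whole question is what happens to the frequency spacing and the per-frequency amplitude as n grows, which is where the argument below comes in.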

Alas, there’s a clear scalability problem that seems to me to completely kill this proposal, as a practical way of solving NP-complete problems. The problem is that the signal being measured is (in principle!) a sum of waves of exponentially many different frequencies. By measuring this wave and taking a Fourier transform, one will not be able to make out the individual frequencies until one has monitored the signal for an exponential amount of time: basic Fourier analysis says that resolving two frequencies that differ by Δf requires observing the signal for a time on the order of 1/Δf. There are actually two issues here:

(1) Even if there were just a single frequency to pick out, measuring a frequency to exponential precision will take exponential time. This can be easily seen by contemplating even a moderately large n. Thus, suppose n=1000. Then we would need to resolve frequencies to a precision of one part in ~2^1000. If adjacent frequencies were separated by (say) 1Hz, then the largest frequencies involved would be ~2^1000 Hz, unimaginably far beyond the Planck frequency of ~10^43 Hz; producing or detecting a wave at such a frequency would require concentrating so much energy in so small a region that one would exceed the Schwarzschild limit and create a black hole! The alternative is to squeeze all the frequencies into a bounded range, so that adjacent frequencies are separated by less than one cycle per lifetime of the universe, causing an exponential blowup in the amount of time we need to run the experiment.

(2) Because there are exponentially many frequencies, the amplitude of each frequency will get attenuated by an exponential amount. Again, suppose that n=1000, so that we’re talking about attenuation by a ~2^-1000 factor. Then given any amount of input radiation that could be gathered in the physical universe, the expected amount of amplitude on each frequency would correspond to a microscopically small fraction of 1 photon — so again, it would take exponential time for us to notice any radiation at all on the frequency that interests us (unless we used an insensitive test that was liable to confuse that frequency with many other nearby frequencies).
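Some back-of-the-envelope arithmetic for both points (a rough sketch; the reference values, like the Planck frequency, the age of the universe, the 1 GHz cap, and the absurdly generous photon budget, are ballpark figures I’m plugging in for illustration, not numbers from any of the papers):

    import math

    n = 1000

    # Issue (1): with adjacent frequencies spaced 1 Hz apart, the highest frequency is ~2^n Hz.
    top_freq_hz = 2.0 ** n                      # ~1.1e301 Hz
    planck_freq_hz = 1.9e43                     # rough Planck frequency
    print(f"top frequency / Planck frequency ~ 1e{math.log10(top_freq_hz / planck_freq_hz):.0f}")

    # Alternative: squeeze all 2^n frequencies below 1 GHz; the spacing is then ~1e9 / 2^n Hz,
    # and resolving that spacing requires observing the signal for roughly 1/spacing seconds.
    obs_time_s = 2.0 ** n / 1e9
    age_of_universe_s = 4.4e17                  # ~13.8 billion years, in seconds
    print(f"observation time / age of universe ~ 1e{math.log10(obs_time_s / age_of_universe_s):.0f}")

    # Issue (2): the input energy gets split across ~2^n frequency components, so even an
    # absurdly generous budget of 1e70 photons leaves essentially zero photons per component.
    photons_per_frequency = 1e70 / 2.0 ** n
    print(f"expected photons at the target frequency ~ 1e{math.log10(photons_per_frequency):.0f}")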

What do the authors have to say about these issues? Here are the key passages from the above-mentioned paper:

all frequencies involved in the collective state (1) are dampened by the factor 2^-n. In the case of the ideal machine, i.e., a noiseless machine, this would not represent an issue because no information is lost. On the contrary, when noise is accounted for, the exponential factor represents the hardest limitation of the experimentally fabricated machine, which we reiterate is a technological limit for this particular realization of a memcomputing machine but not for all of them …

In conclusion we have demonstrated experimentally a deterministic memcomputing machine that is able to solve an NP-complete problem in polynomial time (actually in one step) using only polynomial resources. The actual machine we built clearly suffers from technological limitations due to unavoidable noise that impair [sic] the scalability. This issue can, however, be overcome in other UMMs [universal memcomputing machines] using other ways to encode such information.

The trouble is that no other way to encode such information is ever mentioned. And that’s not an accident: as explained above, when n becomes even moderately large, this is no longer a hardware issue; it’s a fundamental physics issue.

It’s important to realize that the idea of solving NP-complete problems in polynomial time using an analog device is far from new: computer scientists discussed such ideas extensively in the 1960s and 1970s. Indeed, the whole point of my NP-complete Problems and Physical Reality paper was to survey the history of such attempts, and (hopefully!) to serve as a prophylactic against people making more such attempts without understanding the history. For computer scientists ultimately came to realize that all proposals along these lines simply “smuggle the exponentiality” somewhere that isn’t being explicitly considered, exactly like all proposals for perpetual-motion machines smuggle the entropy increase somewhere that isn’t being explicitly considered. The problem isn’t a practical one; it’s one of principle. And I find it unfortunate that the recent memcomputing papers show no awareness of this story.

(Incidentally, quantum computing is interesting precisely because, out of all “post-Extended-Church-Turing” computing proposals, it’s the only one for which we can’t articulate a clear physical reason why it won’t scale, analogous to the reasons given above for memcomputing. With quantum computing the tables are turned, with the skeptics forced to handwave about present-day practicalities, while the proponents wield the sharp steel of accepted physical law. But as readers of this blog well know, quantum computing doesn’t seem to promise the polynomial-time solution of NP-complete problems, only of more specialized problems.)