Update on Deolalikar’s Proof that P≠NP

An update on the P not equal to NP proof

Timothy Gowers, Gil Kalai, Ken Regan, Terence Tao, and Suresh Venkatasubramanian are some of the top people who are trying to unravel what is up with the claimed proof that P ≠ NP. I will call them the Group today; I hope that is okay. It’s a pretty powerful collection of top theorists, all of whom want to help us understand what Vinay Deolalikar has done.

Today I will talk about the Group’s effort to help all of us understand the claimed proof of Vinay Deolalikar. Of course many others are working towards the same goal, but the Group is one that I have been interacting with the most.



This came about when some of the Group suggested that we start to be more organized and use wiki technology to handle the serious public discussion needed to understand the proof. I completely agree. One hold-up is we are currently unsure what Vinay wishes to happen. We are in direct communication with him, but he has not yet made it clear what he wishes.

The Group has decided for now to let things go along in the current ad hoc fashion. If and when Vinay wants help, we are prepared to help in any way that we can. For now we will follow Clint Eastwood who once said:

“Don’t just do something, stand there.”

Well, it is hard to just stand there, since I cannot resist making some suggestions that may help frame the discussion, and hopefully help us help Deolalikar. We must keep the tone constructive and supportive—the Group all agrees with this. I think that we need more people working on hard problems, and we should be as constructive as possible.

As I said, we have been communicating with Vinay, who is occupied with revisions to answer queries that have been made—and nothing should be inferred from periodic changes in links and drafts and wording on his HP sites.

Whatever happens, I believe that the community should be excited and pleased that a serious effort has been made. I hesitate to make parallels, but recall that Andrew Wiles’ first proof fell apart. He needed the help of Richard Taylor and another year to get his final wonderful proof of Fermat’s Last Theorem.

Suggestion One

I have been trying to figure out what Vinay’s paper is doing. I wonder if the following analogy is helpful. Consider a much weaker notion of computation we’ll call uniform tree computations (UTC). Those familiar with complexity will recognize this as uniform AC^0, though I am not yet specifying the uniformity condition.

For those who are not complexity experts this is just the class of computations that can be computed by a special class of boolean formulas. A formula is easily represented as a tree. For example, an arithmetic formula such as (x + y) × z is just the tree with × at the root, whose two children are the subtree for x + y and the leaf z.

The same happens with boolean formulas. Suppose the input to the formula is

x_1, x_2, …, x_n,

where each x_i is a single boolean bit. Then, a UTC is described by a tree: at each leaf is a single input variable or its negation; at each internal node is a boolean operation. The operations are of two kinds: the first is an AND, which is true only if all the inputs to it are true; the second is an OR, which is true if one or more of the inputs is true. We use the symbol ∧ for AND, and the symbol ∨ for OR.

Further, the tree is restricted in two ways. First, the size of the tree is a fixed polynomial in the number of input variables, n. Second, the depth of the tree is a fixed constant. There is a third that I will skip—it concerns what “uniform” means, but forget about that for now.

The power of this class of computation comes from the ability of the operations to have many inputs from other nodes of the tree. The weakness of this class is due to the polynomial size restriction and the constant depth.
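To make the model concrete, here is a minimal Python sketch of evaluating such a tree. The tuple encoding and gate names are my own illustration, not anything from the paper; note that gates may take unbounded fan-in, matching the description above.

```python
def eval_tree(node, x):
    """Evaluate a formula tree on the input bits x.

    A leaf is ("var", i) or ("neg", i); an internal node is
    ("and", children) or ("or", children). Gates may have
    unbounded fan-in, as in the AC^0 model described above.
    """
    kind, arg = node
    if kind == "var":
        return x[arg]
    if kind == "neg":
        return not x[arg]
    if kind == "and":
        return all(eval_tree(c, x) for c in arg)
    if kind == "or":
        return any(eval_tree(c, x) for c in arg)
    raise ValueError(f"unknown node kind: {kind}")

# Depth-2 example: (x0 OR NOT x1) AND (x1 OR x2)
tree = ("and", [("or", [("var", 0), ("neg", 1)]),
                ("or", [("var", 1), ("var", 2)])])

print(eval_tree(tree, [True, False, True]))   # True
print(eval_tree(tree, [False, True, False]))  # False
```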

I claim that SAT cannot be computed by a UTC. This follows from known hardness theorems. What I am interested in is whether Vinay can use his technology to give a new proof of this theorem. If I am following his current write-up, such a proof would have to go something like this. Suppose that a UTC could compute SAT. Pick two distributions D_1 and D_2 on the SAT inputs. The first is a distribution where the clauses are sparse enough that they are very likely to be satisfiable; the second is a denser distribution of clauses, so that they are very likely to be unsatisfiable.

Then, Vinay’s technology should be able to show that UTC is too weak to tell these two distributions apart. If he could do that, then he would have a new proof that shows that SAT cannot be computed by this weak model of computation. This would give us some hope that he could solve the more complex problem he needs to prove his main theorem.
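To illustrate the two-distributions idea (this is only a demo of the setup, not of Vinay's technology), here is a small Python experiment: sample random 3-SAT instances at a sparse and a dense clause density and check satisfiability by brute force. The particular densities and sizes are arbitrary choices for the sketch.

```python
import itertools
import random

def random_3sat(n, m, rng):
    """Sample m random 3-clauses over n variables.

    Literals are 1-indexed integers; a negative sign means negation.
    """
    clauses = []
    for _ in range(m):
        vs = rng.sample(range(1, n + 1), 3)
        clauses.append([v if rng.random() < 0.5 else -v for v in vs])
    return clauses

def satisfiable(n, clauses):
    """Brute-force satisfiability check (exponential; fine for tiny n)."""
    for bits in itertools.product([False, True], repeat=n):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return True
    return False

rng = random.Random(0)
n, trials = 10, 30
# Sparse (2n clauses) vs. dense (8n clauses) -- below and above the
# empirical 3-SAT threshold of roughly 4.27 clauses per variable.
sparse = sum(satisfiable(n, random_3sat(n, 2 * n, rng)) for _ in range(trials))
dense = sum(satisfiable(n, random_3sat(n, 8 * n, rng)) for _ in range(trials))
print(f"satisfiable: sparse {sparse}/{trials}, dense {dense}/{trials}")
```

Nearly all sparse instances come out satisfiable and nearly all dense ones unsatisfiable, which is exactly the gap a distinguishing argument would exploit.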

Suggestion Two

The next two suggestions are due to Ken Regan. Vinay can perhaps use his technology to prove a much weaker separation result, but one that is wide open. He could try to separate nondeterministic logspace (NL) from NP. The point is that NL is captured by first-order formulas plus transitive closure. This is a much simpler notion than fixed points. Yet a proof that

NL ≠ NP

would be a major breakthrough.

I suggest this to Vinay as a possible way to give us some insight into his ideas, and perhaps allow him to simplify some technical issues. If this is too much of a detour, I understand. But it could be a useful “warm-up” exercise. We see this type of idea all the time in new proofs. If you can show that a new proof can solve weaker questions, even questions that are known, that can be very confidence building.
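For readers less familiar with NL, here is a toy Python sketch of the transitive-closure operator mentioned above: iterate the rule TC(x,y) ← E(x,y) ∨ ∃z (TC(x,z) ∧ E(z,y)) until a fixed point is reached. This only illustrates the logical operator; it is not a logspace algorithm.

```python
def transitive_closure(edges):
    """Reachability as a fixed point of first-order transitive closure.

    Starting from the edge relation E, repeatedly add (x, y) whenever
    some z has (x, z) already derived and (z, y) an edge, until nothing
    new appears. This is the operator that, added to first-order logic,
    characterizes NL.
    """
    reach = set(edges)
    changed = True
    while changed:
        changed = False
        for (x, z) in list(reach):
            for (z2, y) in edges:
                if z == z2 and (x, y) not in reach:
                    reach.add((x, y))
                    changed = True
    return reach

# Directed path 0 -> 1 -> 2 -> 3
edges = [(0, 1), (1, 2), (2, 3)]
tc = transitive_closure(edges)
print((0, 3) in tc)  # True: 0 reaches 3
print((3, 0) in tc)  # False: no path back
```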

Suggestion Three

This is to clarify how essential the use of uniformity is. It is used in the FO(LFP) characterization of P, but on current appearances, not as strongly as in proofs by diagonalization, where one has a recursive enumeration of polynomial-time machines. Various kinds of induction are involved, but maybe not an induction over (first-order “bodies” of) FO(LFP) formulas that would be tantamount to such an enumeration. The FO(LFP) formula seems to be treated as a single object, which opens the door to asking: what about it prevents extending the argument to a nonuniform sequence of objects, each with the same limitations on size and locality?

Several commenters have voiced the expectation that non-uniform lower bounds should come out of the proof’s basic strategy. If so, then Vinay could try to state and prove a concrete circuit size lower bound. Here it takes even less than NL or L to be earth-shaking: a bound above 5n would be new. However, this would also bring the Razborov-Rudich barrier into play, which the author expressly references and must avoid.

Let’s come at this point in a different way. Vinay’s use of conditional independence seems strikingly new. One of Ken’s colleagues even remarked in department email that you’d expect to see that in a statistical computer vision course, not in a complexity proof. However, it seems to fit in well with concepts and results involving distributions that underlie hardness-versus-randomness and other pivotal developments in complexity, including said barrier.

In particular, it seems that Deolalikar’s development can be used to create a new family of potential hardness predicates on Boolean functions that may surmount the Natural Proofs barrier. For example, if the relevant parameter grows sufficiently fast, then a predicate saying that a certain slice of the solution space gives rise to a distribution that cannot be represented with “factors” might work. Note that Ketan Mulmuley and Milind Sohoni’s attack on P vs. NP (or permanent versus determinant) can be read as involving hardness predicates from algebraic geometry and group-representation theory, so the idea is not out of the question.

This may be getting ahead of the current discussion, so let’s come back to suggesting that the use of uniformity should be scrutinized and clarified. If it is not needed, then the issues about FO(LFP) could be dispensed with. Another win is that this could help Vinay further develop the new ideas that come out right away in his chapters 1–2.

Open Problems

Do these ideas help Vinay? Do they help us understand him? I hope so.

P.S. I hope these discussions are helpful to the community at large.

[5.5n → 5n for the AND-OR(-NOT) circuit lower bound; the “5.5n” source was including the input gates]