Open Problems That Might Be Easy



A speculation on the length of proofs of open problems



[ Nick Patterson: Broad Institute source ]



Nick Patterson is one of the smartest people I have ever known.

Today I would like to talk about something he once said to me and how it relates to solving open problems.



Nick now works at the Broad Institute on genomic problems—especially large data sets. For decades, he worked as a cryptographer for the British, and then the U.S. He also spent many years with Renaissance Technologies, Jim Simons’s investment hedge fund.

Years ago I consulted regularly at IDA—a think tank for the NSA that was based in Princeton. I cannot tell you what we did, nor what we did not. But I can say we worked on interesting problems in the area of communication. One of the top scientists there was Nick. He was quite a strong chess player, and at tea would often play the other top player at speed chess. I once asked Nick how he would fare against the then world champion Anatoly Karpov. He answered that it would be:

Sack, sack, mate.

I never really believed this. I always thought that he was strong enough to hold out a bit, but perhaps Nick was right. Perhaps it would be over quickly. Perhaps Karpov would find a simple, short demonstration that would wipe Nick out.

But given how smart Nick was, I often wondered whether he would be right when we translate his statement to mathematics:

Can open problems have simple solutions?

Could some open problems fall to: “We know […] and by induction […] we are done”?

Some Short Solutions

Leonhard Euler thought long and hard about possible generalizations of Fermat's Last Theorem. He was the first to fashion a nearly-correct proof that no cube could be a nontrivial sum of fewer than three cubes. He conjectured that no 4th-power could be a nontrivial sum of fewer than four 4th-powers, no 5th-power a sum of fewer than five 5th-powers, and so on. These cases were open for nearly two centuries, but in 1966, Leon Lander and Thomas Parkin used a computer to find

$$27^5 + 84^5 + 110^5 + 133^5 = 144^5.$$

It still took 20 more years for the other shoe to drop on the $n = 4$ case:

$$2682440^4 + 15365639^4 + 18796760^4 = 20615673^4.$$

This was the smallest member of an infinite family of solutions found by Noam Elkies—himself a master chess player—though not the smallest overall, which is

$$95800^4 + 217519^4 + 414560^4 = 422481^4,$$

found by Roger Frye.
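All three counterexamples can be checked directly with exact integer arithmetic; here is a quick sanity check using the numbers above:

```python
# Verify the counterexamples to Euler's conjecture with exact integer arithmetic
lander_parkin = (27, 84, 110, 133, 144)                 # fifth powers, 1966
elkies = (2682440, 15365639, 18796760, 20615673)        # fourth powers, 1988
frye = (95800, 217519, 414560, 422481)                  # smallest fourth-power example

assert sum(x**5 for x in lander_parkin[:-1]) == lander_parkin[-1]**5
assert sum(x**4 for x in elkies[:-1]) == elkies[-1]**4
assert sum(x**4 for x in frye[:-1]) == frye[-1]**4
print("all three identities check out")
```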



The shortest solution to a problem in complexity theory that was important and open for several years was:

$$a\,b\,a^{-1}b^{-1}.$$

Well, this needs a little context. If either $a$ or $b$ is the identity, then this group commutator is the identity. Else, in a large enough permutation group like $S_5$, you can rig it so that $aba^{-1}b^{-1}$ is never the identity. Thus if you have a piece of code $P$ such that $P = a$ if some property $A$ holds and $P = 1$ otherwise, and likewise $Q$ gives $b$ iff $B$ holds, then the code $PQP^{-1}Q^{-1}$ computes the AND gate $A \wedge B$.

Since it is easy to code $\neg A$ by the code $Pa^{-1}$, and since AND and NOT form a complete basis, the upshot is that all $O(\log n)$-depth Boolean formulas can be computed by codes of size $n^{O(1)}$. Permutations of 5 elements allow codes in the form of width-5 branching programs, so what tumbles out is the celebrated theorem of David Barrington that the complexity class $\mathsf{NC}^1$ has constant-width branching programs. GLL's old post on it played up the simplicity, but like many mate-in-$n$ chess problems it was not simple to see in advance.
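The commutator trick fits in a few lines. Here is a minimal sketch, with permutations written as tuples; the particular non-commuting 5-cycles `a` and `b` are illustrative choices, not the ones from Barrington's proof:

```python
from itertools import product

IDENT = (0, 1, 2, 3, 4)

def compose(p, q):
    # (p o q)[i] = p[q[i]]
    return tuple(p[q[i]] for i in range(5))

def inverse(p):
    inv = [0] * 5
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

# Two non-commuting 5-cycles in S_5 (illustrative choices)
a = (1, 2, 3, 4, 0)   # i -> i+1 mod 5
b = (0, 2, 4, 1, 3)   # i -> 2i mod 5

def and_gate(A, B):
    # P is a iff A holds; Q is b iff B holds; the commutator is
    # non-identity exactly when both hold
    P = a if A else IDENT
    Q = b if B else IDENT
    comm = compose(compose(P, Q), compose(inverse(P), inverse(Q)))
    return comm != IDENT

for A, B in product([False, True], repeat=2):
    print(A, B, and_gate(A, B))
```

Since `a` and `b` do not commute, the commutator is non-identity precisely when both inputs are true.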



More recently, a seemingly complicated conjecture by Richard Stanley and Herbert Wilf about permutations was proved in an 8-page paper by Adam Marcus and Gábor Tardos. For any sequence of $k$ distinct integers define its pattern to be the unique permutation of $\{1,\dots,k\}$ that has the same relative ordering. For instance, the pattern of $5, 2, 8$ is $2, 1, 3$. Given any length-$k$ pattern $\pi$ define $Av_n(\pi)$ to be the set of permutations of $\{1,\dots,n\}$ for which no length-$k$ subsequence (not necessarily consecutive) has pattern $\pi$. Whereas $|S_n|$ grows as $n!$, Stanley and Wilf conjectured that the maximum growth for $|Av_n(\pi)|$ is $c^n$ for a constant $c = c_\pi$, for any fixed $\pi$. Marcus and Tardos polished off a stronger conjecture by Zoltán Füredi and Péter Hajnal about matrices in basically 1.5 pages whose short lemmas read like sack, sack, mate.
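Brute force makes these definitions concrete for small $n$; counting the permutations avoiding a single length-3 pattern recovers the Catalan numbers, consistent with the conjectured $c^n$ growth. A sketch (the helper names are mine):

```python
from itertools import permutations, combinations

def pattern(seq):
    # rank each entry of a sequence of distinct integers: smallest -> 1, etc.
    order = sorted(seq)
    return tuple(order.index(x) + 1 for x in seq)

def avoids(perm, pat):
    # no subsequence (not necessarily consecutive) may have pattern pat
    k = len(pat)
    return all(pattern(sub) != pat for sub in combinations(perm, k))

def count_avoiders(n, pat):
    return sum(1 for p in permutations(range(1, n + 1)) if avoids(p, pat))

# avoiding the pattern 132 gives the Catalan numbers 1, 2, 5, 14, 42, 132, ...
print([count_avoiders(n, (1, 3, 2)) for n in range(1, 7)])
```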

We found the last example highlighted in a StackExchange thread on open problems solved with short proofs. Another we have seen mentioned elsewhere and covered in a post is Roger Apéry's proof that $\zeta(3)$ is irrational, which centered on deriving and then analyzing the new identity

$$\zeta(3) = \frac{5}{2}\sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n^3\binom{2n}{n}}.$$
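The series converges rapidly, since the central binomial coefficient grows like $4^n$, so a couple dozen terms already match $\zeta(3) = 1.2020569\ldots$ to machine precision. A quick numeric check:

```python
from math import comb

def apery_partial(terms):
    # partial sums of (5/2) * sum_{n>=1} (-1)^(n-1) / (n^3 * C(2n, n))
    s = sum((-1) ** (n - 1) / (n**3 * comb(2 * n, n))
            for n in range(1, terms + 1))
    return 2.5 * s

print(apery_partial(25))   # close to zeta(3) = 1.2020569031595942...
```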

The Euler examples could be ascribed mostly to “computer-guided good fortune” but the other three clearly involve a new idea attached to a crisp new object: an equation or group or matrix construction. We wonder whether and when the possibility of such objects can be “sniffed out.”

Problems Ripe For Short Solutions?

I sometimes wonder if we have missed a simple proof—no, short is a better term—a short proof that would resolve one of our favorite open problems. Here are a couple that Ken and I suggest may fall into this category.

Péter Frankl's conjecture, that every finite union-closed family of sets (other than the family containing only the empty set) has an element belonging to at least half of the sets. Surely it just needs an equation or formula for the right kind of “potential function”? We noted connections to other forms in a followup post.

Freeman Dyson's conjecture. It stands out even among number-theory statements that are “overwhelmingly probable” in random models, and there ought to be some principle for boxing away potential counterexamples along lines of how certain numbers with fast approximations can be proved to be transcendental. (Then again, $\zeta(3)$ hasn't yet been proved transcendental.)

Problems involving deterministic finite automata (DFAs) are often easy to solve or show to be decidable. But here is one from Jeff Shallit that is not: given a DFA, is it decidable whether it accepts some string that is the binary representation of a prime number?
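One can search for a “yes” answer by enumerating numbers, but this can never certify a “no”, which hints at why decidability is unclear. A sketch with a toy automaton (the particular DFA, which accepts binary strings whose number of 1s is divisible by 3, is a hypothetical example):

```python
def is_prime(n):
    # trial division, fine for small n
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def dfa_accepts(delta, start, accept, s):
    state = start
    for ch in s:
        state = delta[(state, ch)]
    return state in accept

# toy DFA: state counts 1s modulo 3; accept when the count is 0 mod 3
delta = {(q, c): (q + int(c)) % 3 for q in range(3) for c in "01"}

def first_accepted_prime(bound=10**6):
    # one-sided search: finding a prime proves "yes", but exhausting
    # the bound proves nothing
    for n in range(2, bound):
        if is_prime(n) and dfa_accepts(delta, 0, {0}, bin(n)[2:]):
            return n
    return None

print(first_accepted_prime())   # 7, i.e. binary 111 with three 1s
```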

Even simpler is the question, given distinct binary strings $x$ and $y$ of length $n$, of finding the smallest DFA $M$ that accepts $x$ but not $y$ (or vice-versa). Can we put tight bounds on the size of $M$ in terms of $n$?
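For tiny strings the optimal size can be found by exhaustive search over all transition functions; this is exponential in the number of states, so it is only a toy sketch of the definition:

```python
from itertools import product

def end_state(delta, s):
    # run the DFA from state 0
    q = 0
    for ch in s:
        q = delta[(q, ch)]
    return q

def smallest_separating_dfa(x, y):
    # smallest k such that some k-state DFA ends in different states
    # on x and y; the accepting set can then be chosen to separate them
    assert x != y
    k = 1
    while True:
        keys = [(q, c) for q in range(k) for c in "01"]
        for targets in product(range(k), repeat=len(keys)):
            delta = dict(zip(keys, targets))
            if end_state(delta, x) != end_state(delta, y):
                return k
        k += 1
```

For example, two states suffice to separate "0001" from "0010": remember the last symbol read.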

Can a Boolean circuit of size $s$ computing a function $f(x_1,\dots,x_n)$ be converted into a circuit of size $O(s)$ with $n$ further gates, the $i$-th computing the XOR of $f(x)$ with $f$ evaluated on $x$ with the bit $x_i$ flipped? This would be a Boolean analogue of the famous Derivative Lemma of Walter Baur and Volker Strassen, which we have discussed here and here.

Harder Ones?

By contrast, the 3n+1 conjecture of Lothar Collatz does not strike us as a candidate. John Conway proved that related forms are undecidable, and we suspect that it will need deep new knowledge of how intuitively multiplicative properties of integers duck-and-weave under the additive structure.
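For concreteness, the iteration is trivial to state in code, and checking all starting values up to $10^4$ (a tiny fraction of the range verified by others) finds no exception:

```python
def collatz_steps(n, limit=10_000):
    # number of steps for the 3n+1 iteration to reach 1, or None if
    # the step limit is exceeded
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
        if steps > limit:
            return None
    return steps

# every starting value up to 10^4 reaches 1
assert all(collatz_steps(n) is not None for n in range(1, 10_001))
print(collatz_steps(27))   # 27 famously takes 111 steps
```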

Regarding the Skolem problem—whether a given integer linear recurrence sequence has a zero term—we are on the fence: in a sense it involves DFAs which should be easy, but even for recurrences of small order the connections to deep approximation problems in number theory become clear.

Here are four others—what do you think?:

Can polynomials modulo $6$ compute the majority function? Of course we mean polynomials of low degree and we allow some flex on the meaning of “compute” provided it relates to polynomial-sized constant-depth circuits.

Can we at least prove that SAT cannot be computed in linear time on a Turing Machine? Note that this does not follow from the proof that $\mathsf{NTIME}(n) \neq \mathsf{DTIME}(n)$, as we discussed here. For all those of you who are sure not only that $\mathsf{P} \neq \mathsf{NP}$, but also that ETH is true, how can we not prove this “trivial lower bound”?

Can whether an $n$-node graph has a triangle be decided in $O(n^2)$ time? Or at least faster than matrix-multiplication time? Ken knew a visitor to Buffalo who admitted spending a year on this with little to show for it.
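The naive cubic method and the neighbor-intersection method are easy to compare on small graphs; anything asymptotically better than matrix multiplication is the open question. A sketch (the graph helper is mine):

```python
from itertools import combinations

def has_triangle_cubic(adj):
    # O(n^3): test every vertex triple against the adjacency matrix
    n = len(adj)
    return any(adj[i][j] and adj[j][k] and adj[i][k]
               for i, j, k in combinations(range(n), 3))

def has_triangle_sets(adj):
    # check each edge for a common neighbor (no self-loops assumed)
    n = len(adj)
    nbrs = [{j for j in range(n) if adj[i][j]} for i in range(n)]
    return any(adj[i][j] and (nbrs[i] & nbrs[j])
               for i in range(n) for j in range(i + 1, n))

def cycle_graph(n, chords=()):
    # n-cycle as an adjacency matrix, with optional extra chord edges
    adj = [[0] * n for _ in range(n)]
    for i in range(n):
        adj[i][(i + 1) % n] = adj[(i + 1) % n][i] = 1
    for u, v in chords:
        adj[u][v] = adj[v][u] = 1
    return adj
```

A 5-cycle is triangle-free; adding the chord $(0,2)$ creates the triangle $0,1,2$, and both methods agree.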

Can we solve the lonely runner problem? Or at least improve the number of runners for which it is known to be true?
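The conjecture is easy to test numerically for a given set of integer speeds: with $k$ runners in all, some time should put every other runner at circle-distance at least $1/k$ from ours. A grid-search sketch, with the resolution `steps` an arbitrary parameter:

```python
def lonely_time(speeds, steps=1000):
    # speeds: nonzero integer speeds of the other runners relative to ours;
    # runner at fractional position (v*t) mod 1 is at circle-distance
    # >= 1/k from position 0 iff 1/k <= (v*t) mod 1 <= 1 - 1/k
    k = len(speeds) + 1          # total number of runners, including ours
    for i in range(1, steps):
        t = i / steps
        if all(1 / k <= (v * t) % 1.0 <= 1 - 1 / k for v in speeds):
            return t
    return None

print(lonely_time([1, 2, 3]))
```

With speeds 1, 2, 3 (so $k = 4$), the time $t = 1/4$ already works: the runners sit at positions $1/4$, $1/2$, $3/4$.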

Open Problems

How can we possibly forecast how open an open problem is?