Recently, when talking to a physicist, I claimed that in my experience, when a problem that naively seems like it should take exponential time turns out nontrivially to be in P or BPP, an "overarching reason" why the reduction happens can typically be identified---and almost always, that reason belongs to a list of a dozen or fewer "usual suspects" (for example: dynamic programming, linear algebra...). However, that got me thinking: can we actually write down a decent list of such reasons? Here's a first, incomplete attempt at one:

(0) Mathematical characterization. Problem has a non-obvious "purely-mathematical" characterization that, once known, makes it immediate that you can just do exhaustive search over a list of poly(n) possibilities. Example: graph planarity, for which an O(n^6) algorithm follows from Kuratowski's theorem.

(As "planar" points out below, this was a bad example: even once you know a combinatorial characterization of planarity, giving a polynomial-time algorithm for it is still quite nontrivial. So, let me substitute a better example here: how about, say, "given an input n written in binary, compute how many colors are needed to color an arbitrary map embedded on a surface with n holes." It's not obvious a priori that this is computable at all (or even finite!). But there's a known formula giving the answer, and once you know the formula, it's trivial to compute in polynomial time. Meanwhile, "reduces to excluded minors / Robertson-Seymour theory" should probably be added as a separate overarching reason why something can be in P.)
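To make the map-coloring example concrete, here's a minimal sketch (the function name is mine; it assumes the standard Heawood/Ringel--Youngs formula for orientable surfaces of genus g >= 1, plus the Four Color Theorem for g = 0):

```python
import math

def map_coloring_number(g: int) -> int:
    """Colors needed for any map on an orientable surface of genus g.

    For g >= 1 this is the Heawood number floor((7 + sqrt(1 + 48g)) / 2)
    (Ringel-Youngs); for g = 0 it's 4 (the Four Color Theorem).
    math.isqrt keeps the arithmetic exact, so this runs in time
    polynomial in the bit-length of g -- once you know the formula,
    the problem is trivially in P.
    """
    if g == 0:
        return 4
    return (7 + math.isqrt(1 + 48 * g)) // 2
```

(One can check that flooring the integer square root first gives the same answer as flooring the whole expression, so exact integer arithmetic suffices.)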

Anyway, this is specifically not the sort of situation that most interests me.

(1) Dynamic programming. Problem can be broken up in a way that enables recursive solution without exponential blowup -- often because the constraints to be satisfied are arranged in a linear or other simple order. "Purely combinatorial"; no algebraic structure needed. Arguably, graph reachability (and hence 2SAT) is a special case.
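As a stock illustration (my choice of example, not one from the list above): edit distance, where the naive three-way recursion is exponential but tabulating over ordered subproblems is quadratic.

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming.

    The naive recursion branches three ways per character and takes
    exponential time; because the subproblems are ordered along the
    two strings, there are only O(len(a) * len(b)) distinct ones,
    and tabulating them gives a polynomial-time algorithm.
    """
    m, n = len(a), len(b)
    dp = list(range(n + 1))  # dp[j] = distance between a[:i] and b[:j]
    for i in range(1, m + 1):
        prev_diag, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            prev_diag, dp[j] = dp[j], min(
                dp[j] + 1,          # delete from a
                dp[j - 1] + 1,      # insert into a
                prev_diag + cost,   # substitute (or match)
            )
    return dp[n]
```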

(2) Matroids. Problem has a matroid structure, enabling a greedy algorithm to work. Example: minimum spanning tree. (Matching isn't solvable by a single greedy pass, but matroid intersection extends the framework to cover problems like bipartite matching.)
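A sketch of the canonical instance (Kruskal's algorithm; the function name and edge format are my own):

```python
def kruskal_mst_weight(n, edges):
    """Minimum spanning tree weight via Kruskal's greedy algorithm.

    Forests are exactly the independent sets of the graphic matroid,
    which is why always taking the cheapest edge that keeps the
    forest acyclic is globally optimal. `edges` is a list of
    (weight, u, v) tuples with vertices numbered 0..n-1.
    """
    parent = list(range(n))

    def find(x):  # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total = 0
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:          # edge keeps the set independent
            parent[ru] = rv
            total += w
    return total
```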

(3) Linear algebra. Problem can be reduced to solving a linear system, computing a determinant, computing eigenvalues, etc. Arguably, most problems involving "miraculous cancellations," including those solvable by Valiant's matchgate formalism, also fall under the linear-algebraic umbrella.
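A textbook instance of such a cancellation is Kirchhoff's matrix-tree theorem: exponentially many spanning trees get counted by one determinant. A sketch (my own implementation, using exact rational Gaussian elimination to avoid floating point):

```python
from fractions import Fraction

def spanning_tree_count(adj):
    """Count spanning trees via Kirchhoff's matrix-tree theorem.

    The count equals the determinant of any cofactor of the graph
    Laplacian -- a sum over exponentially many trees collapses into
    a single polynomial-time linear-algebra computation.
    `adj` is a symmetric 0/1 adjacency matrix.
    """
    n = len(adj)
    # Laplacian with the last row and column deleted.
    m = [[Fraction(sum(adj[i]) if i == j else -adj[i][j])
          for j in range(n - 1)] for i in range(n - 1)]
    det = Fraction(1)
    for col in range(n - 1):
        pivot = next((r for r in range(col, n - 1) if m[r][col]), None)
        if pivot is None:
            return 0
        if pivot != col:
            m[col], m[pivot] = m[pivot], m[col]
            det = -det
        det *= m[col][col]
        for r in range(col + 1, n - 1):
            factor = m[r][col] / m[col][col]
            for c in range(col, n - 1):
                m[r][c] -= factor * m[col][c]
    return int(det)
```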

(4) Convexity. Problem can be expressed as some sort of convex optimization. Semidefinite programming, linear programming, and zero-sum games are common special cases, each more restrictive than the last.
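As a toy one-dimensional illustration of why convexity helps (my own example -- ternary search, not a stand-in for real LP/SDP machinery):

```python
def minimize_convex(f, lo, hi, iters=200):
    """Minimize a convex function on [lo, hi] by ternary search.

    Convexity is doing all the work: comparing f at two interior
    points always rules out one third of the interval as a possible
    location of the minimum, so the bracket shrinks geometrically.
    This local-information-suffices property is a baby version of
    what makes large convex programs efficiently solvable.
    """
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) <= f(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2
```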

(5) Polynomial identity testing. Problem can be reduced to checking a polynomial identity, so that the fact that a nonzero low-degree polynomial has few roots (the Fundamental Theorem of Algebra in the univariate case; the Schwartz-Zippel lemma more generally) leads to an efficient randomized algorithm -- and in some cases, like primality, even a provably-deterministic algorithm.
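The randomized test itself fits in a few lines. A sketch (my own formulation, treating the two polynomials as black boxes over the integers):

```python
import random

def probably_identical(p, q, nvars, degree, trials=20, prime=(1 << 61) - 1):
    """Schwartz-Zippel randomized polynomial identity test.

    p and q are black boxes computing polynomials of total degree at
    most `degree` with integer coefficients. If they differ as
    polynomials, a uniformly random point mod a large prime catches
    the difference with probability >= 1 - degree/prime per trial,
    so a False answer is always correct and a True answer is correct
    with overwhelming probability.
    """
    for _ in range(trials):
        point = [random.randrange(prime) for _ in range(nvars)]
        if p(*point) % prime != q(*point) % prime:
            return False   # definitely different polynomials
    return True            # identical, with high probability
```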

(6) Markov Chain Monte Carlo. Problem can be reduced to sampling from the outcome of a rapidly-mixing walk. (Example: approximately counting perfect matchings.)
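A small sketch of the kind of chain involved (my own example: Glauber dynamics for graph colorings, which is known to mix in polynomial time when the number of colors is at least the maximum degree plus two):

```python
import random

def sample_coloring(adj, k, steps=10000, seed=None):
    """Sample a proper k-coloring via Glauber dynamics (MCMC).

    Repeatedly pick a random vertex and recolor it with a random
    color unused by its neighbors. For k >= max_degree + 2 this
    chain mixes rapidly, so after polynomially many steps the output
    is close to a uniformly random proper coloring -- the same
    engine behind approximate counting of perfect matchings.
    `adj` maps each vertex to its list of neighbors.
    """
    rng = random.Random(seed)
    color = {}
    for v in adj:              # greedy start (valid since k > max degree)
        used = {color[u] for u in adj[v] if u in color}
        color[v] = next(c for c in range(k) if c not in used)
    verts = list(adj)
    for _ in range(steps):
        v = rng.choice(verts)
        free = [c for c in range(k) if all(color[u] != c for u in adj[v])]
        color[v] = rng.choice(free)
    return color
```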

(7) Euclidean algorithm. GCD, continued fractions...
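For completeness, a sketch of the division chain that powers both examples (extended GCD and continued fractions share the same loop):

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b).

    Each step replaces (a, b) by (b, a mod b), so the operands shrink
    geometrically and the whole run takes time polynomial in the bit
    length of the inputs -- arguably the oldest nontrivial algorithm
    in P.
    """
    x0, y0, x1, y1 = 1, 0, 0, 1
    while b:
        q, a, b = a // b, b, a % b
        x0, x1 = x1, x0 - q * x1
        y0, y1 = y1, y0 - q * y1
    return a, x0, y0

def continued_fraction(a, b):
    """Continued-fraction expansion of a/b via the same division chain."""
    terms = []
    while b:
        terms.append(a // b)
        a, b = b, a % b
    return terms
```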

Miscellaneous / Not obvious exactly how to classify: Stable marriage, polynomial factoring, membership problem for permutation groups, various other problems in number theory and group theory, low-dimensional lattice problems...

My question is: what are the most important things I've left out?

To clarify:

I realize that no list can possibly be complete: whatever finite number of reasons you give, someone will be able to find an exotic problem that's in P but not for any of those reasons. Partly for that reason, I'm more interested in ideas that put lots of different, seemingly-unrelated problems in P or BPP, than in ideas that only work for one problem.

I also realize that it's subjective how to divide things up. For example, should matroids just be a special case of dynamic programming? Is solvability by depth-first search important enough to be its own reason, separate from dynamic programming? Also, often the same problem can be in P for multiple reasons, depending on how you look at it: for example, finding a principal eigenvalue is in P because of linear algebra, but also because it's a convex optimization problem.

In short, I'm not hoping for a "classification theorem" -- just for a list that usefully reflects what we currently know about efficient algorithms. And that's why what interests me most are the techniques for putting things in P or BPP that have broad applicability but that don't fit into the above list -- or other ideas for improving my crude first attempt to make good on my boast to the physicist.