A Little More on the Graph Isomorphism Algorithm



Looking at some of its components



[Image caption] From source; our congrats too

Laci Babai’s first talk a week ago Tuesday is now a webcast here. There is also a great detailed description of his talk by Jeremy Kun, including background on the problem and how the proof builds on Eugene Luks’s approach.

Today we talk some more about ingredients of Laci’s algorithm.

He has one more scheduled talk this coming Tuesday on its key vehicle of progress: Either the current graph (and associated groups) in the recursion can be split up for divide-and-conquer, or it embeds a special kind of graph named for Selmer Johnson that can be handled specially. This strikes us as following another Chicago idea—the Geometric Complexity Theory programme of Ketan Mulmuley and Milind Sohoni—of looking for objects that obstruct the progress of algorithms; this post by Joshua Grochow discusses an example for matrix multiplication.

Rather than try to anticipate these further details and Laci’s forthcoming preprint, we thought we might give a more high-level discussion of some of the ideas and elements we recognize that make the algorithm work. We could be way off, but we believe these ideas are key to his breakthrough result. In any event they are useful insights that we hope you enjoy.

All Aboard

Laci’s talk is old style, a chalk talk. Most talks these days are powerpoint-style, but for details and complex mathematics, there is nothing like seeing a master at work using chalk. The virtues of such a talk are many. It slows down the information rate to the audience. This is good. Slow is good. Flipping powerpoint slides rapidly can be hard to follow. Watching Laci write out in his handwriting:

Graph Isomorphism GI

is wonderful. You can follow it, internalize it, and perhaps understand it.

The second is the use of many boards. This is wonderful. It gives a larger state for the audience, so they are more likely to understand.

Overview

The key ideas of the algorithm for GI are really classic ones from the design of algorithms. The genius is getting them all to work together. The ideas break into two types: those that are general methods from computer science and those that are special to the GI problem.

Domain-General Points

Small Steps. One of the most important principles in mathematics and also in the design of complex algorithms is that we rarely can just write down the answer. That is, if we wish to construct something, we rarely can just write down the answer in closed form. The quadratic formula for the roots of $ax^2 + bx + c = 0$ is a classic counterexample:

$$x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}.$$
Of course there are no such formulas for high-degree polynomials nor for non-polynomial equations. Enter Newton’s method.

The idea of building the answer in small steps is the cornerstone of most important algorithms. Linear programming uses simplex or interior point methods, matching uses path-augmentation, and so on.
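To make the small-steps idea concrete, here is a minimal Python sketch of Newton’s method computing a square root; the function name, starting guess, and tolerance are our own illustrative choices.

    def newton_sqrt(a, tol=1e-12):
        """Approximate sqrt(a) by Newton's method: build the answer in small steps."""
        if a < 0:
            raise ValueError("a must be nonnegative")
        if a == 0:
            return 0.0
        x = max(a, 1.0)                          # any positive starting guess will do
        while abs(x * x - a) > tol * max(a, 1.0):
            x = 0.5 * (x + a / x)                # one small step toward the root
        return x

    print(newton_sqrt(2.0))   # ~1.4142135623730951

No closed form is consulted; each iteration just improves the current guess, which is exactly the point.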

Recursion. The reason the small-step idea is so powerful is that it plays well with running time. If the small steps can be bounded by a nice equation, then the algorithm in question will run fast. For Laci’s case the key is a recurrence of the form

$$T(n) \le q(n) \cdot T(n/2),$$

where $q(n)$ is a quasi-polynomial term. This type of equation yields a quasi-polynomial running time. If we had a constant $c$ in place of $q(n)$ then we would get a term $c^{\log n}$, which is polynomial, and this typifies how one gets polynomial-time algorithms by divide-and-conquer. Here $q(n)$ is not constant, but quasi-polynomial terms are closed under raising them to $\log$ powers, so that is the time we get. The recursion can be cut off when the problem size is polylog.
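To see the bound concretely, unroll the recurrence (a back-of-the-envelope sketch, assuming $q$ is nondecreasing):

$$T(n) \;\le\; q(n)\,T(n/2) \;\le\; q(n)\,q(n/2)\,T(n/4) \;\le\; \cdots \;\le\; q(n)^{\log_2 n}\,T(1).$$

If $q(n) = 2^{(\log n)^{c}}$ then $q(n)^{\log_2 n}$ is roughly $2^{(\log n)^{c+1}}$, which is still quasi-polynomial.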

This-or-That. As Laci says, “split-or-Johnson.” This is the hardest element to convey simply but may pay the highest dividends if we can recognize when it can be used. Here is an example: For every $\epsilon > 0$ there exists $\delta > 0$ such that for every $n$-vertex graph $G$:

Either $G$ has at least $\delta n^3$ triangles, or one can remove at most $\epsilon n^2$ edges to leave no triangles.

This enables a randomized algorithm to distinguish graphs that are triangle-free from those that are $\epsilon$-far from being triangle-free: it can try (say) $O(1/\delta)$ random triples of vertices and say “probably triangle-free” if none is a triangle. The randomized primality test of Gary Miller and Michael Rabin can also be phrased as an either-or in which the “or” branch exhibits a witness of compositeness (and sometimes even a nontrivial factor). To be sure, Laci’s algorithm isn’t randomized.
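A minimal Python sketch of such a tester (the adjacency-set representation, the sample-count constant, and the function name are our own illustrative choices):

    import random

    def probably_triangle_free(adj, delta, factor=3):
        """adj maps each vertex to the set of its neighbors (at least 3 vertices).
        Sample random triples; if the graph is epsilon-far from triangle-free,
        the removal lemma promises on the order of delta*n^3 triangles, so some
        sampled triple is a triangle with good probability.  One-sided error."""
        vertices = list(adj)
        trials = int(factor / delta) + 1
        for _ in range(trials):
            u, v, w = random.sample(vertices, 3)
            if v in adj[u] and w in adj[u] and w in adj[v]:
                return False          # found a triangle: certainly not triangle-free
        return True                   # no triangle seen: "probably triangle-free"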

Domain-Specific Elements

Graph Marking. Perhaps the main problem with GI has always been that looking locally at vertices of some graph may yield no information about the graph. Locally all the neighborhoods of vertices may look alike. This is terrible from the point of view of trying to tell one vertex from another, which is critical to be able to solve the GI problem.

But there is an expensive way to change this. We simply pick a set $S$ of vertices of $G$ and mark them in a unique manner. Now using these we can start to break up the structure of the graph and start to label vertices differently. Obviously a vertex near one of these marked vertices is different from one that is farther away. This is a critical “trick” that allows one to make progress on GI for the graph.

There is a cost. If we pick $k$ such special vertices in a graph, then we must try all ways to pick these in the other graph. That is expensive. If the graph has $n$ vertices then it costs on the order of $n^k$. So to avoid taking more than quasi-polynomial time, we must keep $k$ of size at most poly-logarithmic. It also needs $k$ of order at least $\log n$ to be effective, which is a reason the technique cannot simply be improved to run in polynomial time.
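A toy Python illustration of the marking trick and its $n^k$-ish cost (the distance-profile labels and the brute-force search over marked tuples are our own simplification, not the refinement machinery Laci actually uses):

    from collections import deque
    from itertools import permutations

    def bfs_dist(adj, src):
        """Distances from src in a graph given as {vertex: set of neighbors}."""
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return dist

    def labels_after_marking(adj, marked):
        """Label each vertex by its vector of distances to the marked vertices."""
        dists = [bfs_dist(adj, m) for m in marked]
        inf = float('inf')
        return sorted(tuple(d.get(v, inf) for d in dists) for v in adj)

    def consistent_markings(adj_g, adj_h, marked_g):
        """Try every ordered choice of the same number of marks in the other
        graph -- the expensive n^k part -- keeping only choices whose label
        multiset matches; any isomorphism must send the marks to one of these."""
        target = labels_after_marking(adj_g, marked_g)
        k = len(marked_g)
        return [mh for mh in permutations(adj_h, k)
                if labels_after_marking(adj_h, list(mh)) == target]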

Groups Are Nice. One of the most useful properties of groups is the “subgroup size trick.” In most structures, including graphs, a substructure can be very big. Suppose that $G$ is a graph on $n$ vertices. It has subgraphs with $n-1$ vertices: just remove any vertex. This cannot happen with groups. If $H$ is a group with $n$ elements, then the largest proper subgroup of $H$ can have at most $n/2$ elements. This is a simple consequence of the famous theorem of Joseph-Louis Lagrange that for any finite group $H$, the size of every subgroup of $H$ divides the size of $H$. If you keep taking subgroups and each one is proper, then you quickly make great progress reducing the size.
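To quantify the “great progress” (a standard consequence of Lagrange’s theorem, not something specific to Laci’s talk): in any strictly decreasing chain of subgroups

$$H = H_0 > H_1 > \cdots > H_t = \{e\},$$

each step at least halves the order, so $t \le \log_2 |H|$; for subgroups of $S_n$ this is at most $\log_2 n! = O(n \log n)$ steps.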

The worst case is when the factor is $2$, such as with the alternating group $A_n$ inside the symmetric group $S_n$. Babai calls a homomorphism $\varphi$ from a group $G$ into $S_k$ giant if its image is either $A_k$ or all of $S_k$. An element $x$ of the set that $G$ acts on is unaffected if $\varphi$ restricted to the subgroup of permutations in $G$ that fix $x$ is still giant. Subject to conditions about the setting that we don’t yet fully understand, his key new group-theoretic theorem is:

Unless $k$ is tiny, the restriction of a giant homomorphism to the intersection, over all unaffected elements $x$, of the subgroups that fix $x$ is still giant.

This looks like it should be either easily true or easily false, but Babai’s precise statement depends on the classification of finite simple groups.

The key inner point of the algorithm is to try subsets of the elements of size $k$, where $k$ is polylog but not tiny, so the theorem works for such $k$. The theorem helps decide in quasipolynomial time whether the resulting homomorphism into $S_k$ is giant. This distinguishing power in turn is used to make progress.
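For intuition about the “giant or not” decision, here is a small Python sketch using SymPy’s permutation groups; deciding via the group order is our illustrative shortcut on toy inputs, not the method Laci’s algorithm uses.

    from math import factorial
    from sympy.combinatorics import Permutation, PermutationGroup

    def is_giant(perms):
        """A subgroup of S_k is giant iff it is A_k or all of S_k,
        equivalently (by Lagrange) iff its order is at least k!/2."""
        group = PermutationGroup([Permutation(p) for p in perms])
        k = group.degree
        return 2 * group.order() >= factorial(k)

    k = 7
    k_cycle = list(range(1, k)) + [0]     # the k-cycle (0 1 ... 6) in array form
    swap = [1, 0] + list(range(2, k))     # the transposition (0 1)
    print(is_giant([k_cycle, swap]))      # True: these two generate all of S_7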

Open Problems

What is the full pattern of progress in the algorithm, with “split-or-Johnson” evidently directing the outermost loop and the group-theoretic machinery above directing the inner loops? This may need more details to unfold.