The professor began my undergrad number theory class by drawing a distinction between algebra and analysis, two major themes in mathematics. This distinction has been discussed elsewhere, and seems to be rather slippery (to mathematicians at least, because it evades precise definition). My professor seemed to approach it from a synesthetic perspective — it’s about the feel of it. Algebra is rigid, geometric (think polyhedra), perfect. The results are beautiful, compact, and eternal. By contrast, analysis is messy and malleable. Theorems have lots of assumptions which aren’t always satisfied, but analysts use them anyway and hope (and check later) that the assumptions really do hold up. Perelman’s famous proof of the Poincaré conjecture, as I understand it, is essentially an exercise in going back and checking analytic assumptions. Analysis makes precise, and works with, the notion of “good enough” — two things don’t have to be equal; they only need to converge toward each other with a sufficiently small error term.

I have been thinking about this distinction in the realm of programming. As a Haskell programmer, I focus mostly on algebraic-feeling programming. I like to perfect my modules, making them beautiful and eternal, built up from definitions that are compact and each obviously correct. I take care with my modules when I first write them, and then rarely touch them again (except to update them with dependency patches that the community helpfully provides). This is in harmony with the current practice of denotative programming, which strives to give mathematical meaning to programs and thus make them easy to reason about. This meaning has, so far, always been of an algebraic nature.

What a jolt I felt when I began work at Google. The programming that happens here feels quite different — much more like analysis, I presume (I mostly studied algebraic areas of math in school, so I have less experience with the other side). Here the codebase and its dependencies are constantly in motion, gaining requirements, changing direction. “Good enough” is good enough; we don’t need beautiful, eternal results. It’s messy, it’s malleable. We use automated tests to keep things within appropriate error bounds — proofs and obviously-correct code would be intractable. We don’t need perfect abstraction boundaries — we can go dig into a dependency and change its assumptions to fit our needs.

Much of the ideological disagreement within the Haskell community, and between nearby communities, happens across this line. Unit tests are not good enough for algebraists; proofs seem crazy to an analyst. QuickCheck strikes a nice balance: it offers fuzzy unit tests for the algebraist, giving compact, simple, meaningful specifications to the fuzzy business of testing. I wonder: can we find a dual middle ground? I have never seen an analytic proof of software correctness. Can we say with mathematical certainty that our software is good enough, and what would such a proof look like?
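To make the QuickCheck idea concrete, here is a minimal hand-rolled sketch of property-based testing in plain Haskell. (Real QuickCheck generates random inputs via its `Arbitrary` class and shrinks counterexamples; the deterministic enumerated inputs below are just a stand-in so the snippet needs nothing beyond `base`.)

```haskell
import Data.List (inits)

-- The "specification": an algebraic law we expect to hold for all inputs.
prop_reverseInvolution :: [Int] -> Bool
prop_reverseInvolution xs = reverse (reverse xs) == xs

-- A crude input generator: prefixes of a fixed list stand in for
-- QuickCheck's Arbitrary-based random generation.
testInputs :: [[Int]]
testInputs = inits [1..20]

main :: IO ()
main =
  if all prop_reverseInvolution testInputs
    then putStrLn ("+++ OK, passed " ++ show (length testInputs) ++ " tests.")
    else putStrLn "*** Failed!"
```

The property is a compact, meaningful specification — the algebraist’s side of the bargain — while the checking itself is fuzzy sampling rather than proof — the analyst’s.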

UPDATE: Here’s a lovely related post via Lu Zeng. Algebra vs. Analysis predicts eating corn?