Modern mathematics is enormously complicated and sophisticated. It takes some courage, and perhaps some foolishness, to dare to suggest that behind the fancy theories lie serious logical gaps, and indeed error. But this is the unfortunate reality. Around the corner, however, is a new and more beautiful mathematics, a more honest mathematics, in which everything makes complete sense! It is my job to give people glimpses of this better, more logical alternative, and to empower young people especially to not be afraid to question the status quo and the dubious thinking that currently holds sway over the subject. My MathFoundations series of videos will investigate these problems in a systematic way; let me here at least briefly outline some of the problems, so you can get an initial idea, and so that perhaps some of you will start to think more seriously about these important issues. I will be saying a lot more about these topics in future posts.

The notion of rigour in mathematics is a difficult one to pin down. Certain historical periods accepted notions or arguments that later were deemed insufficiently precise, or even incorrect, but this often became clearer only once a more accurate way of thinking emerged. A familiar illustration is the geometry of Euclid’s Elements, which for most of the last two thousand years was considered the model for logical presentation of mathematics. Only in the nineteenth century did it become acknowledged that Euclid’s definitions of point and line were imprecise, that he implicitly used rigid motions for proofs without defining them, that intersections of circles were taken for granted, that notions of betweenness were used without being supported by corresponding definitions, that arguments by pictures were implicitly used, and that most of the three-dimensional parts of the geometry were logically unsubstantiated. In each of these cases it became possible to talk about alternative ways of thinking, due to non-Euclidean geometries, linear algebra, and the idea of geometry over finite fields. Einstein’s theory no doubt played a big role in loosening people’s conviction that Euclidean geometry was somehow God-given.

The foundations of trigonometry are also suspect as soon as one inquires carefully into the nature of an angle, a difficult concept that Euclid purposefully avoided. It requires either the notion of arc-length or of area contained by a curve, and both of these require calculus. The usual pastiche of trigonometric relations thus depends logically on a prior theory of analysis, a point that even most undergraduates never properly see. Indeed the very notion of a curve was problematic for seventeenth and eighteenth century mathematicians, and even to this day it is not straightforward. For example, one of the supposedly basic results about curves is the Jordan curve theorem: a simple closed curve in the plane separates the plane into exactly two regions, one bounded and one unbounded. Yet it is the rare undergraduate who can even state this result correctly, let alone prove it.
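The circularity can be made explicit. Here is a sketch in standard notation (taking for granted the very analytic machinery whose necessity is at issue): the angle at the origin subtended by the unit-circle arc running from (1, 0) to a point (x, y) with y ≥ 0 is, by definition, an arc-length, which is to say an integral.

```latex
% Sketch: the "angle" theta subtended by the unit-circle arc from (1,0)
% to (x,y), with y >= 0, is by definition the arc-length of that arc,
% i.e. an integral -- so calculus is already presupposed:
\theta \;=\; \int_{\mathrm{arc}} ds \;=\; \int_{x}^{1} \frac{dt}{\sqrt{1 - t^{2}}}
% Even the innocent statement "theta = pi/3" thus rests on a prior
% theory of integration.
```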

There are even surprising and serious logical gaps in first year calculus. The foundations of the “real number line” are notoriously weak, with continued confusion as to the nature of the basic objects and the operations on them. Attempts to define “real numbers” in the way applied mathematicians and physicists would prefer—as decimal expansions—run into the serious problems of how to define the basic operations and prove the usual laws of arithmetic. [Try to define multiplication between two infinite decimals, and then prove that this law is associative!] The approaches using equivalence classes of Cauchy sequences, or Dedekind cuts, suffer from an inability to identify when two “real numbers” are the same, and purposefully side-step the crucial issue of how we actually specify these objects in practice. Dedekind cuts in particular amount to pulling oneself up by one’s own bootstraps, with a notable poverty of examples. The continued fractions approach, while in many ways the most enlightened path, also suffers from difficulties. The result of these ambiguities is a kind of fantasy arithmetic of real numbers, a thought experiment floating above and beyond the reach of concrete examples and computations. This is why computer scientists have such a headache trying to encode these “real numbers” and their arithmetic on our computers.
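That headache is easy to exhibit. Here is a minimal Python illustration using IEEE 754 double-precision floats, the usual machine stand-in for “real numbers”; it shows the encoding problem rather than any true real-number arithmetic:

```python
# IEEE 754 doubles are what computers actually substitute for "real numbers",
# and even the associative law of addition fails for them.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c
right = a + (b + c)

print(left)            # 0.6000000000000001
print(right)           # 0.6
print(left == right)   # False: associativity fails in the encoded arithmetic
```

The failure is not a bug in any one language: none of 0.1, 0.2, 0.3 is exactly representable in binary, so each grouping rounds differently.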

The serious problems with the continuum are reflected by an attendant state of denial in our first year Calculus texts, which try to bluff their way through these difficulties by pretending that the foundations have been laid out properly elsewhere, that they can be replaced by some suitable belief system dressed up using “axiomatics”, or that they can be glossed over by appeals to authority. The lack of examples and illustrative computations is illuminating. A challenge to those pure mathematicians who object to these claims: can you show us some explicit first year examples of arithmetic with real numbers??

The Fundamental Theorem of Algebra, the key undergraduate result that a polynomial of degree n ≥ 1 with complex coefficients has a zero in the complex plane, is almost never proved properly. While it appears to be “proved” in complex analysis courses, it is doubtful that this is convincing to students: after all, by the time one has studied complex analytic functions to the point of being able to apply Liouville’s theorem, who can say for sure that one has not already used, perhaps implicitly, the very result one is ostensibly proving? In fact complex analysis as laid out in undergraduate courses is very much open to criticism, and not just because of the nebulous situation with “real numbers”. Yet this crucial result (FTA) is used all the time to simplify arguments.
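For reference, the standard complex-analysis argument is short enough to state in full, which makes its dependence on heavy machinery all the more visible. A sketch, taking Liouville’s theorem, and hence the whole apparatus behind it, on faith:

```latex
% Standard proof sketch of the FTA via Liouville's theorem.
% Suppose p(z) has degree n >= 1 and no complex zero.  Then f(z) = 1/p(z)
% is entire; since |p(z)| -> infinity as |z| -> infinity, f is bounded on
% all of C.  By Liouville's theorem f is constant, hence p is constant,
% contradicting deg p >= 1.
\deg p \ge 1,\ \ p(z) \neq 0 \ \forall z \in \mathbb{C}
\;\Longrightarrow\; f(z) = \frac{1}{p(z)} \ \text{entire and bounded}
\;\Longrightarrow\; f \ \text{constant (Liouville)},\ \text{contradiction.}
```

Every step here leans on analytic facts (entirety, boundedness at infinity, Liouville) whose own proofs occupy most of a semester.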

Closely connected with all of this is Cantor’s theory of “infinite sets” and its current acceptance by the majority as the foundation of mathematics. The essential problem that ultimately overwhelmed Cantor is still with us: what exactly is an “infinite set”? For a long time now it has been well known that Cantor’s initial “definition” of an infinite set was far too vague; considerations of the “set of all sets”, the “set of all groups” or the “set of all topological spaces” are fraught with difficulty and indeed paradox. The modern attitude is to slyly substitute some other term like “class” or “family” or “category” when possible contradictions might arise. Hopefully fellow citizens will have the decency not to raise the question of what exactly these words mean! If everyone plays along, there is no problem, right?

Other weaknesses of modern analysis arise with issues of constructibility and specification. What do we actually mean when we say “Let G be a Lie group”, or “Consider the space of all analytic functions on the circle”, or “Now take the nth homology group”?? Terminology is important: I have never seen a proper discussion of what the words let, consider or take actually mean in pure mathematics, despite their universal usage. Difficulties with terminology also affect the core set-up: the modern mathematician likes to frame her subject in terms not only of sets but also of functions. The latter term is almost as problematic as the former.

What precisely is a “function”? Okay, the usual definition is something like “a rule that inputs one kind of object and outputs a possibly different kind of object”. But this passes the buck from defining the term “function” to defining the term “rule”. Are we thinking about a computer program here? If so, what kind of program? What language and syntax? What conventions about how to specify a program, and how does one tell if my program defines the same “function” as your program??
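The difficulty is concrete: if a “function” is a program, then two visibly different programs can compute the same values, and the general question of whether two programs compute the same function is undecidable (a consequence of Rice’s theorem). A small Python sketch of the problem:

```python
# Two different "rules" -- are they the same "function"?
def f(n: int) -> int:
    return 2 * n

def g(n: int) -> int:
    return n + n

# We can check agreement on finitely many inputs...
print(all(f(n) == g(n) for n in range(1000)))   # True

# ...but no amount of finite testing is a proof, and deciding whether two
# arbitrary programs compute the same function is undecidable in general.
```

Here the equality happens to be provable by elementary algebra, but for arbitrary programs no such argument is available even in principle.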

The modern analyst likes to go further, and also talk about “arbitrary functions”, allowing not only those that can be described in some concrete way by an arithmetical expression or a computer program, but also all those “functions which are not of this form”. What exactly this means, if anything, is highly debatable. The lack of clear examples that can be brought to bear on such a discussion is a hint that we are chatting here about something other than mathematics. Surely a distinction ought to be drawn between “functions” which one can concretely specify and “functions” which one can only talk about. Even better would be to cease discussion about the latter entirely, or at least relegate them to philosophy!

The theoretical use of limits in calculus is generally lax, despite all the huffing and puffing with epsilons and deltas, whose seeming precision obscures the more devious sleights of hand, of which there are many. For example, while care is often taken to “prove” the Intermediate Value Theorem (which is obvious to any engineer or physicist), the use of “limit” in the usual definition of the Riemann integral is almost a complete cheat. Have a look at your calculus book carefully in this section, and see what I mean! Most first year students are blissfully unaware of the vast logical gaps in their courses. Most mathematicians do not go out of their way to point these out.
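To see the cheat, compare what a text can actually exhibit with what its definition demands. A short Python sketch: it computes Riemann sums only over the easy uniform partitions with right-hand sample points, which is all most first year courses ever display, whereas the definition quantifies over all tagged partitions as the mesh shrinks:

```python
# Riemann sums for the easy case only: uniform partitions, right endpoints.
# The textbook definition's "limit" ranges over ALL tagged partitions as
# the mesh goes to zero -- a limit over a net that the texts never define.
def riemann_sum(f, a, b, n):
    """Right-endpoint Riemann sum of f on [a, b] with n equal subintervals."""
    dx = (b - a) / n
    return sum(f(a + (i + 1) * dx) for i in range(n)) * dx

for n in (10, 100, 1000):
    print(n, riemann_sum(lambda x: x * x, 0.0, 1.0, n))
# The values approach 1/3, but exhibiting one convenient sequence of sums
# is not the same as verifying the limit over arbitrary partitions.
```

Computing this sequence is a pleasant exercise; it is the gap between this sequence and the full quantification over partitions that the texts quietly step over.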

Of course there is much more to be said about these issues. All of them will be addressed in my MathFoundations YouTube series, but I think it useful to also begin a discussion of them here in this blog. There is another, more beautiful, mathematics waiting to be discovered; but before we can properly see it, we need to clean out the cobwebs that currently obstruct our vision.