Mathematics, logic, and computer science share a common history. Mathematics and logic are ancient dragons, and not much upheaval is to be expected in their futures. But where lies the future of computer science?

Many philosophers of the 20th century embraced a view going back at least to Leibniz: that, by using a formal language in which all assumptions are made explicit, most differences of opinion can be resolved by tracing them back to differing (implicit) assumptions. A glance at any newspaper makes it evident that this view, for all its potential, has had little impact on everyday life.

Computer science would seem to be much more fertile soil for such ideas, which makes it all the more surprising that computer programming is an activity performed by people who, by and large, have little or no knowledge of formal languages and formal methods.[1] For at least three decades, the computer industry has been full of buzzwords with little or no real meaning. Countless products, packages, and technologies have promised to empower their users beyond what is reasonable to expect.

There seems to be no evidence that the quality of newly written software is better than that of old software; in many application domains it even seems to be worse. This is partially explained by the increasing complexity of software and the thin spread of competent programmers: a disastrous combination. Appeals for more scientific and rigorous approaches to IT projects have been voices crying in the wilderness.[2] This unfortunate development is a consequence of the human mind meeting the unrelenting computer without sufficient respect for the complexity of the problem at hand. The obvious but tedious solution is the application of logic.

There are two approaches to logic in computer science. The first is to apply some logical formalism, e.g., propositional logic, to a conventional programming language. The second is to replace the conventional programming language with a language designed by logicians. In the former case we speak of formal methods, and in the latter of formal languages. The language advocated here, Martin-Löf’s type theory, is of the second kind, i.e., a formal language: type theory is not an add-on to a conventional programming language, but a programming language in its own right.[3]
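What makes type theory both a logic and a programming language is the propositions-as-types principle: propositions correspond to types, and a proof of a proposition is a program of the corresponding type. A minimal sketch of this correspondence, rendered here in Haskell rather than in full type theory (the function names are mine, for illustration only):

```haskell
-- Propositions as types: conjunction corresponds to the pair type,
-- implication to the function type.  A (total) program inhabiting a
-- type serves as a proof of the corresponding proposition.

-- Proof of "A and B implies A": project the first component.
andElimL :: (a, b) -> a
andElimL (x, _) = x

-- Proof of modus ponens, "(A implies B) and A together imply B".
modusPonens :: (a -> b) -> a -> b
modusPonens f x = f x
```

Martin-Löf’s theory goes further, since dependent types let propositions mention particular values, but the same reading applies throughout.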

Though Martin-Löf’s type theory is usually associated with functional programming, there is no real reason to limit its application to batch computing. Haskell’s monad system, and the many interactive programs written using it, show that the style of programming dubbed functional can also handle interactive programs.[4] The long-term goal of much ongoing research on Martin-Löf’s type theory is to enable programmers to write complete applications in the language of the theory, including graphical user interfaces, web interfaces, business logic, workflows, integration components (e.g., Web Services, COM+, CORBA), system calls (Win32, POSIX, etc.), database interaction (e.g., ODBC), native code interoperability, and database schemata.
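To illustrate the point about monads, here is a toy interactive Haskell program (my own example, not from the literature cited above): the interaction logic is kept as a pure function, and the IO monad sequences the actual input and output around it.

```haskell
-- The interaction logic is a pure function: easy to reason about and test.
greet :: String -> String
greet name = "Hello, " ++ name ++ "!"

-- The IO monad sequences the effectful steps around the pure core.
runGreeter :: IO ()
runGreeter = do
  putStrLn "What is your name?"
  name <- getLine
  putStrLn (greet name)
```

This separation, a pure core wrapped in a thin monadic shell, is the standard way functional programs accommodate interaction.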

There is no royal road to software correctness. Martin-Löf’s type theory is a great tool, but we also need a method to address the quality problems of software: “Controlling complexity is the essence of computer programming”. Therefore, we, programming language designers, software architects, and computer programmers, have to face the hard facts and deal with the problems accordingly. At this point I think it is fitting to review Descartes’ method of rightly conducting one’s reason. In his Discours de la méthode he writes that he thought the following four precepts would be sufficient:[5]

“The first was never to accept anything for true which I did not plainly know to be such; that is to say, carefully to avoid hasty judgement and prejudice, and to include nothing more in my judgements than what presented itself to my mind so clearly and so distinctly that I had no occasion to call it in doubt.

The second, to divide each of the difficulties I would examine into as many parts as possible and as was required in order better to resolve them.

The third, to conduct my thoughts in an orderly fashion, by commencing with those objects that are simplest and easiest to know, in order to ascend little by little, as by degrees, to the knowledge of the most composite things, and by supposing an order even among those things that do not naturally precede one another.

And the last, everywhere to make enumerations so complete and reviews so general that I was assured of having omitted nothing.”

I think that these four precepts, conceived in 1637, form a much more realistic and rigorous ground for software correctness than most modern project methodologies.

Descartes’ four precepts codify a mathematician’s way of approaching a problem. To my mind, there is no essential difference in how a computer programmer ought to approach a practical programming problem. Indeed, Martin-Löf’s type theory is a foundation both for constructive mathematics and for computer programming.

“In fact, I do not think that the search for high level programming languages that are more and more satisfactory from a logical point of view can stop short of anything but a language in which (constructive) mathematics can be adequately expressed”[6]

The appeal for a programming language of mathematical rigor is even more urgent today, when we develop larger and larger systems. If the confidence in software is to be maintained, the links with mathematics and logic that computer science had in its infancy must be maintained and nurtured. By its very nature, Martin-Löf’s type theory is a very important link between the three subjects.

References