
If you create a Cayley graph of a group, is that mathematics? What if you simulate the orbit of an element of the group? Where do mathematical models cease to be mathematics? The natural numbers still behave the way they do in an accurate model of them even when no person is using the model. The sole purpose of a computer is to model mathematical objects.

The goal of computer science is to construct or describe models of mathematical objects, even when there is no implementation of them on any computer and no feasible way to obtain the result a program is meant to compute.

Algorithms, ironically, are the aspect of computer science that has the most influence on other fields of mathematics.

A simple case: if you determine that an algorithm's time complexity is too great for it ever to be implemented, that is still a contribution to computer science. On the other hand, if you show that two problems are equivalent, say the clique decision problem and factoring large numbers, and there is an algorithm for one of them of lower time complexity, then there must be an algorithm of that same complexity for the other. The question becomes how much information about one problem you can obtain from the solution, or exhaustive search, of another. The time complexity of an algorithm is an invariant measure of it that gives you a feel for how hard a problem is. When two problems are not equally hard, complex, or time consuming, the harder one cannot be solved using just the easier one's solutions (unless those solutions are processed by an algorithm that closes the gap in complexity in the total algorithm).

By studying different problems and their time complexities, you see how the generality of a problem relates to how difficult it is to solve in general. It's also instructive to see, for example, logic gates implemented as the solution to a game of Minesweeper, because it shows you what the properties of a Turing-complete system look and feel like.
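To make the transfer concrete, here is a minimal Haskell sketch of how a reduction carries an algorithm from one problem to another. The names (`Solver`, `Reduction`, `solveVia`) are my own placeholders, not a real reduction between clique and factoring.

```
-- A decision problem, abstractly: instances of type a, answered yes/no.
type Solver a = a -> Bool

-- A reduction maps instances of problem A to instances of problem B
-- in a way that preserves the yes/no answer.
type Reduction a b = a -> b

-- If A reduces to B and B has a solver, then A inherits a solver whose cost
-- is (cost of the reduction) plus (cost of solving B). If the reduction is
-- cheap, the two problems cannot differ much in hardness.
solveVia :: Reduction a b -> Solver b -> Solver a
solveVia reduce solveB = solveB . reduce
```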

However, these things can also be deceptive. Initially it might look like you must check every permutation of a type of object to find the subset of permutations that satisfies a property, which indicates a hard problem if the number of permutations grows quickly with the size of the underlying set. However, the search might have a second stage where the independent information saturates: the permutations tried no longer contribute only themselves to the solution of the problem, or else the rest are entirely determined by the information already collected, much like reaching the minimum of three points needed to determine a circle.
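For the circle example: three non-collinear points pin the circle down completely, so any further points carry no independent information. A small Haskell illustration, using the standard circumcenter formula; the function name `circleThrough` is mine.

```
type Point = (Double, Double)

-- The unique circle (center and radius) through three non-collinear points.
-- Returns Nothing when the points are collinear (the determinant d is zero).
circleThrough :: Point -> Point -> Point -> Maybe (Point, Double)
circleThrough (ax, ay) (bx, by) (cx, cy)
  | d == 0    = Nothing
  | otherwise = Just ((ux, uy), sqrt ((ax - ux)^2 + (ay - uy)^2))
  where
    d  = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    sa = ax^2 + ay^2
    sb = bx^2 + by^2
    sc = cx^2 + cy^2
    ux = (sa * (by - cy) + sb * (cy - ay) + sc * (ay - by)) / d
    uy = (sa * (cx - bx) + sb * (ax - cx) + sc * (bx - ax)) / d
```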

There are also theorems that characterize the kinds of data that are viable members of the search space. This is a sort of Mandelbrot program: use your eyes to see what solutions to the problem look like, find a way to enforce those characteristics, and show that they hold for all possible solutions. A good example is projective planes. The incidence diagrams of finite planes don't have enough symmetry to decide whether even large arrangements that form partial matches are viable pieces of the incidence diagram or not, so most algorithms would require orders of magnitude more time than the age of the universe to determine the projective planes of a given order, and even those that have managed to find a second stage require massive amounts of searching and weaving of the data together, over years of actual runtime, to come to a conclusion.

The picture I'm painting here is not a success story; it reflects a real lack of understanding of what incidence diagrams for projective planes are like and what rules govern them. For a well-understood object there should be an algorithm whose workings reveal the difference between that object and pure noise, a strategy that works well enough to make guesses about the nature of the solutions. So are projective planes' incidence diagrams pure noise? Looking at the incidence diagrams of small planes, they look very distinctive, so the suggestion is that the typical plane looks nothing like any known plane. But in fact that overall pattern can be induced in a binary matrix and shown to hold for planes in general, which means it's possible, starting from that characterization, to greatly reduce the problem's search space. Hence, characterizing the search space of a hard problem is a lot like characterizing the object itself through invariant properties.
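To make "enforce those characteristics" concrete: a projective plane of order $n$ has $n^2+n+1$ points and the same number of lines, every line contains $n+1$ points, and any two distinct lines meet in exactly one point (and dually for points). Here is a minimal Haskell sketch that checks a candidate 0/1 incidence matrix against these constraints, the kind of filter that prunes the search space; the name `isPlaneIncidence` is my own.

```
import Data.List (transpose)

type Incidence = [[Int]]  -- rows = lines, columns = points, entries 0 or 1

-- Every row has exactly n+1 ones, and any two distinct rows share exactly
-- one common 1 (two lines meet in exactly one point); dually for columns.
isPlaneIncidence :: Int -> Incidence -> Bool
isPlaneIncidence n m =
     length m == v && all ((== v) . length) m
  && rowsOk m && rowsOk (transpose m)
  where
    v = n * n + n + 1
    rowsOk rows =
         all (\r -> sum r == n + 1) rows
      && and [ sum (zipWith (*) r s) == 1
             | (i, r) <- zip [0 :: Int ..] rows
             , (j, s) <- zip [0 ..] rows
             , i < j ]
```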

I would say that problems relate to their complexity in much the same way as the positions of the elements of an infinite set of natural numbers relate to the set's density.

But what is an algorithm? What is a program?

I refer you to Wikipedia for Martin-Löf type theory and the Calculus of Constructions for some specific implementations of computation. For full coverage, see Practical Foundations for Programming Languages (Harper). For a treatment of domain theory, I refer you to Domain Theory in Logical Form (Abramsky).

One answer, given in domain theory by Scott domains, is that the logical structure of a program, as a space of properties and inferences, is like a lattice of subalgebras or normal subalgebras of an abstract algebra, one that need not have a head or an abstract algebra containing them all: just a forked or flawed crystal converging to the space where that top would be. To formulate recursive definitions of program logic is to find the fixed points of the endomorphisms of this order structure, which are equivalent to continuous maps, to itself, of a topological space derived from this unfinished lattice. The Stone dual of this is the executed program operating according to this logic, a locale whose points are the algorithms. In actual domain theory you have to generalize this a bit, so that the category of domains you work in is cartesian closed and closed under the constructions you need, in other words enough like a topos for the programs it represents to be arbitrarily expressive.
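One face of "recursive definitions are fixed points of endomorphisms" is visible directly in Haskell, where a recursive function can be obtained as the least fixed point of a non-recursive functional. A minimal sketch:

```
import Data.Function (fix)

-- fix f = f (fix f): the least fixed point of f, the limit of the chain
-- bottom, f bottom, f (f bottom), ... in the Scott-domain ordering.

-- One step of unfolding factorial, written without recursion.
factStep :: (Integer -> Integer) -> (Integer -> Integer)
factStep rec n = if n == 0 then 1 else n * rec (n - 1)

-- Factorial itself is the fixed point of that functional.
factorial :: Integer -> Integer
factorial = fix factStep

-- factorial 5 == 120
```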

All of this is a buried version of Lawvere's famous statement that there's a Galois connection between syntax and semantics. More specifically, between a theory and models of the theory, or between a logic and the space of computations it performs (denotational semantics).
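As a toy illustration of that Galois connection (everything below is a made-up miniature of my own, not Lawvere's construction): take a "theory" to be a set of named properties and a "model" to be a set of objects. The two operators below satisfy `theory` ⊆ `theoryOf ms` exactly when `ms` ⊆ `modelsOf theory`.

```
type Object  = Int
type Formula = Object -> Bool

universe :: [Object]
universe = [0 .. 20]

-- A tiny stock of named formulas over the universe.
formulas :: [(String, Formula)]
formulas = [("even", even), ("small", (< 10)), ("positive", (> 0))]

-- Mod: the objects satisfying every formula named in the theory.
modelsOf :: [String] -> [Object]
modelsOf theory =
  [ x | x <- universe
      , all (\(name, p) -> name `notElem` theory || p x) formulas ]

-- Th: the formulas satisfied by every object in the set.
theoryOf :: [Object] -> [String]
theoryOf objects = [ name | (name, p) <- formulas, all p objects ]
```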

On the other hand, there is the relationship between constructions of an object and proofs of a theorem. I don't know if this is the exact reasoning, but if propositions can be read as simultaneously holding, existential properties, then in aggregate they describe a kind of object, namely the thing that simultaneously holds those properties, so that to prove those propositions true is to prove the type is inhabited by some actual thing. The Curry-Howard correspondence is the statement that proofs are equivalent to programs, or constructions, and more generally that intuitionistic logics correspond to typed lambda calculi, and thence to cartesian closed categories (which agrees with how, for logicians, a topos is just the category of sets for an intuitionistic logic). This affects language design quite a bit, because it provides a means of computing with proofs instead of algorithms. It is the basis for the philosophy of Homotopy Type Theory (Univalent Foundations Program), as well as much of intuitionistic logic and computer science.
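A small Haskell sketch of propositions-as-types, as far as my own illustration goes: conjunction is a pair, implication is a function, disjunction is Either, and writing a total term of a type is proving the corresponding proposition.

```
-- "A and B implies A": the proof is a projection.
andElim1 :: (a, b) -> a
andElim1 (x, _) = x

-- Modus ponens: from A -> B and A, conclude B; the proof is application.
modusPonens :: (a -> b) -> a -> b
modusPonens f x = f x

-- Commutativity of disjunction: A or B implies B or A.
orComm :: Either a b -> Either b a
orComm (Left x)  = Right x
orComm (Right y) = Left y

-- By contrast, there is no total term of type Either a (a -> Void), the law
-- of excluded middle, which is what keeps this logic intuitionistic.
```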

There is some degree of interplay between linear logic (which has been related to noncommutative geometry) and computer science here. Physics, Topology, Logic and Computation: A Rosetta Stone (Baez, Stay) shows that if you replace cartesian closed categories with closed symmetric monoidal categories, you can generalize the Curry-Howard isomorphism to get all sorts of wonderful quantum behavior and semantics. Stone duality features again in the study of Chu spaces, which are a model of linear logic that is, in effect, pretty similar to domain theory.

So, if all this applies to intuitionistic logic, what is classical logic? It turns out that certain implementations of continuation passing, like call/cc from Scheme, as well as the introduction of control flow or procedural programming into a purely functional language like Haskell, amount to making the logic nonconstructive. The thing that makes Haskell so bothersome (many say outright useless) is that you can't make a program communicate with the outside world, or depend on outside communication such as runtime-dependent parameters, without breaking out of the pure fragment of the language to talk in procedures with the so-called IO monad, or else destroying all the built-in logic that verifies your program's behavior. So the lesson is that non-constructive mathematics is like an interactive program, and constructive mathematics is like a library.
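A minimal sketch of that split (the names are mine): the pure part is an ordinary library of functions, and everything that touches the outside world, including runtime-dependent parameters, is quarantined behind the IO type.

```
-- The "library": pure, no communication with the outside world.
greeting :: String -> String
greeting name = "Hello, " ++ name ++ "!"

-- The "interactive program": its type, IO (), marks it as a procedure that
-- reads a runtime-dependent parameter and performs output.
main :: IO ()
main = do
  name <- getLine
  putStrLn (greeting name)
```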