
Basic Levels:

Let's look at things at the simplest and most basic level.

For math, we have:

2+3=5

I learned about that when I was very, very young. I can look at the most basic elements: two objects, and three objects. Great.

For computer programming, most people tend to use a high-level language. Some high-level languages even "compile" into another, lower-level high-level language such as C. C can then be translated into Assembly language, and Assembly language in turn gets converted into machine code. A lot of people think the complexity ends there, but it doesn't: modern CPUs take the machine code as instructions, but then run "micro code" to actually execute those instructions.
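
As a rough illustration of those layers (the exact output depends on the compiler, its settings, and the CPU, so treat this as a sketch rather than real compiler output), even a one-line C function gets lowered into a handful of machine instructions, each of which the processor then carries out via its own micro code:

    /* One line of C... */
    int add(int a, int b) {
        return a + b;
    }

    /* ...which a typical x86-64 compiler might lower to roughly:
     *     mov eax, edi    ; copy the first argument
     *     add eax, esi    ; add the second argument
     *     ret             ; return the result in eax
     * and each of those instructions is, in turn, executed by
     * micro code inside the CPU, a level most programmers never see. */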

This means that, at the most basic level (dealing with the simplest of structures), we are dealing with micro-code, which is embedded in the hardware and which most programmers never use directly, let alone update. In fact, not only do most programmers never touch micro code (0 levels above it), they don't touch machine code (1 level above it), nor even Assembly (2 levels above it), except perhaps for a bit of formal training during college. Most programmers spend their time 3 or more levels higher.

Furthermore, if we look at Assembly (which is about as low-level as people typically get), each individual step is typically understandable by people who have been trained and have the resources to interpret that step. In this sense, Assembly is much simpler than a higher-level language. However, Assembly is so simple that performing complex tasks, or even modest ones, is very tedious. Higher-level languages free us from that tedium.
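
To make that tedium concrete, here is a small sketch in C (illustrative names, not from any real codebase): one readable high-level loop hides several steps that an assembly programmer would have to spell out one instruction at a time.

    /* In C, summing an array is a couple of readable lines: */
    long sum(const int *values, int count) {
        long total = 0;
        for (int i = 0; i < count; i++)
            total += values[i];   /* the compiler handles loads, adds, and bookkeeping */
        return total;
    }

    /* In assembly, the same task means manually loading each element into a
     * register, adding it to an accumulator, advancing an index, comparing,
     * branching back, and respecting the calling convention; every step the
     * compiler quietly handles for us has to be written out by hand. */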

In a court case about "reverse engineering", a judge noted that even if code can theoretically be handled one byte at a time, modern programs involve millions of bytes, so some sort of records (like copies of the code) must be made just for such an effort to be feasible. (Therefore the copies made during internal development weren't considered a violation of copyright law's general rule against making copies.) (I'm probably thinking of the case about making unauthorized Sega Genesis cartridges, but may be thinking of something said during the Game Genie case.)

Modernization:

Do you run code meant for 286s? Or do you run 64-bit code?

Mathematics uses fundamentals that extend back for millennia. With computers, people typically consider investment in something two decades old to be a useless waste of resources. That means mathematics can be a lot more thoroughly tested.

Standards of Used Tools:

I was taught (by a friend who had more formal computer programming training than I do) that there is no such thing as a bug-free C compiler that meets the C specification. This is because the C language essentially assumes the possibility of an infinite stack. Obviously, such an impossible requirement has to be deviated from when people make usable compilers that run on actual machines, which are a bit more finite in nature.
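
A minimal sketch of that tension (the function and the argument below are just illustrative): the language itself places no bound on how deeply calls may nest, but any real compiler has to run the result on a finite stack.

    #include <stdio.h>

    /* Each call adds another stack frame; the language puts no limit
     * on how deep this may go. */
    unsigned long depth(unsigned long n) {
        if (n == 0)
            return 0;
        return 1 + depth(n - 1);
    }

    int main(void) {
        /* On paper this is a perfectly well-defined computation; on a
         * typical real machine, a large enough argument exhausts the
         * finite stack and the program crashes (the exact limit depends
         * on the platform, compiler, and build settings). */
        printf("%lu\n", depth(100000000UL));
        return 0;
    }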

In practice, with JScript in Windows Script Host, I've been able to accomplish a lot of good using objects. (I like the environment because the toolset needed to try new code is built into modern versions of Microsoft Windows.) When using this environment, I've found that sometimes there is no easily-findable documentation on how an object works. However, using the object is so beneficial that I do so anyway. So what I do is write code, which may be as buggy as a hornet's nest, in a nicely sandboxed environment where I can see the effects and learn about the object's behaviors while interacting with it.

In other cases, only after I've figured out how an object behaves have I found that the object (bundled with the operating system) is buggy, and that it is a known issue that Microsoft has intentionally decided not to fix.

In such scenarios, do I rely on OpenBSD, created by masterful programmers who put out new releases on a regular schedule (twice a year), with a famous security record of "only two remote holes" in 10+ years? (Even they have errata patches for less severe issues.) No, by no means. I don't rely on such a higher-quality product, because I'm working for a business that supports businesses that supply people with machines running Microsoft Windows, so that is what my code needs to work on.

Practicality and usability require that I work on the platforms people find useful, and that is a platform which is famously bad for security (even though tremendous improvements have been made since the early days of the millennium, when the same company's products were much worse).

Summary:

There are numerous reasons why computer programming is more error-prone, and that is accepted by the community of computer users. In fact, most code is written in environments which will not tolerate the cost of error-free efforts. (Some exceptions, such as developing security protocols, may receive a bit more effort in this regard.) Besides the commonly cited reasons (businesses not wanting to invest more money, and not wanting to miss artificial deadlines that keep customers happy), there is the impact of the march of technology, which simply means that if you spend too much time, you will end up working on an obsolete platform, because things do change significantly within a decade.

Offhand, I can recall being surprised at just how short some very useful and popular functions were, when I saw some source code for strlen and strcpy. For instance, strlen may have been something like "int strlen(char *x){char *y=x;while (*y++);return (y-x)-1;}"
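
For reference, here is roughly what such functions look like when written out cleanly. This is a sketch of the classic textbook versions, not a copy of any particular library's source, and the my_ prefixes are just to avoid clashing with the standard library names:

    #include <stddef.h>

    size_t my_strlen(const char *s) {
        const char *p = s;
        while (*p)                  /* walk forward until the terminating NUL */
            p++;
        return (size_t)(p - s);
    }

    char *my_strcpy(char *dst, const char *src) {
        char *d = dst;
        while ((*d++ = *src++) != '\0')
            ;                       /* copy bytes, including the final NUL */
        return dst;
    }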

However, typical computer programs are much lengthier than that. Also, a lot of modern programming will use other code which may be less thoroughly tested, or even known to be buggy. Today's systems are much more elaborate than what can easily be thought through, except by hand-waving away a lot of the minutiae as "details handled by lower levels".

This mandatory complexity, and the certainty of working with complex and even incorrect systems, makes computer programming a lot harder to verify than a lot of mathematics, where things tend to boil down to much simpler levels.

When you break things down in mathematics, you get to individual pieces that children can understand. Most people trust math, or at least basic arithmetic (or, at the very least, counting).

When you really break down computer programming to see what's happening under the hood, you end up with broken implementations of broken standards, with code that is ultimately executed electronically, and that physical implementation is just one step below microcode, which most university-trained computer scientists don't dare touch (if they are even aware of it).

I've spoken with some programmers, in college or recently graduated, who outright object to the notion that bug-free code can be written. They've written off the possibility, and though they acknowledge that some impressive examples (which I have been able to show them) make for convincing arguments, they consider such samples to be unrepresentative rare flukes, and still dismiss the possibility of being able to count on such high standards. (A much, much different attitude than the much more trustworthy foundation we see in math.)