Crazy numbers on the C64

At college I used my trusty C64 to help me do the strangest mathematical calculations, like those in the article on artificial intelligence on the C64.

It was around that time, too, that I started to notice something really strange about the C64: it couldn’t calculate properly! Well, that may be a bit harsh, but I had to adapt many of my early mathematical programs to take some really strange rounding errors into account.

To make things worse, these strange errors didn’t just appear in the middle of hugely complex equations, but also in simple, trivial calculations. Just try, for instance, 9 to the power of 2, or the square root of 10 to the power of 2, minus 10.

Every kid knows that the first should yield 81 and the second 0, but the C64 thinks otherwise, as you can see below.
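You can reproduce the same phenomenon on a modern machine. This is a minimal sketch in Python, which uses 64-bit IEEE 754 doubles rather than the C64’s 5-byte format, so the exact digits differ (in doubles, 9 to the power of 2 even happens to come out exact), but the square-root example misbehaves in just the same way:

```python
import math

# Python works in 64-bit doubles, not the C64's 5-byte floats,
# so the exact error differs, but SQR(10)^2 - 10 is still not 0.
result = math.sqrt(10) ** 2 - 10
print(result)        # a tiny non-zero value instead of 0
print(result != 0)   # True
```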

To make matters worse, you could end up in programming “hell” if you didn’t take this strange C64 logic into account, as your program could get trapped in an infinite loop and run forever. Want to try it for yourself? Just look at the 2 screenshots below and run the code yourself.

In the first one, the C64 will “forget” to print the final result, i.e. “1”, and in the second, you will have to press RUN/STOP as the program will go on forever (which is strange, as the simple code looks perfectly OK and after a couple of iterations we should jump out of the loop once X = 1 is reached). Again, the C64 seems to think differently.
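The runaway loop is easy to reproduce in Python as well – a sketch in doubles rather than C64 floats, with a safety cap standing in for pressing RUN/STOP:

```python
# Add 0.1 until x equals 1 -- with binary floats, it never does.
# The step cap stands in for pressing RUN/STOP.
x = 0.0
steps = 0
while x != 1.0 and steps < 20:
    x += 0.1
    steps += 1
print(x, steps)  # x has sailed past 1.0 and the cap was hit
```

After the 10 additions you would expect, x is just short of 1.0, so the equality test never fires and the loop keeps going.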

The reason for this lies with the internal logic your breadbox uses when it needs to convert decimal (or floating-point) numbers into the binary equivalent.

The logic is twofold. First up, in the real world there is an infinite number of decimal values between any two integers. Between 0 and 1, for instance, you can think of basically any number that fits in between, like 0.000001 or 0.989898989898989 and so on. The problem, of course, is that you simply cannot ask your computer, which thinks in binary, to store an infinite number of possible values. Depending on how many bits it can allocate, the computer will have to round things off in order to squeeze that infinite range of numbers into its finite number of bits.

So, the C64 provides you, according to the C64 Programmer’s Reference Guide (Chapter 1, page 5), with 5 bytes of memory per floating-point number. That’s not much for storing basically any possible number…

The second part of the process is, of course, the algorithm used to convert floating-point numbers to binary and back.

The BASIC interpreter of the C64 was written by Microsoft and uses “Excess 128” as its format for the decimal conversion. This format worked well with 8-bit CPUs like the 6510, because the conversion could be carried out with simple binary shifts and rotates – operations that were relatively quick on these processors. The alternative formats were best suited to 16-bit processors, where integer multiply and divide operations were common and allowed more robust calculations with higher accuracy.

Let’s now take a closer look at how the algorithm, applied to those 5 bytes of memory, causes the calculation errors shown above. We’ll take the little program that adds 0.1 to X until it reaches 1. We would assume that after 10 iterations the program would end, but as we saw, it just went on and on until RUN/STOP was pressed. A look at the value of X shows that the looping continued well beyond 1 and, more bizarrely, X contained a strange decimal value.

To understand this, we need to see how the decimal value “0.1” that we’re adding to X is stored in binary format using the Excess 128 algorithm:

0 01111011 10011001100110011001101

This is how it’s stored, with the first bit representing whether it is a positive or negative value (i.e. 0 = positive, 1 = negative).

The next 8 bits are used for the exponent (I’ll get to that later) and the following 23 bits for the actual value. For that last part there is a strange quirk in this format: a 24th bit is always “assumed” but never stored. This “virtual” bit is always 1 and is followed by an implied binary point, so what will actually be used is:

0 01111011 1.10011001100110011001101
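You can inspect this bit pattern yourself. The sketch below uses Python’s struct module to pack 0.1 into the same 1 + 8 + 23 bit single-precision layout described above (Python itself calculates in 64-bit doubles, hence the detour through struct):

```python
import struct

# Pack 0.1 as a 32-bit single-precision float and split the 32 bits
# into the sign, exponent and mantissa fields described above.
bits = format(struct.unpack('>I', struct.pack('>f', 0.1))[0], '032b')
sign, exponent, mantissa = bits[0], bits[1:9], bits[9:]
print(sign, exponent, mantissa)
# 0 01111011 10011001100110011001101
```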

Still with me? OK, let’s continue.

The value of the exponent part (01111011) is 0×128 + 1×64 + 1×32 + 1×16 + 1×8 + 0×4 + 1×2 + 1×1, or 123. The algorithm now subtracts the bias of 127 from this value, leaving us with -4.

This value is used as the exponent in the formula 1.10011001100110011001101 × 2^(-4). It basically means shifting the binary point 4 places to the left (a positive exponent shifts it to the right, for that matter).

This results in 0.000110011001100110011001101. Let’s convert this binary value back to decimal (note that the positions after the binary point are successive halves, so the first spot after the point is 0.5, the next is 0.25 and so on).

This gives us 0×0.5 + 0×0.25 + 0×0.125 + 1×0.0625 + 1×0.03125 + …, which works out to roughly 0.100000001 – close to 0.1, but slightly off. It is this small error that causes our little program to fail, as we run past the check value and continue on into infinity.
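The decoding steps above can be checked with a short Python sketch (the arithmetic runs in doubles, so the decoded single-precision value can be compared against 0.1 directly):

```python
# Decode the fields by hand: exponent bits minus the bias of 127,
# and the 23 stored mantissa bits plus the implied leading 1.
exponent = int('01111011', 2) - 127           # 123 - 127 = -4
mantissa = int('10011001100110011001101', 2)  # the 23 stored bits
value = (1 + mantissa / 2 ** 23) * 2 ** exponent
print(value)        # roughly 0.100000001, not exactly 0.1
print(value > 0.1)  # True: the stored value is slightly too large
```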

So, it’s the combination of the limited available memory and the decimal-to-binary algorithm that’s at the heart of the strange calculation errors demonstrated above.

Knowing this, you can work around it by, for instance, applying rounding formulas to the calculated results (in BASIC, something like X = INT(X*10+.5)/10), or by turning to assembly language and using something called packed BCD (Binary Coded Decimal) routines.
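The rounding workaround looks like this in Python – a sketch of the idea, not C64 BASIC. Snapping the accumulator back to one decimal after each addition lets the equality test fire as expected:

```python
# Same loop as before, but rounding to one decimal after each
# addition snaps x back onto an exact tenth, so x == 1.0 is reached.
x = 0.0
steps = 0
while x != 1.0:
    x = round(x + 0.1, 1)
    steps += 1
print(x, steps)  # 1.0 10
```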

BCD has the advantage of converting decimal numbers to the machine format and back exactly. However, it takes more bytes to represent a given number, and BCD multiplication and division, as well as transcendental functions, tend to execute more slowly than their binary equivalents, which makes the whole calculation process on your C64 slower… but that’s something for another article.
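For a feel of what exact decimal arithmetic buys you, Python’s decimal module behaves much like those BCD routines: base-10 and exact, at the cost of bulkier, slower arithmetic (an analogy, not the C64 implementation):

```python
from decimal import Decimal

# Ten exact base-10 additions of 0.1 land exactly on 1,
# just as BCD arithmetic would.
x = Decimal('0')
for _ in range(10):
    x += Decimal('0.1')
print(x)       # 1.0
print(x == 1)  # True
```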