Fri Jun 08, 2012 4:51 pm

As a very simplistic explanation:



Performing mathematical operations on floating point numbers (decimal numbers, not whole numbers) requires a little more overhead when working with binary values. Everyone knows that data is stored in computers as 1s and 0s, each position in the number representing a power of 2 greater than the previous. So doing math with a whole number is quite simple.



Software Based Math: Do the math via pen and paper

When it comes to floating point numbers, the steps to perform even a simple two-value addition become more complicated. This process was originally performed in software, requiring multiple instructions to get the needed result. When you compile with the softfloat option, this is what you are doing.



Let's view this as doing some math via pen and paper. It works, but it's kinda slow.



Hardware Based Math: Do the math via your friend's calculator

To speed up floating point math, some smart engineers came up with the Floating Point Unit (FPU), which is a piece of hardware that can take in floating point values and an operator and return a value. This hardware is optimized to do nothing but floating point math, so it performs much better than doing the operations in software.



To get the values and operator into the FPU, the compiler must add some code to copy this info from your program to the hardware. Typically this is done through a function call, which requires some overhead to start the call (copy values from your code into the FPU interface code) and to complete the call (clean up the memory used to do the copy). So in the softfp situation we use the FPU hardware but use the typical function-calling method to move the data around. (Note: I'll explain some benefits at the end.)



In this situation your buddy has a calculator. You write down the problem and give it to him to run on his calculator and he writes down the answer and gives it to you.



Hardware Optimized Math: Do the math on your calculator

Next we want to speed things up even more by removing all that overhead of copying data from our code to the FPU interface code. One way to do that is to have our own code do the job the FPU interface code was doing. So if we set the hardfp option, when we do an arithmetic operation we copy the values and the operation directly into the FPU hardware registers. Now we are super fast.



In this situation you have no paper, just a calculator. Think how fast that is: no writing anything down or handing it off, you just type the values in yourself.



So why so many options?

So why not always write your code with hardfp? Sometimes systems don't have FPUs... even some of today's new computers. Remember how with the softfp option we talked to the FPU but still had the code that copied data to the hardware? What if we could swap out the copy destination in cases where we don't have an FPU? Then we could say "use the FPU if it exists, otherwise copy the data into our software-based calculations (softfloat)." So when you see that softfp is compatible with softfloat, it means the system will decide whether it can (and in some cases should) use a hardware FPU. If we compile with the hardfp option we have no choice but to use the FPU, because the compiler emits code that does no software math at all, just reads and writes FPU registers.



In our calculator example, if you sit down with no paper and only a calculator, your only option is to use the calculator. If you have a piece of paper you can either give it to your buddy or do it by hand. One approach gives you options and can be slow but practical. The other can be fast, but only if the hardware exists.



So how's that for a long-winded but hopefully easy-to-understand explanation?

xmpp: jecxjo@dukgo.com

Blog: http://jecxjo.motd.org/code