Visual C++ rounds some decimal to double-precision floating-point conversions incorrectly, but it’s not alone; the gcc C compiler and the glibc strtod() function do the same. In this article, I’ll show examples of incorrect conversions in gcc and glibc, and I’ll present a C program that demonstrates the errors.

(To understand my methodology and terminology, please read my article “Incorrectly Rounded Conversions in Visual C++.”)

(Read my December 2013 update to this article: real.c Rounding Is Perfect (GCC Now Converts Correctly).)

(Read my October 2013 update to this article: GCC Conversions Are Incorrect, Architecture or Otherwise.)

(Read my September 2013 update to this article: Correctly Rounded Conversions in GCC and GLIBC.)

My Analysis

I ran my tests on an Intel Core Duo processor running Ubuntu Linux version 10.04, and I used the gcc C compiler version 4.4.3 and the embedded glibc library (eglibc) version 2.11.1. (As far as I can tell, eglibc uses the same conversion code as glibc, so I will refer to eglibc as glibc for the rest of this article.) (Update: I reran my tests on glibc, getting the same results.)

I did automated testing of glibc strtod() string conversions and manual testing of gcc floating-point literal conversions. This is what I found in my automated testing of strtod():

Conversions were done incorrectly only about four ten-thousandths of a percent of the time.

No conversion was incorrect by more than one unit in the last place (ULP).

Incorrect conversions included both “hard” and “easy” cases.

There were incorrect conversions covering all three cases: “halfway”, “greater than halfway”, and “less than halfway.”

My manual testing of gcc was not as comprehensive, of course, but I did manage to find some examples of incorrect conversions.
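To give a feel for the automated side, here is a minimal sketch of the kind of test harness I mean. It assumes David Gay's dtoa.c (from netlib) has been compiled into the program with its conversion routine renamed to gay_strtod(); the renaming, the digit count, and the exponent range are illustrative choices, not the exact harness I ran:

// gcc -o randtest randtest.c dtoa.o

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* David Gay's correctly rounding conversion, assumed renamed from
   strtod() when dtoa.c was compiled */
double gay_strtod (const char *s, char **end);

int main (void)
{
 char buf[64];
 unsigned long i, errors = 0, trials = 1000000;
 int d, n;
 double glibc_val, gay_val;

 srand (1); /* fixed seed, so runs are repeatable */

 for (i = 0; i < trials; i++)
  {
   /* Build a random decimal string: 19 random digits and a
      random exponent between -300 and 300 */
   n = sprintf (buf, "0.");
   for (d = 0; d < 19; d++)
    n += sprintf (buf + n, "%d", rand () % 10);
   sprintf (buf + n, "e%d", rand () % 601 - 300);

   glibc_val = strtod (buf, NULL);
   gay_val = gay_strtod (buf, NULL);

   /* Compare bit patterns, not values, to catch one-ULP errors */
   if (memcmp (&glibc_val, &gay_val, sizeof (double)) != 0)
    {
     errors++;
     printf ("%s: strtod = %a, correct = %a\n", buf, glibc_val, gay_val);
    }
  }
 printf ("%lu of %lu conversions were incorrect\n", errors, trials);
}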

For the remainder of this article, I will discuss six examples of incorrect conversions — examples uncovered by my automated and manual testing, as well as examples found in Bugzilla bug reports for gcc and glibc.

Incorrect Conversion by Both GCC and GLIBC

Here’s an example that’s converted incorrectly by both gcc and glibc — to the same incorrect value:

Example 1 (Halfway Case)

0.500000000000000166533453693773481063544750213623046875,

which equals 2^-1 + 2^-53 + 2^-54, converts to this binary number:

0.100000000000000000000000000000000000000000000000000011

It has 54 significant bits, with bits 53 and 54 equal to 1. Bit 54 is 1 with no bits beyond it, so this is a halfway case; bit 53 is 1, so round-half-to-even rounds up. The correctly rounded value — in binary scientific notation — is

1.000000000000000000000000000000000000000000000000001 x 2^-1

gcc and glibc strtod() both compute the converted value as

1.0000000000000000000000000000000000000000000000000001 x 2^-1

which is one ULP below the correctly rounded value.

(I constructed this example by hand.)
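If you want to verify that the decimal string above really is 2^-1 + 2^-53 + 2^-54, here is one quick way, assuming a C99 compiler on x86, where long double has a 64-bit significand (wide enough to hold all 54 bits exactly):

#include <stdio.h>

int main (void)
{
 /* Hexadecimal floating-point constants are exact, so no rounding
    occurs when building the sum */
 long double exact = 0x1p-1L + 0x1p-53L + 0x1p-54L;

 /* glibc's printf prints the exact decimal expansion; 54 digits
    suffice, since 2^-54 has a 54-digit decimal fraction */
 printf ("%.54Lf\n", exact);
}

On my system this prints the 54-digit decimal string shown above.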

Incorrect Conversions by GLIBC Only

Here are three examples that are converted incorrectly by glibc but converted correctly by gcc:

Example 2 (Halfway Case)

3.518437208883201171875e13 converts to this binary number:

1000000000000000000000000000000000000000000000.00000011

It has 54 significant bits, with bits 53 and 54 equal to 1; like example 1, this halfway case rounds up to even. The correctly rounded value is

1.000000000000000000000000000000000000000000000000001 x 2^45

glibc strtod() computes the converted value as

1.0000000000000000000000000000000000000000000000000001 x 2^45

which is one ULP below the correctly rounded value.

(This example was taken from “Bugzilla Bug 3479: Incorrect rounding in strtod().”)

Example 3 (Greater than Halfway Case)

62.5364939768271845828 converts to this binary number, which has a non-terminating binary fraction portion:

111110.1000100101010111101010110101010011111001010001110000000000001…

Bit 54 is 1 and another 1 bit appears further out, making this greater than halfway; rounding up gives the correctly rounded value

1.11110100010010101011110101011010101001111100101001 x 2^5

glibc strtod() computes the converted value as

1.1111010001001010101111010101101010100111110010100011 x 2^5

which is one ULP below the correctly rounded value.

(I discovered this example during random testing.)

Example 4 (Less than Halfway Case)

8.10109172351e-10 converts to this non-terminating binary fraction:

0.00000000000000000000000000000011011110101011100101110101110111100001111110100001100011111111111110…

Significant bit 54 is 0, making this less than halfway, so it should be rounded down (truncated) to get the correctly rounded value

1.10111101010111001011101011101111000011111101000011 x 2^-31

glibc strtod() computes the converted value as

1.1011110101011100101110101110111100001111110100001101 x 2^-31

which is one ULP above the correctly rounded value.

(I discovered this example during random testing.)

Incorrect Conversions by GCC Only

Here are two examples that are converted incorrectly by gcc but converted correctly by glibc:

Example 5 (Halfway Case)

The number

1.50000000000000011102230246251565404236316680908203125

which equals 1 + 2^-1 + 2^-53, converts to this binary number:

1.10000000000000000000000000000000000000000000000000001

This is a halfway case (bit 54 is 1 with no bits beyond it), and bit 53 is 0, so it rounds down correctly to 1.1.

gcc converts it to

1.1000000000000000000000000000000000000000000000000001

which is one ULP above the correctly rounded value.

(I constructed this example by hand.)
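Since gcc gets this one wrong while glibc gets it right, example 5 also demonstrates something worth seeing in isolation: a floating-point literal and a run-time strtod() call can produce different doubles for the same decimal digits. A minimal sketch:

#include <stdio.h>
#include <stdlib.h>

int main (void)
{
 /* gcc converts this literal at compile time... */
 double lit = 1.50000000000000011102230246251565404236316680908203125;

 /* ...and glibc converts the same digits at run time */
 double rt = strtod ("1.50000000000000011102230246251565404236316680908203125", NULL);

 printf ("literal = %a\n", lit); /* 0x1.8000000000001p+0 on my system (incorrect) */
 printf ("strtod  = %a\n", rt);  /* 0x1.8p+0 (correct) */
}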

Example 6 (Less than Halfway Case)

9007199254740991.4999999999999999999999999999999995 converts to this binary number, which has a non-terminating binary fraction portion:

11111111111111111111111111111111111111111111111111111.011111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111110…

Bit 54 is 0, making this less than halfway, so it converts correctly to

1.1111111111111111111111111111111111111111111111111111 x 2^52

gcc converts it to

1.0 x 2^53

which is one ULP above the correctly rounded value.

(This example was taken from “Bugzilla Bug 21718: real.c rounding not perfect.”)

A C Program that Reports the Behavior

Here’s a C program you can use to confirm incorrect rounding on your own Linux system:

// gcc -o examples examples.c

#include <stdio.h>
#include <stdlib.h>

#define NUM_CONVERT 6

int main (void)
{
 unsigned int i;

 char decimal[NUM_CONVERT][60] = {
  "0.500000000000000166533453693773481063544750213623046875",
  "3.518437208883201171875e13",
  "62.5364939768271845828",
  "8.10109172351e-10",
  "1.50000000000000011102230246251565404236316680908203125",
  "9007199254740991.4999999999999999999999999999999995"
 };

 /* The same six values as literals, so gcc converts them at
    compile time */
 double gcc_conversion[NUM_CONVERT] = {
  0.500000000000000166533453693773481063544750213623046875,
  3.518437208883201171875e13,
  62.5364939768271845828,
  8.10109172351e-10,
  1.50000000000000011102230246251565404236316680908203125,
  9007199254740991.4999999999999999999999999999999995
 };

 /* Correctly rounded conversions, as hexadecimal floating-point
    constants */
 char correct[NUM_CONVERT][30] = {
  "0x1.0000000000002p-1",
  "0x1.0000000000002p+45",
  "0x1.f44abd5aa7ca4p+5",
  "0x1.bd5cbaef0fd0cp-31",
  "0x1.8p+0",
  "0x1.fffffffffffffp+52"
 };

 for (i = 0; i < NUM_CONVERT; i++)
  {
   printf ("%s\n", decimal[i]);
   printf (" Correct = %s\n", correct[i]);
   printf (" gcc = %a\n", gcc_conversion[i]);
   printf (" strtod = %a\n\n", strtod (decimal[i], NULL));
  }
}

This is the output on my system (the incorrect conversions are the gcc and strtod values that differ from the Correct value):

0.500000000000000166533453693773481063544750213623046875
 Correct = 0x1.0000000000002p-1
 gcc = 0x1.0000000000001p-1
 strtod = 0x1.0000000000001p-1

3.518437208883201171875e13
 Correct = 0x1.0000000000002p+45
 gcc = 0x1.0000000000002p+45
 strtod = 0x1.0000000000001p+45

62.5364939768271845828
 Correct = 0x1.f44abd5aa7ca4p+5
 gcc = 0x1.f44abd5aa7ca4p+5
 strtod = 0x1.f44abd5aa7ca3p+5

8.10109172351e-10
 Correct = 0x1.bd5cbaef0fd0cp-31
 gcc = 0x1.bd5cbaef0fd0cp-31
 strtod = 0x1.bd5cbaef0fd0dp-31

1.50000000000000011102230246251565404236316680908203125
 Correct = 0x1.8p+0
 gcc = 0x1.8000000000001p+0
 strtod = 0x1.8p+0

9007199254740991.4999999999999999999999999999999995
 Correct = 0x1.fffffffffffffp+52
 gcc = 0x1p+53
 strtod = 0x1.fffffffffffffp+52

The values for the correct conversions were obtained using David Gay’s strtod() function. I ran a separate program to compute them and then hard coded them in this program. (I could also have constructed the correct hex formats by hand, using my analysis in the examples above.)
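As an illustration of the by-hand construction, here is example 1: write the correctly rounded significand with its full 52 fraction bits (including trailing zeros), split the fraction into thirteen groups of four bits, and read each group as a hexadecimal digit:

1.0000000000000000000000000000000000000000000000000010 x 2^-1

0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0010 = 0000000000002 (hex)

which gives the hexadecimal floating-point constant 0x1.0000000000002p-1.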

If you try this on your system, let me know what happens; I expect the same output. Please include the type of Linux, and version numbers if possible.

Bug Reports

There are existing bug reports for gcc and glibc:

For gcc: "Bugzilla Bug 21718: real.c rounding not perfect."

For glibc: "Bugzilla Bug 3479: Incorrect rounding in strtod()."

It's not clear whether these bugs will (or should?) be fixed; if they are, I'd imagine David Gay's strtod() function would be used to replace the current code.

I would guess this problem affects the other GCC compilers, although I haven't tested them.

GCC/GLIBC Vs. Visual C++

Here are the differences I found between gcc/glibc and Visual C++ in my two analyses:

glibc strtod() converted incorrectly about two orders of magnitude less frequently than Visual C++ strtod().

I found no integer values that rounded incorrectly in gcc or glibc, unlike in Visual C++.

gcc and glibc strtod() can convert the same decimal number differently; I found no such example in Visual C++ and its strtod() function.

Of the five examples Visual C++ converts incorrectly, only one — example 2 of that article — is also converted incorrectly by gcc or glibc (in this case, by both). Of the six examples in this article converted incorrectly by gcc or glibc, two are converted incorrectly by Visual C++: examples 1 and 2.

When gcc or glibc rounds halfway cases incorrectly, it does so either up or down; I found that Visual C++ consistently rounds them down incorrectly.

I found an incorrect conversion for a “less than halfway case” in gcc/glibc; I found no such example in Visual C++.

Summary

I’ve demonstrated that a decimal floating-point number is not guaranteed to convert to its closest binary floating-point number. Also, I’ve shown that a decimal floating-point number may be converted to different binary floating-point numbers, depending on which compiler or run-time library is used.