I’ve talked a lot about floating point math over the years in this blog, but a quick refresher is in order for this episode.

A double represents a number of the form +/- (1 + F / 2^52) x 2^(E - 1023), where F is a 52 bit unsigned integer and E is an 11 bit unsigned integer; that makes 63 bits, and the remaining bit is the sign: zero for positive, one for negative. You’ll note that there is no way to represent zero in this format, so by convention, if F and E are both zero, the value is zero. (And similarly there are other reserved bit patterns for infinities, NaNs and denormalized floats which we will not get into today.)
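If you want to see those fields for yourself, here’s a quick sketch (the names and the sample value are mine, purely for illustration) that uses BitConverter.DoubleToInt64Bits to pull a double apart and then reconstructs its value from the formula above:

using System;

double d = -6.5;
long bits = BitConverter.DoubleToInt64Bits(d);

int sign = (int)((bits >> 63) & 1);      // 1 sign bit
int e    = (int)((bits >> 52) & 0x7FF);  // 11-bit biased exponent E
long f   = bits & 0xFFFFFFFFFFFFFL;      // 52-bit fraction F

// +/- (1 + F / 2^52) x 2^(E - 1023); this ignores the special cases
// of zero, denormals, infinities and NaNs mentioned above.
double reconstructed = (sign == 0 ? 1.0 : -1.0)
    * (1 + f / Math.Pow(2, 52))
    * Math.Pow(2, e - 1023);

Console.WriteLine($"sign={sign}, E={e}, F={f}");
Console.WriteLine(reconstructed); // -6.5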

A decimal represents a number of the form +/- V / 10^X, where V is a 96 bit unsigned integer and X is an integer between 0 and 28.
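The library gives you a peek at this representation too. Here’s a sketch of mine that uses decimal.GetBits to extract V and X, borrowing BigInteger only to stitch the three 32-bit chunks of V back together:

using System;
using System.Numerics;

decimal m = -123.45m;
int[] parts = decimal.GetBits(m); // [lo, mid, hi, flags]

// V: the 96-bit unsigned integer, assembled from three 32-bit chunks.
BigInteger v = ((BigInteger)(uint)parts[2] << 64)
             | ((BigInteger)(uint)parts[1] << 32)
             | (uint)parts[0];

int x = (parts[3] >> 16) & 0xFF;                              // X: the scale, 0..28
bool negative = (parts[3] & unchecked((int)0x80000000)) != 0; // sign bit

Console.WriteLine($"V={v}, X={x}, negative={negative}");
// V=12345, X=2, negative=True: that is, -12345 / 10^2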

Both are of course “floating point” because the number of bits of precision in each case is fixed, but the position of the decimal point can effectively vary as the exponent changes.

A few things to notice here: first, double has an enormously larger range. The largest possible decimal is a paltry 7.9 x 10^28, whereas the largest possible double is about 1.8 x 10^308, a ridiculously large number. And because of that big exponent, the smallest positive (non-zero) number that can be represented by a double is far, far smaller than the smallest that can be represented by a decimal; in that sense, double has vastly larger range on the “small” end as well. Second, decimal has enormously more precision in terms of significant digits: double has 52 bits of precision and decimal has 96 bits. (In decimal digits, that’s the difference between 15 and 28 digits of precision.)
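A few lines make the comparison concrete (a sketch of mine; the comments show the output on current .NET, and the exact formatting of doubles can differ slightly on older runtimes):

using System;

Console.WriteLine(decimal.MaxValue); // 79228162514264337593543950335, ~7.9 x 10^28
Console.WriteLine(double.MaxValue);  // ~1.7976931348623157E+308

// The "small" end: decimal bottoms out at 10^-28; double goes much lower.
Console.WriteLine(1e-28m);          // 0.0000000000000000000000000001
Console.WriteLine(double.Epsilon);  // ~4.94E-324 (a denormal)

// Precision: about 15 significant decimal digits versus 28.
Console.WriteLine(1.0 / 3.0); // 0.3333333333333333
Console.WriteLine(1m / 3m);   // 0.3333333333333333333333333333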

I am occasionally asked why it is that there is no implicit conversion either from double to decimal or from decimal to double. The easy answer is: there cannot be an implicit conversion from double to decimal because of the range discrepancy. A huge number of doubles are larger than the largest possible decimal, and therefore an implicit conversion would either have to throw or silently discard a perhaps enormous amount of magnitude, both of which are unacceptable. There could in principle be an implicit conversion from decimal to double, because that would only lose precision, not magnitude; C# already allows an implicit conversion from long to double, which can lose up to twelve bits of precision. But the conversion from decimal to double would lose far more precision than that; going from 96 bits of precision down to 52 seems like too large a drop to make implicit.
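Here’s a sketch of mine showing what each explicit conversion does in practice; the values are contrived to trigger each behavior:

using System;

// double -> decimal is explicit, and it throws when the magnitude does not fit.
double big = 1e100;
try
{
    Console.WriteLine((decimal)big);
}
catch (OverflowException)
{
    Console.WriteLine("double -> decimal overflowed"); // magnitude too large for decimal
}

// decimal -> double is also explicit; it silently drops precision instead.
decimal exact = 0.1234567890123456789012345678m;
Console.WriteLine((double)exact); // ~0.12345678901234568: the tail digits are gone

// Compare: long -> double is implicit even though it too can lose low bits.
long n = (1L << 62) + 1;          // needs 63 significant bits
double d2 = n;                    // implicit conversion; rounds to exactly 2^62
Console.WriteLine((long)d2 == n); // False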

That’s the easy answer, and that alone would be sufficient. The somewhat less obvious reason to require conversions between decimal and double to be explicit is because that conversion is fundamentally a strange thing to do and you should think hard before you do it. decimal is typically used to represent exact numeric quantities, particularly for financial computations that need to be accurate to a fraction of a penny. double is typically used to represent physical quantities such as length, mass and speed, where the precision of the representation is higher than the precision of the measurement, and therefore tiny errors in representation matter less. These are two very different approaches to computation, and so it’s probably a bad idea to allow them to mix without making the code wave a big flag in the form of a cast, calling attention to what is going on.
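And here is the classic demonstration of why that distinction matters (again a sketch of mine): one tenth has no exact binary representation, but decimal holds it exactly.

using System;

Console.WriteLine(0.1 + 0.2 == 0.3);    // False: neither operand is exactly representable in binary
Console.WriteLine(0.1m + 0.2m == 0.3m); // True: decimal represents tenths exactly

// Ten payments of ten cents should total exactly one dollar.
double dTotal = 0.0;
decimal mTotal = 0.0m;
for (int i = 0; i < 10; i++) { dTotal += 0.1; mTotal += 0.1m; }
Console.WriteLine(dTotal == 1.0);  // False: accumulated binary rounding error
Console.WriteLine(mTotal == 1.0m); // True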