In .NET, how do I choose between a Decimal and a Double?

I usually think about natural vs artificial quantities.

Natural quantities are things like weight, height and time. These will never be measured absolutely accurately, and there’s rarely any notion of exact arithmetic on them: you shouldn’t generally be adding up heights and then checking that the result is exactly what you expected. Use double for this sort of quantity. Doubles have a huge range but limited precision; they’re also extremely fast.
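To see why exact comparisons on doubles are a bad idea, here’s a minimal C# sketch (my illustration, not from the original post; the names are arbitrary):

```csharp
using System;

class DoubleDemo
{
    static void Main()
    {
        // Neither 0.1 nor 0.2 has an exact binary representation,
        // so their sum is not exactly 0.3.
        double sum = 0.1 + 0.2;

        Console.WriteLine(sum == 0.3);        // False
        Console.WriteLine(sum.ToString("R")); // 0.30000000000000004
    }
}
```

For natural quantities this usually doesn’t matter; where it does, the usual approach is to compare within a tolerance rather than for exact equality.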

The dominant artificial quantity is money. There is such a thing as “exactly $10.52”, and if you add 48 cents to it you expect to have exactly $11. Use decimal for this sort of quantity. Justification: given that money is artificial to start with, the numbers involved are artificial too, designed to meet human needs – which means they’re naturally expressed in base 10. Make the storage representation match the human representation. decimal doesn’t have the range of double, but most artificial quantities don’t need that extra range either. It’s also slower than double, but I’d personally rather have a bank account which gave me the right answer slowly than a wrong answer quickly 🙂
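By contrast, decimal stores base-10 digits directly, so the money arithmetic above comes out exactly as a human would do it on paper. A small sketch under the same caveats (illustrative names, not the post author’s code):

```csharp
using System;

class DecimalDemo
{
    static void Main()
    {
        // The m suffix makes these decimal literals; each base-10
        // value is stored exactly, so the arithmetic is exact too.
        decimal balance = 10.52m;
        balance += 0.48m;

        Console.WriteLine(balance);        // 11.00 (the scale is preserved)
        Console.WriteLine(balance == 11m); // True

        // The same sum that failed for double succeeds here.
        Console.WriteLine(0.1m + 0.2m == 0.3m); // True
    }
}
```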

For a bit more information, I have articles on .NET binary floating point types and the .NET decimal type. (Note that decimal is a floating point type too – but the “point” in question is a decimal point, not a binary point.)
