Sadly, I have rarely met another programmer who knew about the difference between binary and decimal floating point; most just go along using binary floating point (floats and doubles) for all their calculations. It is also very sad that most of the “popular” programming languages in use right now only support the data types native to a modern CPU, i.e. binary floating point only. Don’t they teach this in college any more? Thankfully, languages like COBOL provide types for properly working with decimal numbers, or we would all be in trouble right now (well, more trouble, anyway).

Ironically, your chosen scale multiplier of 0.1 happens to be one of the numbers that binary floating point cannot represent exactly. It is stored as a close approximation (roughly 0.1000000000000000055511…). Typically this won’t be a problem until you start running multiplications or divisions on the number, which compound the error. The value 0.1 also happens to be how most people would write “ten cents” in money calculations; imagine a large bank whose programmers used binary floating point for your savings account, or for their million- and billion-dollar transactions…
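You can see the approximation directly from any language that exposes IEEE 754 doubles; here is a small sketch in Python (the `Decimal` constructor is only used here to print the exact binary value that was stored):

```python
from decimal import Decimal

# The literal 0.1 is rounded to the nearest representable binary double.
# Converting that double to Decimal shows the exact value actually stored:
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

# Repeated arithmetic accumulates the representation error:
total = sum(0.1 for _ in range(10))
print(total == 1.0)   # False
print(total)          # 0.9999999999999999
```

Ten dimes failing to add up to a dollar is exactly the kind of surprise that bites money code.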

If you want to work with decimal numbers and have your math come out right, then use integers scaled up to match your rounding policy (e.g. 10000 would represent 1 dollar with the two “cents” digits, plus two more digits of precision for calculations and rounding), or use a library designed for decimal arithmetic, like this one, which has been around for over 20 years:

http://speleotrove.com/decimal/
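The scaled-integer approach described above can be sketched in a few lines. This is a minimal illustration, not a production money library; the helper names (`to_scaled`, `to_dollars`) and the half-up rounding rule are my own choices for the example:

```python
# Scaled-integer money: store amounts as plain integers in units of
# 1/10000 of a dollar (two "cents" digits plus two guard digits),
# so arithmetic stays exact instead of going through binary floats.
SCALE = 10_000

def to_scaled(dollars_str):
    # Parse a decimal string exactly, never touching a float.
    whole, _, frac = dollars_str.partition(".")
    frac = (frac + "0000")[:4]          # pad/truncate to 4 digits
    sign = -1 if whole.startswith("-") else 1
    return sign * (abs(int(whole)) * SCALE + int(frac))

def to_dollars(scaled):
    # Collapse the guard digits: round half-up to whole cents.
    cents = (abs(scaled) + 50) // 100   # 100 scaled units per cent
    sign = "-" if scaled < 0 else ""
    return f"{sign}{cents // 100}.{cents % 100:02d}"

price = to_scaled("0.10")   # ten cents, stored exactly as 1000
total = price * 3           # integer multiply: no rounding error
print(to_dollars(total))    # 0.30
```

In Python specifically, the standard-library `decimal` module (an implementation of the decimal arithmetic described at the link above) gives you the same exactness with far more features, e.g. `Decimal("0.10") * 3 == Decimal("0.30")`.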

@niggler: If you *really* want an example of a floating point bug, try this:

(Thanks for the link, though.)

http://www.cse.msu.edu/~cse320/Documents/FloatingPoint.pdf

If you want an example of a real FP bug in the wild, there was a genuine hardware bug in the floating-point division operation on early Pentium processors: http://www.intel.com/support/processors/pentium/sb/CS-013007.htm
