> 0.1 + 0.2 == 0.3
>
> That fails

Well, yes. The binary representations on the two sides are not exactly the same. When comparing floats, you should test whether they are within a small tolerance of each other — typically epsilon scaled by the magnitude of the operands — and treat them as equivalent if they are. Furthermore, you should always store your data at full precision and display it to the user at their expected precision.

So, all in all, a lot of errors come from the fact that languages make it too easy to do the wrong thing. One should be able to express "almost equal" and "round to 2 significant digits" in a non-annoying way. Storing at full precision is easy, of course.

There is a caveat with money, though: any intermediate result you show to users should also replace the exact value in the ongoing calculation. Otherwise you run the risk that a division with a non-terminating result rounds to a different last digit than when you repeat the calculation starting from the rounded values. This problem would also occur with decimal128.
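A minimal sketch of both points in Python, using the standard library's `math.isclose` for tolerance-based comparison and `decimal.Decimal` for money (the `to_cents` helper is a hypothetical name, not from the original):

```python
import math
from decimal import Decimal, ROUND_HALF_UP

# Binary floats: 0.1 + 0.2 is not bit-for-bit equal to 0.3.
print(0.1 + 0.2 == 0.3)              # False
print(repr(0.1 + 0.2))               # 0.30000000000000004

# "Almost equal": compare within a relative tolerance instead of ==.
print(math.isclose(0.1 + 0.2, 0.3))  # True (default rel_tol is 1e-09)

def to_cents(x: Decimal) -> Decimal:
    """Round to 2 decimal places, as displayed to the user."""
    return x.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

# Money caveat: once an intermediate result is shown, continue the
# calculation from the rounded value, so later figures agree with
# what the user already saw.
total = Decimal("100.00")
third = to_cents(total / 3)          # 33.33 — the value shown to the user
print(third)
print(to_cents(third * 3))           # 99.99 — consistent with the shown 33.33
```

Note that continuing from the rounded `33.33` gives `99.99`, not `100.00` — that discrepancy is visible and explainable to the user, whereas mixing rounded display values with full-precision arithmetic can silently produce figures that do not add up on screen. As the comment says, decimal128 has the same issue: it fixes binary representation error, not the display-versus-computation mismatch.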