SQLite Forum

Decimal128
> Decimal representation only comes into play if you insist on cutting up your countables into powers of ten. But there is no particularly good reason for that.

Actually, the 'good reason' is wanting the computer's calculation to match what you would get doing it with paper and pencil (where you also specify exactly how to round at each step). Decimal representation was designed, and developed, precisely to mimic how we do the math 'by hand'.
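As a quick illustration, here is a small Python sketch using the standard decimal module (the prices and tax rate are made up for the example): it rounds the intermediate result to cents with half-up rounding, exactly the way a hand calculation would, so the computed total matches the paper result digit for digit.

    from decimal import Decimal, ROUND_HALF_UP

    price = Decimal("19.99")
    tax_rate = Decimal("0.0625")

    # Round the intermediate tax amount to cents, half up,
    # just as the "by hand" rule would specify.
    tax = (price * tax_rate).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
    total = price + tax

    print(tax)    # 1.25
    print(total)  # 21.24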

Sometimes this requirement is spelled out in law or in accepted-practice documents.

Think of how many times a beginner asks why adding 0.1 ten times does NOT give exactly 1.000000 and has to be taught about this 'issue' with binary floating point.

Aside from backward compatibility, the additional cost of doing decimal floating point is small enough that, if we were starting over, it could be argued that decimal floating point would make more sense than binary. The biggest impediment is the inertia of the current system. (I can't see very many applications where the decimal format doesn't meet the needs of something currently using the binary format, apart from the cost of switching.)
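The beginner's surprise is easy to reproduce; a short Python sketch, with the standard decimal module standing in for decimal floating point generally:

    from decimal import Decimal

    # Binary floating point: adding 0.1 ten times misses 1.0,
    # because 0.1 has no exact binary representation.
    total = sum([0.1] * 10)
    print(total)          # 0.9999999999999999
    print(total == 1.0)   # False

    # Decimal floating point behaves the way the beginner expects.
    dtotal = sum([Decimal("0.1")] * 10)
    print(dtotal)         # 1.0
    print(dtotal == 1)    # True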