And the issue here is that there is NO easy way to decide exactly what value of epsilon you should use for a comparison, because how accurate your result is will be very much a product of what math you have done in the middle and the relative size of the numbers.

The core of the issue is that binary floating-point values need to be thought of as approximations of real numbers (especially fractional ones), even for numbers with low precision, while decimal floating-point can EXACTLY represent written numbers, as long as they are within the precision range. If something costs $1.19, that is an EXACT number, but in binary floating-point I can't express it, just a 'nearest value', and I need to keep rounding to keep the approximations from deteriorating. Decimal floating-point gets around this problem by using the same base we do, so it can exactly represent the same numbers we would write down; the numbers we think of as exact ARE exact (until we write too many digits).

Yes, with decimal floating-point you likely need to perform a rounding operation after a multiply or divide if you want to exactly mimic the paper-and-pencil result, because that mirrors the decision of what precision the operation is to be performed to. But you can add/subtract an unlimited number of these values, and as long as your sum doesn't exceed your precision limits, the result will be exact, because the input numbers are exact and the arithmetic is exact. This is different from binary floating-point, where we start with a representational error, and that error accumulates with each addition.
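A quick sketch of both points in Python, whose standard `decimal` module implements decimal floating-point (the specific prints are illustrative, not part of the original post):

```python
from decimal import Decimal

# The binary double nearest to 1.19 is not exactly 1.19; printing with
# extra digits exposes the stored approximation.
print(f"{1.19:.20f}")

# A decimal floating-point value built from the string "1.19" is exact.
print(Decimal("1.19"))

# Accumulated addition: ten binary 0.1s do not sum to 1.0, because each
# input already carries a representational error that the additions
# accumulate. Ten decimal 0.1s sum exactly, because each input is exact.
print(sum([0.1] * 10) == 1.0)                        # False
print(sum([Decimal("0.1")] * 10) == Decimal("1.0"))  # True
```

Note that the exactness depends on constructing the `Decimal` from a string; `Decimal(0.1)` would faithfully capture the binary approximation instead.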
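The explicit-rounding-after-multiply point can be sketched the same way. The 8.25% tax rate here is a hypothetical example value; `quantize` is the `decimal` module's operation for rounding to a chosen precision:

```python
from decimal import Decimal, ROUND_HALF_UP

price = Decimal("1.19")
rate = Decimal("0.0825")   # hypothetical 8.25% tax rate, for illustration

# The full product carries more digits than a currency amount should.
exact_product = price * rate          # 0.098175

# Explicitly round to the precision the pencil-and-paper rule calls for:
# here, cents, rounding half up as bookkeeping typically does.
tax = exact_product.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
print(tax)                            # 0.10

# From here on, sums of exact cent amounts stay exact.
print(price + tax)                    # 1.29
```

The rounding mode is a policy decision, which is exactly the point: the programmer states what precision the operation is performed to, just as on paper.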