What happens if I define LONGDOUBLE_TYPE as double?
I'm building SQLite on a platform using a compiler whose support for the long double data type is broken. What are the ramifications of replacing the default definition of LONGDOUBLE_TYPE with plain double? It looks like that data type is used in only a few places to get better precision on the 15th digit when converting numeric text strings to or from the REAL data type.
That is what happens with Microsoft compilers (for example Microsoft Visual Studio on x64): the long part of the long double declaration is ignored and the type just becomes double. Microsoft compilers no longer generate code that is IEEE compliant.
You can define LONGDOUBLE_TYPE to be whatever you want the LONGDOUBLE_TYPE spelling to be -- it defaults to the spelling
long double only if no other definition is given.
The idea is that it gives the spelling of the "extended precision" declaration type for your particular compiler and platform.
It depends what you mean by "broken". Do you mean "broken" as in the Microsoft way, where the compiler still accepts long double as the spelling for the double precision extended type but does not actually implement the type (it ignores the long modifier)?
You can always define LONGDOUBLE_TYPE as double.
Myself, I define LONGDOUBLE_TYPE as __float128 so that high-precision intermediates use 128-bit IEEE-754 floats.
(3) By Scott Robison (casaderobison) on 2021-09-02 19:33:24 in reply to 2
Unless it has changed in recent years (as IEEE-754 evolves with time), 80 bit extended precision has never been an "official" IEEE standard. Yes, it is compatible with IEEE guidelines, but it isn't a blessed rigidly standardized format like 32 bit single precision floating point and 64 bit double precision floating point. And "long double" is only required to be as precise as "double" ...
It would certainly be nice if "long double" supported more precision than "double" but none of the standards require that to be true.
"Extended Precision" is a part of the specification. The "extended precision" requirement for a given type requires that the "extended precision" type be AT LEAST as precise as the base type, up to the precision of the next larger type.
Base-2 floating point typically has "standard" types for 32-bit, 64-bit, 128-bit, 256-bit, 512-bit, and 1024-bit representations.
The "extended" 64-bit format must have at least the same precision as the standard 64-bit float and not more than the precision of a 128-bit float, however, you are correct that the representation of a base-2 extended precision floating point number is implementation defined.
Typically, a "long double" is ten bytes (80 bits) simply because the Intel 8087 math co-processor used that format internally -- it is how the IEEE requirement to "compute exactly then round half-even" was implemented (which implementation was really "compute with guard bits then round").
Some compilers treat "long double" as a 96-bit "storage space" (12 bytes), but what is stored there can be anything ranging from an IEEE 64-bit double up to some extended precision encoding.
Some compilers treat "long double" as a 128-bit space (one paragraph, or 16 bytes) and may store in that space anything from a single IEEE 64-bit float (with padding), to some implementation-specific internal representation more precise than an IEEE 64-bit float but less precise than an IEEE 128-bit float, to perhaps even an actual IEEE 128-bit float.
(6) By Scott Robison (casaderobison) on 2021-09-02 23:12:48 in reply to 5
All agreed. My point is that if there is no IEEE requirement for an 80 bit / 10 byte type, one can't be faulted for failing to uphold a requirement that doesn't exist.
I mean, there are plenty of things to complain about when it comes to Microsoft, up to and including "I don't like the way they don't provide a separate long double type that has increased precision beyond double in their C family of compilers". My only point was that a more precise long double, and their ancient historical use of "long double" for 80 bit extended precision values, are not requirements, even if they would be advantageous.
It's broken in various, odd ways (perhaps it's trying to implement 128-bit IEEE in software). One manifestation with SQLite is that sometimes the number formats correctly and sometimes it shows NaN, even when repeating the same statement. If my workaround is serviceable for the time being, I don't want to let the perfect be the enemy of the good.
I heard from an authoritative source that the compiler is incomplete and the goal is to use a software implementation of 128-bit IEEE.
I did some quick tests on the SQLite that came with my Mac Mini and Raspberry Pi4 and found that the expression "CAST(9223372036854774800 AS real)" displays as 9.22337203685477e+18 on the former and 9.22337203685478e+18 on the latter. The Pi shows the same rounding error that I get defining LONGDOUBLE_TYPE as double.
That's one use for the test suite(s). Try it, run the entire test suite, and see what fails.