SQLite Forum

bUseLongDouble as a constant

(1) By Nuno Cruces (ncruces) on 2024-09-30 23:10:20 [source]

Could the 3 places where sqlite3Config.bUseLongDouble is used be prefixed like so:

if( sizeof(LONGDOUBLE_TYPE)>8 && sqlite3Config.bUseLongDouble ) { ... }

Currently, doing the following is enough to disable use of long double:

#define LONGDOUBLE_TYPE double

But because sqlite3Config.bUseLongDouble is not a compile time constant, a bunch of dead code is still kept around. It might not seem much, but platforms are starting to map long double to __float128, which isn't available in hardware, is often much slower, and pulls in library code.

(2) By Richard Hipp (drh) on 2024-09-30 23:55:48 in reply to 1 [link] [source]

I discovered that you cannot assume that the C-compiler supports high-precision floating point just because sizeof(long double)>8. Some compilers make sizeof(long double)==10 or ==16, but they still internally use ordinary 64-bit doubles for all the computation. Trying to use such a long double would result in inaccuracies in some computations.

If sizeof(long double)==8, then I know that high-precision floating point is not supported. But if sizeof(long double)>8, I do not know that high-precision floating point is supported.

So, I have to check for high-precision doubles at run-time. And that means both code paths need to be in the build.

What platforms are you seeing that are mapping long double into __float128? I'm guessing I don't have any of those platforms here.

(3) By Nuno Cruces (ncruces) on 2024-10-01 06:58:47 in reply to 2 [link] [source]

To be clear, I'm not proposing to assume you have a working long double just because sizeof(long double)>8, I'm proposing that if it isn't >8 you can't possibly have it.

Having a working long double would still be checked at runtime; not having it could be decided at compile time.

As for platforms: Wasm compiled by LLVM/Clang 18.1 has sizeof(long double)==16 and brings in a lot of library code to actually implement that.

I don't know what your Emscripten toolchain looks like, but it's possible your Wasm binaries are using LLVM long double emulation (which actually works, but is slow and bulks the binaries).

(4) By Stephan Beal (stephan) on 2024-10-01 09:55:16 in reply to 3 [link] [source]

I don't know what your Emscripten toolchain looks like,

We don't either - it's all black-box, hidden behind the Emscripten SDK.

but it's possible your Wasm binaries are using LLVM long double emulation (which actually works, but is slow and bulks the binaries).

But i will look into this to see if it's something we can disable, as it's useless on the JS side (which makes me suspect that it's disabled by default).

(5.1) By Stephan Beal (stephan) on 2024-10-01 10:50:04 edited from 5.0 in reply to 4 [link] [source]

... will look into this to see if it's something we can disable...

According to these Emscripten docs they don't have any support for long double, so this shouldn't affect our WASM builds.

These other Emscripten docs say:

While LLVM’s wasm32 has long double = float128, we don’t support printing that at full precision by default. Instead we print as 64-bit doubles, which saves libc code size.

Which suggests that our wasm long-double bits may be compiling to __float128, with no apparent option to do otherwise (Emscripten has no setting to disable that, though it might be doing so on its own by default).

Finally, however, according to comments in this github ticket, Emscripten specifically does not support long double because it's not portable to JS.

We "could" define LONGDOUBLE_TYPE to double in the WASM builds. Hypothetically, no existing clients can be storing floating-point values large enough to be broken by that, because there is no JS mapping for floating-point values that large. Our JS bits jump through some hoops to try to ensure that client-entered numbers fit in one of the available numeric types, and several functions throw if an attempt is made to convert way-too-big numbers.

i just ran build comparisons with the default LONGDOUBLE_TYPE vs double, in both -O0 and -Oz builds (which we use for distribution), and using double results in a mere 2kb reduction of the wasm file size. (Sidebar: -Oz builds perform within about 10% of the speed of -O2, while providing smaller wasm files, and -O2 invariably outperforms -O3 on the wasm builds, for reasons known only to those implementing the compiler optimizations.)

(6) By Nuno Cruces (ncruces) on 2024-10-01 14:06:55 in reply to 5.1 [link] [source]

You should see a bigger reduction in size if, besides defining LONGDOUBLE_TYPE, you make the change in the top post.

Instead of:

if( sqlite3Config.bUseLongDouble ) { ... }

Do:

if( sizeof(LONGDOUBLE_TYPE)>8 && sqlite3Config.bUseLongDouble ) { ... }

I'll be honest though, it's not the reduction in size that I was after. I'm trying to avoid needing the __float128 support library, because that's giving me linker errors when I compile with -mmulti-value.

(7) By Stephan Beal (stephan) on 2024-10-01 14:43:14 in reply to 6 [link] [source]

You should see a bigger reduction in size if, besides defining LONGDOUBLE_TYPE, you make the change in the top post.

It does indeed increase the reduction: to 6kb.

i have these changes made locally, and the test suite is running (and will be for a while still!), but whether they can/should be applied depends on Richard's assessment and whether these additions will affect the 100% branch test coverage.

Sidebar: whether we can safely change LONGDOUBLE_TYPE to double for the wasm builds is still up for discussion. It is my current opinion that that would be a near-zero-risk change.

(8) By Richard Hipp (drh) on 2024-10-01 16:58:41 in reply to 6 [link] [source]

New compile-time options:

  • -DSQLITE_USE_LONG_DOUBLE=0 → Never use "long double". Omit all "long double" code from the build. Use the Dekker algorithms in places where high-precision floating point is needed.

  • -DSQLITE_USE_LONG_DOUBLE=1 → Always use "long double" in cases where high-precision floating point computations are needed. Omit the Dekker algorithms.
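For example (illustrative build commands; adjust the compiler and the rest of the flags to your own toolchain):

```shell
# Omit all "long double" code; use the Dekker algorithms instead:
gcc -DSQLITE_USE_LONG_DOUBLE=0 -O2 -c sqlite3.c -o sqlite3.o

# Always use "long double"; omit the Dekker algorithms:
gcc -DSQLITE_USE_LONG_DOUBLE=1 -O2 -c sqlite3.c -o sqlite3.o
```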

(9) By Stephan Beal (stephan) on 2024-10-01 17:15:16 in reply to 8 [link] [source]

-DSQLITE_USE_LONG_DOUBLE=0

And the wasm build now uses that by default. Long doubles don't make sense in those builds.

(10) By jose isaias cabrera (jicman) on 2024-10-01 18:59:56 in reply to 8 [link] [source]

So, is -DSQLITE_USE_LONG_DOUBLE=1 more precise for business amount computations/calculations?

(11) By Richard Hipp (drh) on 2024-10-01 19:39:59 in reply to 10 [link] [source]

No, it is not more precise. It is just slightly faster for some high-precision floating point computations.

Working hardware-supported long double is only available on x86/x64, and then only if the OS sets up the floating-point processor to use it. (Linux usually does, Windows usually does not.) Long double is not available in hardware on any ARM processor that I am aware of. So these days, long double isn't even available for most use cases of SQLite.

I'm thinking about just dropping all support for long double in SQLite. It isn't an API - it is just an internal optimization - so I'm free to do that. Advantages of dropping long double:

  • Saves about 1,000 bytes in the compiled binary
  • Simplifies the code
  • Makes the code more portable
  • Reduces testing complexity

The downside is that omitting long double makes the code about 0.045% slower on machines where hardware long double support is available. That amount of performance hit is unmeasurable in practice. By comparison, the upcoming 3.47.0 release will be about 0.46% faster than 3.46.0. So the performance gain in upgrading from 3.46.0 to 3.47.0 more than makes up for the performance loss associated with abandoning long double.

(12.1) By Nuno Cruces (ncruces) on 2024-10-01 21:38:49 edited from 12.0 in reply to 11 [link] [source]

Right.

The problem with long double is that the likelihood of hardware support is slim and currently diminishing.

If the compiler mapped long double to a Dekker "double-double" implementation, that could be faster than your version, since it would be maintained by the compiler team and written in assembly.

But increasingly, the compiler is actually trying to map it to an IEEE 754 __float128 with no hardware support, and that's much slower, for very little benefit.

So even if you detect it, it's anyone's guess if it'll be worth it.

This is unlike using __int128 from the compiler, which is probably a good idea: there, compiler-provided intrinsics and assembly help performance greatly.

PS: just to leave a reference for my claims, this is the LLVM implementation of soft float addition that's used for __float128.