SQLite Forum

SQLITE_ENABLE_STAT4 triggers failures in analyze3.test


(1) By 0xjnml on 2020-08-26 12:12:16 [link] [source]

Probably the tests just need to be adjusted for the updated EXPLAIN QUERY PLAN output. No idea about the other failures.

sqlite-src-3330000$ cat bug.sh
make distclean
make testfixture
cd test
../testfixture permutations.test prepare analyze3.test
sqlite-src-3330000$ ./bug.sh


! prepare.analyze3- expected: [/*SCAN TABLE t1*/]
! prepare.analyze3- got:      [4 0 0 {SEARCH TABLE t1 USING INDEX i1 (x>? AND x<?)}]
! prepare.analyze3- expected: [/*SCAN TABLE t1*/]
! prepare.analyze3- got:      [4 0 0 {SEARCH TABLE t1 USING INDEX i1 (x>? AND x<?)}]
! prepare.analyze3-1.1.8 expected: [999 999 499500]
! prepare.analyze3-1.1.8 got:      [2000 0 499500]
! prepare.analyze3-1.1.9 expected: [999 999 499500]
! prepare.analyze3-1.1.9 got:      [2000 0 499500]
! prepare.analyze3-1.2.8 expected: [999 999 text text 490555]
! prepare.analyze3-1.2.8 got:      [1981 0 text text 490555]
! prepare.analyze3-1.2.9 expected: [999 999 integer integer 490555]
! prepare.analyze3-1.2.9 got:      [1981 0 integer integer 490555]
! prepare.analyze3-1.3.8 expected: [999 999 499500]
! prepare.analyze3-1.3.8 got:      [2000 0 499500]
! prepare.analyze3-1.3.9 expected: [999 999 499500]
! prepare.analyze3-1.3.9 got:      [2000 0 499500]
! prepare.analyze3-2.6 expected: [102 0 100]
! prepare.analyze3-2.6 got:      [999 999 100]
! prepare.analyze3-2.8 expected: [102 0 0]
! prepare.analyze3-2.8 got:      [999 999 0]
! prepare.analyze3-2.9 expected: [12 0 0]
! prepare.analyze3-2.9 got:      [999 999 0]
! prepare.analyze3-2.10 expected: [3 0 1]
! prepare.analyze3-2.10 got:      [999 999 1]
! prepare.analyze3-2.11 expected: [102 0 10]
! prepare.analyze3-2.11 got:      [999 999 10]


SQLite 2020-08-14 13:23:32 fca8dc8b578f215a969cd899336378966156154710873e68b3d9ac5881b0ff3f
13 errors out of 107 tests on e5-1650 Linux 64-bit little-endian
!Failures on these tests: prepare.analyze3- prepare.analyze3- prepare.analyze3-1.1.8 prepare.analyze3-1.1.9 prepare.analyze3-1.2.8 prepare.analyze3-1.2.9 prepare.analyze3-1.3.8 prepare.analyze3-1.3.9 prepare.analyze3-2.6 prepare.analyze3-2.8 prepare.analyze3-2.9 prepare.analyze3-2.10 prepare.analyze3-2.11
All memory allocations freed - no leaks
Memory used:          now          0  max     282208  max-size      65544
Allocation count:     now          0  max        319
Page-cache used:      now          0  max          0  max-size       1288
Page-cache overflow:  now          0  max      82952
Maximum memory usage: 282208 bytes
Current memory usage: 0 bytes
Number of malloc()  : -1 calls

(2) By Dan Kennedy (dan) on 2020-08-27 14:55:44 in reply to 1 [link] [source]

Thanks for the report. I don't think we have a configuration that runs analyze3.test with that permutation. Now fixed here:


There's an explanation in comments within the commit if you're interested.

(3) By doug (doug9forester) on 2020-08-27 16:17:11 in reply to 2 [source]

Dan (just for my interest): When you are made aware of a problem with "don't have a config that tests with that permutation", do you always create a new test case to cover it?

(4) By Richard Hipp (drh) on 2020-08-27 19:27:55 in reply to 3 [link] [source]

There are 87 different compile-time options that we test for (and probably others that I overlooked in my brief analysis). It is not possible to write tests for every possible combination of those 87 compile-time options.

(5) By 0xjnml on 2020-08-27 19:34:43 in reply to 4 [link] [source]

I concur that 2^87 is a bit of an overkill ;-)

But maybe considering all of those 87 options, one by one, would be a viable option?

(6) By Tim Streater (Clothears) on 2020-08-27 20:02:13 in reply to 5 [link] [source]

Wouldn't it be more like at least 87! ??

Seems that 87! is around 2.1e132, according to the Giant Brain.

(7) By Andreas Kupries (andreas-kupries) on 2020-08-27 20:15:13 in reply to 6 [link] [source]

I do not believe so. n! is when order matters, i.e. you have n! permutations of n elements.

However for the sqlite config flags I am pretty sure that order does not matter, thus we have combinations, not permutations, and only 2^87 = 154,742,504,910,672,534,362,390,528 of them (assuming the flags are simple on/off, or other binary).
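The arithmetic in the two posts above is easy to check; a small Python sketch (the 87 is just the option count quoted earlier in the thread):

```python
import math

n_options = 87  # number of compile-time options mentioned upthread

# Order does not matter for a set of on/off flags, so the count of
# distinct configurations is 2^n (subsets), not n! (orderings).
subsets = 2 ** n_options
orderings = math.factorial(n_options)

print(subsets)               # 154742504910672534362390528, i.e. ~1.5e26
print(orderings)             # ~2.1e132, the figure from post 6
print(orderings > subsets)   # True: n! dwarfs 2^n for large n
```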

(8) By doug (doug9forester) on 2020-08-27 21:28:14 in reply to 4 [link] [source]

OK, it's a lot of tests. But the real number of combinations is a lot less than 2^87 because of mutual exclusion among options. Still a lot!

Just throwing this out: for several years I used my computer to run a SETI program analyzing data streams looking for alien signals. Literally millions of computers put all their idle time into the analyses.

What if we could use the same technology to examine SQLite code and compile options? What would such an endeavor be worth? Probably we might find a few compiler errors. Or we might discover some dead code when a certain combination of compile options is selected. Since the code quality is already very high (very few bugs), I would not expect to uncover many bugs.

But, if you tied the options to code segments and code segments to SQL statement pieces, then you could automate the test case generation. You could visualize the code in new and different ways.

We might start by partitioning the compiler options into groups of 5 or 10 that have something in common. Then dole each group out to 100 or 1,000 computers and let them crunch. Pull the results back and see what they say, and what there is to learn.

The partitioning of the code is easy for a given set of compiler options. We have tools all over the place that let us generate deltas. If we use the delta generators in a new way, we can generate a set of deltas for the code changes caused by a single compiler-option change. And by being smart about where in the code compiler options make a difference, we can reduce the combinations. We would find out rather quickly which compiler options interact with each other, and which ones interact in strange and wonderful ways.

We can use SQLite (of course) for the database underlying the analysis. You only need a pre-compiler to do this analysis (if you trust that all the various pre-compilers will generate the same code). How to tie the SQL to the code? We use Lemon for parsing, right? Take the output of Lemon and relate it to code segments. Since the code segments are influenced by the compiler options, one can link Lemon output and compiler options.

We need a body of SQL to drive this analysis. Collecting a unique set of SQL statements is another combinatorial-explosion problem, which can be partitioned just like the compiler options. I don't know lots of things about lots of things, but my application world revolves around databases, and I'd like to understand it a lot better.
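One established way to make the "groups with something in common" idea tractable is pairwise (2-way) coverage: rather than building all 2^87 subsets, ensure every *pair* of options is enabled together in at least one build. A rough sketch, with made-up flag names for illustration:

```python
from itertools import combinations

# Hypothetical placeholder names; the real list lives in the SQLite
# compile-time options documentation.
flags = [f"-DSQLITE_OPTION_{i}" for i in range(87)]

# One build per unordered pair of flags: C(87, 2) builds instead of 2^87.
pairwise_builds = list(combinations(flags, 2))
print(len(pairwise_builds))  # 3741
```

Real pairwise tools pack many pairs into each build and need far fewer than C(n, 2) runs, but even this naive version shows the scale dropping from ~1.5e26 to a few thousand.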

(9) By ddevienne on 2020-08-28 07:21:50 in reply to 8 [link] [source]

[...] Literally millions of computers put all their idle time into the analyses [...] we could use the same technology to examine SQLITE code and compile options

Interesting idea. I'm sure lots of us would donate compute-time.

But knowing DRH, he'd want to build it himself :) Taking his brain power away from SQLite (and Fossil, and Lemon, and whatever else he does privately for his clients).

Be careful what you wish for!

(10) By Ryan Smith (cuz) on 2020-08-29 17:02:11 in reply to 8 [link] [source]

...SETI program analyzing data streams looking for alien signals. Literally millions of computers...

I certainly commend the vision!

Any problem could be solved, possibly much better, by throwing near-infinite resources at it [any other Factorio players here? :)], but what engineers always have to be keenly aware of is the value proposition.

The reward of finding one more obscure test case that isn't covered by the existing set, to uncover a possible problem for a compile-mutation with a prevalence of one in millions or billions (unpronounceably large figures, as others demonstrated), is quite low. I'm not trying to downplay it; let's regard it as an important problem. But the point here is the incidence level, not the problem severity.

I'd say the best thing to do is what is being done - i.e:
"Programmers of Earth, please compile SQLite any ways you like, and if you find a combination that fails any test in any new release, please report it, we will add it to the test set."

Fast, efficient, zero added energy, and it solves all encountered problems just as quickly, with one very minor disadvantage: the problem has to actually be "encountered" first.

I say "minor disadvantage" because, while this might be an inadequate philosophy when building a control system for a nuclear power plant, in programming most software does go through quite a lot of testing and verification, and time for this is allocated in every project's budget. Saving those few who use a truly unique compile-set about a day's worth of time (that's the typical turn-around) would gain you no more than a mild "Thanks, hey." They typically have other, bigger problems.

Hardly worth extra effort, never mind SETI-level effort.

Still, perhaps a public script or small app could do nothing but run your compiles, run the tests, and then, in the event of failure, send your compile options, system info, and the failure details to a small web API or such, where the SQLite people can examine them.
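Such a reporter could be quite small. A hypothetical Python sketch of just the payload-building step (the field names and report shape are invented for illustration; a real script would wrap the configure/make/testfixture steps around it):

```python
import platform

def failure_report(options, failures):
    """Assemble a hypothetical report payload: the compile options used,
    basic system info, and the names of the failing tests, ready to be
    POSTed to some collection endpoint."""
    return {
        "options": sorted(options),
        "system": platform.platform(),
        "failures": list(failures),
    }

report = failure_report(["SQLITE_ENABLE_STAT4"], ["prepare.analyze3-1.1.8"])
print(report["options"])   # ['SQLITE_ENABLE_STAT4']
print(report["failures"])  # ['prepare.analyze3-1.1.8']
```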

Currently this is pretty much what the forum does, with the slight deficit that some people might not send in their failures (or not adequately detail them); we only see the ones who do. But then they may just as readily choose not to run the script/app.

I honestly doubt there is much improvement to be had over the current approach.