SQLite Forum

Timeline

50 events by user drh occurring around 2021-10-26 10:21:25.

2021-11-20
02:58 Reply: Proposed slightly-incompatible change to date-time functions artifact: 54f40d17c9 user: drh

The point I am making is that because it does this, you can't tell if the original was inserted as an integer or as a real if the affinity of the column allows this transform.

If the affinity is INTEGER, then you can insert 22 or 22.0 and you will always get back 22. If the affinity is REAL, you can insert 22 or 22.0 and you will always get back 22.0. But if the affinity is NUMERIC or BLOB then you always get back what you inserted. If you insert 22 you get back 22, and if you insert 22.0 you get back 22.0.
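
For example (throwaway tables; typeof() shows the storage class that comes back):

CREATE TABLE ti(x INTEGER);
CREATE TABLE tr(x REAL);
INSERT INTO ti VALUES(22),(22.0);
INSERT INTO tr VALUES(22),(22.0);
SELECT typeof(x), x FROM ti;   -- integer|22 for both rows
SELECT typeof(x), x FROM tr;   -- real|22.0 for both rows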

02:48 Reply: IS NULL optimization on NOT NULL constraints breaks on corrupted databases artifact: dd4d62e69a user: drh

The issue here was that GDAL segfaulted when opening that database because of the null dereference of a corrupted record.

Did SQLite segfault? If so that would be a bug that we need to look into.

Or, did GDAL do an sqlite3_column_text() and get back a NULL pointer, then dereference the NULL pointer, thus causing a segfault? That would be a bug in GDAL. The sqlite3_column_text() might return NULL, even for a NOT NULL column, for example following an OOM error. That's part of the specification of how sqlite3_column_text() works. If you were not checking for a NULL return from sqlite3_column_text() and it always worked before, that is because you were lucky. It would be the same as if you failed to check for a NULL return from malloc().
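
To be concrete, here is a minimal sketch of the defensive pattern (the statement and column index are placeholders for whatever your code reads):

#include <stdio.h>
#include "sqlite3.h"

/* Print column 0 of the current row, guarding against a NULL return,
** which can happen after an OOM error even on a NOT NULL column. */
static void print_col0(sqlite3_stmt *pStmt){
  const unsigned char *zText = sqlite3_column_text(pStmt, 0);
  if( zText==0 ){
    fprintf(stderr, "no text available (OOM, NULL value, or corrupt record)\n");
  }else{
    printf("%s\n", zText);
  }
}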

2021-11-19
21:42 Reply: IS NULL optimization on NOT NULL constraints breaks on corrupted databases artifact: 5dbb5ad543 user: drh

This causes a regression in the GDAL open source project

Please help me to understand why the GDAL project is concerned that you are getting an incorrect answer from a query against an admittedly corrupt database file? Why is this not just a case of GIGO?

15:49 Reply: Proposed slightly-incompatible change to date-time functions artifact: 6df0d89eaa user: drh

SQLite could take times that just happen to be exactly at noon, and thus equal to an integer, and store it in the database as an integer as a form of compression

SQLite does indeed do this. But it also converts the integer back into a floating point number as it reads it back off the disk, so by the time it gets to the date/time SQL functions, it is a REAL again.

Maybe adding an 'Auto' modifier

That's a good idea. I'll consider that.

Recognize that all of this discussion is for 3.38.0 or later. This is not a change that is being considered for the upcoming 3.37.0 release.

13:16 Reply: Is it possible to restrict VACUUM INTO? artifact: a2e181c2a9 user: drh

You can add an sqlite3_set_authorizer() restriction on the ATTACH command that causes it to fail.

VACUUM INTO is really something of a "meta" command. It works by using SQLite APIs recursively. It first runs "ATTACH" to create the new output database. Then it does a bunch of "CREATE" and "INSERT" commands on that attached database to perform the transfer, followed by a "DETACH" at the end. So if you block the initial ATTACH, you effectively prevent VACUUM INTO from running.

The sqlite3_set_authorizer() has the full name of the file being attached, so you can selectively block ATTACH (and hence VACUUM INTO). In this way, you could make it so that VACUUM INTO only works for output files in some limited subset of the filesystem namespace.
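
As a sketch of what that might look like (the directory prefix and function names below are merely illustrative, not anything from the SQLite documentation):

#include <string.h>
#include "sqlite3.h"

/* Authorizer callback: allow ATTACH (and therefore VACUUM INTO) only
** when the target file lives under /var/backups/. */
static int attachAuth(void *pNotUsed, int op, const char *zArg1,
                      const char *zArg2, const char *zDb,
                      const char *zTrigger){
  if( op==SQLITE_ATTACH ){
    /* For SQLITE_ATTACH, zArg1 is the filename being attached */
    if( zArg1==0 || strncmp(zArg1, "/var/backups/", 13)!=0 ){
      return SQLITE_DENY;
    }
  }
  return SQLITE_OK;
}

/* After opening the database connection:
**   sqlite3_set_authorizer(db, attachAuth, 0);
*/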

13:07 Post: Proposed slightly-incompatible change to date-time functions artifact: 92d9dbc028 user: drh

As you know, SQLite currently supports three different representations for date/time values:

  1. TEXT → YYYY-MM-DD HH:MM:SS.SSS

  2. REAL → The Julian day number, the number of days since noon (in Greenwich) on -4714-11-24, according to the proleptic Gregorian calendar.

  3. INT → Unix-time or the number of seconds since 1970-01-01 00:00:00.

In all cases, SQLite supports date ranges from 0000-01-01 00:00:00 through 9999-12-31 23:59:59. For the various date/time functions (date(), datetime(), time(), julianday(), strftime()) the date/time argument can be either TEXT or REAL and everything just works. The date/time argument can also be INT, but such INT values will still be interpreted as a Julian day number unless followed by the 'unixepoch' modifier.

Proposed change

If the date/time argument is an INT value that is not within the range of 1721060..5373483, then it is automatically interpreted as a Unix timestamp, without needing the 'unixepoch' modifier. The 'unixepoch' modifier can still be used, but it would be redundant in this case.
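
To illustrate with concrete values (the third line shows the proposed behavior, not what released versions do today):

SELECT datetime(2451545.0);                 -- '2000-01-01 12:00:00' (Julian day number)
SELECT datetime(1700000000, 'unixepoch');   -- '2023-11-14 22:13:20' (seconds since 1970)
SELECT datetime(1700000000);                -- NULL today; under the proposal, same as the previous line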

Benefits

With this enhancement, queries could invoke SQL functions like datetime() on a column of type DATETIME without having to worry about the underlying storage format. The conversion would be completely automatic. Indeed, columns that are intended to store date/time values could hold a mixture of TEXT, REAL, and INT values, and queries using the date/time functions would always be able to interpret the values correctly.

Possible Breakage

SQLite only defines date/time values for 0000-01-01 through 9999-12-31. So for that date range, I don't see any way to mess this up. There is an ambiguity for numeric inputs - do we interpret them as Julian day numbers or as Unix timestamps - but this ambiguity only occurs for a limited range of values between 1970-01-20 22:04:19 and 1970-03-04 04:38:03. If you have unix timestamp values in that range, and you pass them into a datetime function as REAL numbers instead of as integers, they will get reinterpreted as Julian day numbers unless you use the 'unixepoch' modifier.

Perhaps a bigger problem would be Julian day numbers that are passed into date/time functions as integers. That is only possible for dates that correspond to noon UTC. Such integer Julian day numbers would be interpreted as unix timestamps and transformed into date/times in the vicinity of 1970-01-29. We could mitigate this by adding a new 'julianday' modifier that forces the date/time argument to be interpreted as a Julian day number even if it is an integer.

What do you think?

Is the benefit of this change worth the small chance of breaking legacy applications?

2021-11-17
16:01 Reply: SQLite + TCL on-line conference starts in less than 2 days artifact: 8c22e1f3c1 user: drh

Do STRICT tables enforce lengths of TEXT columns?

They do not. A question back at you → Why do you think that limiting the length of TEXT columns would be useful? What kinds of problems would that solve?

[T]he ANY data type ... appears to undo all that STRICT promises to deliver.

I do not understand that perspective. Can you explain more about how ANY is detrimental?

2021-11-15
16:39 Reply: version 3.37.0 has 2 OOM bugs in sqlite3StrAccumEnlarge and sqlite3BtreeIndexMoveto, could cause sqlite3 DOS. artifact: d9ea17e953 user: drh

Here is a link to the OP's tweet about this topic:

https://twitter.com/den80369019/status/1460277629668323334

14:30 Post: SQLite + TCL on-line conference starts in less than 2 days artifact: 54324e566b user: drh

The SQLite + TCL conference begins on Wednesday morning (2021-11-17). The conference is on-line (Zoom) and free. Sign-up at https://conf.tcl-lang.org/

I will be leading with a review of the enhancements to SQLite that have occurred over the previous 12 months, followed by a discussion of where we are planning to take SQLite in the future. I'll strive to be brief so as to reserve plenty of time for Q&A. Looking forward to your feedback.

2021-11-12
17:33 Reply: version 3.37.0 has 2 OOM bugs in sqlite3StrAccumEnlarge and sqlite3BtreeIndexMoveto, could cause sqlite3 DOS. artifact: 4867042314 user: drh

The error appears to be inside of ASAN, probably at https://github.com/gcc-mirror/gcc/blob/master/libsanitizer/sanitizer_common/sanitizer_common.cpp#L53, not something in SQLite. It appears that you are running out of memory on your system, and ASAN is choking because of that.

What does the command "prlimit" show on your system? In particular, what is your "address space limit"? Mine shows "unlimited", and when I run your scripts I get:

Error: near line 5: stepping, string or blob too big (18)

I modified your first script by adding ".stats on" and ".echo on", giving the following:

.echo on
CREATE TABLE v0 ( v1 INTEGER PRIMARY KEY DESC ON CONFLICT REPLACE CHECK( max ( 'Al' , v1 ) + julianday ( ( v1 IN ( v1 , v1 , v1 ) ) ) <= 'x' ) ) ; INSERT INTO v0 ( v1 ) VALUES ( 0 ) ,( 42 ) ,( 10 ) ;
INSERT INTO v0 SELECT randomblob ( 1 ) FROM v0 , v0 AS t JOIN v0 ; SELECT DISTINCT v1 AS REVENUE FROM v0 AS r WHERE v1 = v1 AND v1 = 10 AND v1 = 0 ORDER BY v1 ;
.stats on
UPDATE v0 SET v1 = ( SELECT group_concat ( substr ( v0 . v1 , 'GERMANY' ) , v1 ) FROM v0 AS p ) ;

I compile thusly:

CC=clang CFLAGS='-fsanitize=address -fno-omit-frame-pointer -Os -g' ./configure --enable-all --enable-debug
make sqlite3

Then running your script, I get the error above, and the ".stats" output shows that the process used a bit over 9GB of RAM at its peak, before giving up and reporting the error. My desktop has 32GB of RAM. How much RAM is on your test system?

It seems to me that SQLite is doing exactly the right thing here, and that the problem is in your test procedure or in your copy of libasan.so. Why do you think this is a problem with SQLite? What do you think it should be doing differently?

2021-11-09
13:03 Reply: Odd behaviour of UNION artifact: 8450d6b3c2 user: drh

The ORDER BY, LIMIT, and OFFSET apply to the UNION, not to the second SELECT. You want this, I think:

SELECT max(c2) FROM A
UNION
SELECT * FROM (SELECT DISTINCT c2 FROM A ORDER BY c2 DESC LIMIT 1 OFFSET 1);

10:35 Reply: Wasted file space estimate artifact: 7cb6a4e088 user: drh

You really shouldn't open() and close() SQLite database files using your own code. Use SQLite APIs only. The reason for this is that if another thread in your application has the database open using SQLite, the close() call you make will clear the posix advisory locks. The SQLite connection has no way to prevent or detect this. Clearing locks out from under an SQLite database connection could lead to database corruption.

A better solution would be to open the database using sqlite3_open() or similar, and then run "PRAGMA page_size; PRAGMA freelist_count;" to get the values you are looking for.
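
For instance, assuming the pragma table-valued functions are enabled in your build (they are in default builds) and using their usual column names, a rough estimate of the reclaimable space is:

SELECT freelist_count * page_size AS unused_bytes
  FROM pragma_freelist_count, pragma_page_size;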

2021-11-08
23:27 Reply: A segmentation fault in SQLite latest release build artifact: ac825c1563 user: drh

Plans did change.

The end solution (seen at check-in 74aec5dd1df95b56) was to refactor the logic that figures out which query in a cascade of nested and correlated queries a particular aggregate function belongs to. The new logic does not depend on subquery cursor numbers being greater than outer query cursor numbers. That constraint is no longer imposed on the parse tree, and so no new assert() logic to check it was required.

Should be working now.

14:52 Reply: A segmentation fault in SQLite latest release build artifact: 82e3696053 user: drh

Thanks for the bug report.

The crash is due to a NULL pointer dereference in the byte-code engine caused by incorrect byte-code. The incorrect byte-code results from a fault in the code generator.

Each table or subquery in a complex SELECT statement is assigned a cursor number. The name resolution logic for aggregate functions depends on the fact that cursor numbers for subqueries are always greater than cursor numbers in outer queries. But that assumption was violated by a new UNION ALL optimization that was added on 2020-12-19. The query in question invokes that optimization, causing cursor numbers to be misordered, resulting in incorrect byte-code, and ultimately the NULL pointer dereference.

A simplified query is this:

SELECT * FROM (
  SELECT 1 FROM rt0 AS q3
  UNION ALL
  SELECT 2 FROM rt0 AS q4
) LEFT JOIN (
  SELECT * 
    FROM (
           SELECT (SELECT sum(q2.c1) + rank() OVER () FROM rt0 AS q1) AS ca1
             FROM rt0 AS q2
         ) AS q5
   WHERE q5.ca1=0
);

The UNION ALL optimization transforms this query into the following:

SELECT 1, q7.* 
  FROM rt0 AS q3
  LEFT JOIN (
    SELECT * 
      FROM (
             SELECT (SELECT sum(q2.c1) + rank() OVER () FROM rt0 AS q1a) AS ca1
               FROM rt0 AS q2a
           ) AS q5a
     WHERE q5a.ca1=0
  ) AS q7
UNION ALL
SELECT 2, q8.*
  FROM rt0 AS q4
  LEFT JOIN (
    SELECT * 
      FROM (
             SELECT (SELECT sum(q2.c1) + rank() OVER () FROM rt0 AS q1b) AS ca1
               FROM rt0 AS q2b
           ) AS q5b
     WHERE q5b.ca1=0
  ) q8;

If you enter the second optimized SQL directly, it works (because all of the cursor numbers are well ordered). But when the query optimizer makes that translation, it ends up with the cursor number for q2b being less than the cursor number for q5b.

Multiple things need to be fixed here:

  1. The UNION ALL optimization needs to be fixed so that it yields a parse tree where the cursor numbers of all subqueries are greater than the cursor number of any outer query.

  2. New assert() statements need to be added to check every parse tree transformation and assure that the cursor numbers are always well-ordered.

I'm working on these changes now.

Everything in this post is the result of a preliminary analysis and is subject to change or correction in follow-up posts.

2021-11-06
11:13 Reply: SQLITE_OPEN_SHARED_SCHEMA == SQLITE_OPEN_NOFOLLOW artifact: 3b6f80143f user: drh

No. Last night I mistakenly merged the reuse-schema branch into trunk when I had intended to merge trunk into the reuse-schema branch. That mistake has now been corrected.

We had not noticed the collision between the NOFOLLOW and SHARED_SCHEMA open flags. That will need to be addressed, but we'll do that on the branch.

2021-11-05
17:19 Reply: App file format, large image texture blobs? artifact: 48608cd49c user: drh

The document that best answers your question is probably:

https://www.sqlite.org/fasterthanfs.html

That study was done to figure out if it is more performant to put 10KB thumbnails in an SQLite database or as separate files on disk. (The answer: an SQLite database.)

Your situation is different in that you are using 2MB blobs instead of 10KB blobs. I'm not sure where the cross-over point is. You should run experiments to see. The source code for the performance measurement software is linked in the article above.

Is raw performance your only consideration? Do power-safe transactions, a simple single-file storage format, and a powerful query language figure into your decision at all? If raw performance is the only factor, then I'm guessing direct-to-disk will be the best choice for 2MB blobs (but that is only a guess - you should verify). But if other factors come into play, then you will have an engineering tradeoff to consider. Only you can make that decision.

12:35 Reply: Error when updating sqlite3.c using trunk artifact: d6a99c93e4 user: drh
12:03 Reply: An assertion failure in function isSimpleCount() artifact: 284602513a user: drh

Thanks for the report. Now fixed on trunk.

Wang Ke probably already knows this, but for the benefit of other readers, the NEVER() macro in SQLite basically means "we don't have a test case for this condition but we should probably check it anyhow." On debug builds, the NEVER() raises an assertion fault if it is ever true, but in release builds it simply passes the condition through unchanged. In that way, NEVER() macros alert us to new untested branches. Release builds are unaffected.

Hence, this is not a problem in release builds.
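
Roughly speaking, the macro behaves like the following sketch (paraphrased, not the verbatim definition; see sqliteInt.h for the real one):

#include <assert.h>

/* Debug builds: assert if the "impossible" condition ever turns out true.
** Release builds: just evaluate the condition and carry on. */
#if !defined(NDEBUG)
# define NEVER(X)  ((X)?(assert(0),1):0)
#else
# define NEVER(X)  (X)
#endif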

The simplest fix here would be to simply delete the NEVER(). But I took the opportunity to also clean up the isSimpleCount() function a little. Test cases were added to TH3 and dbsqlfuzz.

2021-11-04
15:47 Reply: Question about memory management artifact: 7b851367fd user: drh

As I import csv files into databases - one for each subject I am importing - my computer memory fills right up.

Can you be more specific about what you are doing to accomplish this import? Are you using the CLI? If so, what commands are you using? Or did you write your own program/script to do the import? Can we see it?

2021-11-03
23:56 Reply: Question regarding dangling commas in CREATE VIRTUAL TABLE queries artifact: fa65232c87 user: drh

It is not a dangling comma. The examples you show all merely have the last parameter equal to an empty string. For FTS4, FTS5, and RTREE, this results in a column named by an empty string ("").

If you are writing your own virtual tables and want to disallow this, just check the parameters and raise an error if any of them are empty. Perhaps we should have done this when we created FTS4, FTS5, and RTREE. But the time for that decision has passed. Those virtual tables have accepted and allowed column names that are the empty string for many years, and so we cannot change it now without breaking compatibility.

SQLite also allows you to create tables and columns named by an empty string. For example:

CREATE TABLE ""(a INT, "" INT, c INT);
INSERT INTO "" VALUES(1,2,3);
SELECT "" FROM "" WHERE a=1;

You are allowed to do this, but you probably ought not.

2021-11-02
15:43 Reply: gcc 11 compiler warning in sqlite3Fts5IndexQuery artifact: 1b863fcba6 user: drh

Didn't use a VM. I ended up wiping the disk and doing a fresh install of Ubuntu 20.04.

2021-10-29
14:20 Reply: License File where to find? artifact: 62845ebcc1 user: drh

Can you do us the favor of posting this question on the SEE Forum?

10:03 Reply: gcc 11 compiler warning in sqlite3Fts5IndexQuery artifact: 746635d5ba user: drh

Ugh. When I ran "apt install gcc-11" on my Ubuntu 20.04 machine in order to try to reproduce this problem, it broke the clang installation so that now I can no longer run "clang -fsanitize=fuzzer".

Yet another example of compiler warnings causing harm rather than preventing it...

2021-10-28
16:10 Reply: gcc 11 compiler warning in sqlite3Fts5IndexQuery artifact: 61173f136d user: drh

Unable to repro. The warnings do not occur on Ubuntu 20.04 with gcc-11 (Ubuntu 11-20210417-1ubuntu1) 11.0.1 20210417 using compiler options -O0 -Wall.

2021-10-26
10:21 Reply: No ".open" option in version 3.7.17 2013-05-20 artifact: 6733f1d20a user: drh

noting that 3.7 is now more than 8 years old and will be missing all sorts of more current features.

Yeah. A simpler solution would be to download and use the latest 3.36.0 CLI.

2021-10-21
17:28 Reply: Truncated Words bug with fts5vocab artifact: fb1e6b3c25 user: drh

The Porter Stemmer algorithm does that. It is a feature, not a bug. The idea of a "stemmer" is to map words that share a common root into the same base form, so that they appear to the search algorithm as the same word.

The Porter Stemmer algorithm (named for its inventor, Martin Porter) only works for English. But it has been widely used for over 4 decades and works fairly well within its domain.

If you don't want to use the Porter Stemmer, leave off the "porter" keyword when you declare the FTS5 table.
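
For example (placeholder table and column names):

CREATE VIRTUAL TABLE doc_stemmed USING fts5(body, tokenize='porter');  -- stemming on
CREATE VIRTUAL TABLE doc_plain USING fts5(body);                       -- default unicode61 tokenizer, no stemming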

2021-10-20
13:48 Reply: BUG carray.c memcpy() buffer overflow artifact: 191ad0797f user: drh

Now fixed on trunk. Thanks for the bug report.

11:26 Reply: How do i submit a bug report artifact: d04f5871e3 user: drh

Lemon is a command-line tool. Nobody runs Lemon who does not already have full shell access on the target machine. So memory safety issues in Lemon don't really matter that much.

Lemon generates code that is used by SQLite. But Lemon is not itself part of SQLite. If you find faults in the code that Lemon generates, that is an issue. But faults in Lemon itself are scarcely a concern.

2021-10-14
14:04 Reply: Pragma_Function_List() artifact: 48b468b7e4 user: drh

The flags column is an internal implementation detail and is subject to change. But a few of the bits are fixed. From sqlite3.h:

#define SQLITE_DETERMINISTIC    0x000000800
#define SQLITE_DIRECTONLY       0x000080000
#define SQLITE_SUBTYPE          0x000100000
#define SQLITE_INNOCUOUS        0x000200000

Thus, for example, to see a list of all functions that are not allowed to be used inside of triggers and views (SQLITE_DIRECTONLY functions) you could write:

SELECT DISTINCT name
  FROM pragma_function_list
 WHERE flags & 0x80000
 ORDER BY name;

And this gives you:

┌────────────────────┐
│        name        │
├────────────────────┤
│ fts3_tokenizer     │
│ geopoly_debug      │
│ icu_load_collation │
│ load_extension     │
│ readfile           │
│ sha3_query         │
│ writefile          │
└────────────────────┘

2021-10-13
14:48 Reply: Opening a DB with SQLITE_OPEN_EXCLUSIVE artifact: cebd13c65b user: drh

That check-in has been backed out.

  1. The change only worked for unix, not Windows. Such asymmetry, while occasionally necessary, is unwelcome.

  2. There are corner cases where it does not work on unix either. And those corner cases would be difficult to fix.

  3. We don't see a compelling need for this feature (there are better ways to accomplish the same thing) and so we do not want to risk the stability of the project in order to implement it.

12:10 Reply: binary .dump format artifact: f96d8d6001 user: drh

Is there a way to get a file similar to what .dump outputs in a binary format?

Yes. The https://www.sqlite.org/src/file/ext/misc/dbdump.c file contains a stand-alone implementation of ".dump". You can make a copy of that file and modify it to generate whatever binary output format you desire.

How do you propose to get your binary-dump format back into an SQLite database?

11:59 Reply: Build fails with SQLITE_OMIT_WAL and SQLITE_MAX_MMAP_SIZE=0 artifact: e1f858f5bb user: drh

I'd rather not remove SQLITE_OMIT_WAL from the configuration.

Why not? What harm is WAL mode causing you?

2021-10-12
14:03 Reply: sqlite3_serialize / deserialize 2GB limit on 64 bit artifact: 31a4fa7c69 user: drh

We have discussed making it work during the next release cycle. No promises, though.

2021-10-10
19:24 Reply: x86_64/amd64 sqlite3 binaries artifact: a93268ede4 user: drh

What is so hard about running "./configure && make"?

2021-10-04
11:25 Reply: Unexpected output from the SELECT artifact: e76abae426 user: drh

There are two major paths through the query planner in SQLite. The whereShortCut() path is a very quick planner that is appropriate for simple queries (only a single table in the FROM clause), and there is a slower path that does a much more detailed analysis of the query and is used for joins and queries against virtual tables and other situations where a more detailed analysis is appropriate. The problem described by this thread is essentially the same as the problem previously described by Wang Ke in forum thread eb8613976a, except that in this case the problem was in whereShortCut() rather than in the full-analysis path.

Both problems should be fixed as of check-in 8b24c177061c3836.

The changes at check-in e038ce8955e785af which the bisect above identified as the problem source actually have nothing to do with the problem at all. That change merely helped to expose the problem.

2021-10-03
10:37 Reply: Unexpected output from the SELECT artifact: 1acf29f24a user: drh

Simplified SQL:

CREATE TABLE t1(a INT);
INSERT INTO t1 VALUES(0),(3);
CREATE TABLE t2(b INT UNIQUE, c INT);
INSERT INTO t2 VALUES(1,4),(0,5);
SELECT * FROM t1 WHERE EXISTS (SELECT 1 FROM t2 WHERE a=c AND a IS b);

The one-row answer returned is incorrect. The correct answer is no rows. This can be seen by disabling the "transitive constraint" query optimization:

CREATE TABLE t1(a INT);
INSERT INTO t1 VALUES(0),(3);
CREATE TABLE t2(b INT UNIQUE, c INT);
INSERT INTO t2 VALUES(1,4),(0,5);
.echo on
SELECT * FROM t1 WHERE EXISTS (SELECT 1 FROM t2 WHERE a=c AND a IS b);
.testctrl optimizations 0x80
SELECT * FROM t1 WHERE EXISTS (SELECT 1 FROM t2 WHERE a=c AND a IS b);

The first of the two identical queries returns one row, but the second (with the 0x80 optimization disabled) returns zero rows.

The equivalent PostgreSQL query is:

CREATE TABLE t1(a INT);
INSERT INTO t1 VALUES(0),(3);
CREATE TABLE t2(b INT UNIQUE, c INT);
INSERT INTO t2 VALUES(1,4),(0,5);
SELECT * FROM t1 WHERE EXISTS (SELECT 1 FROM t2 WHERE a=c AND a IS NOT DISTINCT FROM b);

PostgreSQL gives a zero-row result, further confirming that something is wrong.

2021-10-01
13:24 Edit reply: Expired certificate artifact: 000b54e897 user: drh

This is due to the Let's Encrypt Root Cert expiration. SQLite's cert is fine. The problem is that your client has not updated its set of root certs lately.

I had the same problem with "curl" on my Ubuntu 16.04 desktop this morning. Everything else on my desktop continued to work fine, but for whatever reason, "curl" was clinging to an older set of root certificates. I solved that problem by copying the /etc/ssl/certs directory (and all of its content) from a newer Ubuntu 20.04 machine over into my desktop.

13:23 Reply: Expired certificate artifact: fbe060e94e user: drh

This is due to the Let's Encrypt Root Cert expiration. SQLite's cert is fine. The problem is that your client has not updated its set of root certs lately.

I had the same problem with "curl" on my Ubuntu 16.04 desktop this morning. Everything else on my desktop continued to work fine, but for whatever reason, "curl" was clinging to an older set of root certificates. I solved that problem by copying the /etc/ssl/certs directory (and all of its content) from a newer Ubuntu 20.04 machine over into my desktop.

2021-09-29
14:52 Reply: HTML generation issue in sqlite.org's search engine artifact: 76399be9f9 user: drh

I was not able to reproduce the anomaly with a quick check. Do you have an example of a search that generates the erroneous behavior?

13:14 Reply: HTML generation issue in sqlite.org's search engine artifact: 726e410466 user: drh

Should be fixed now.

00:08 Edit: User-defined functions in the CLI? artifact: 074d0a3dd6 user: drh

Is it possible to use sqlite3_create_function to create user-defined functions in the CLI?

2021-09-27
10:48 Reply: Test SQLITE_NOMEM result code on Linux (Memory : 8Gb) artifact: 4203b0ca76 user: drh

You can register your own memory allocator using the sqlite3_config(SQLITE_CONFIG_MALLOC, ...) interface. This is what the various SQLite test harnesses do. Normally calls to the custom malloc fall through into system malloc, but it is possible to inject faults. There is an example of doing this in the CLI source code.
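
A rough sketch of the wrapping approach (the names and the fault-injection policy are made up for illustration; this must run before sqlite3_initialize()):

#include "sqlite3.h"

static sqlite3_mem_methods defaultMem;   /* the built-in allocator */
static int faultCountdown = -1;          /* -1 means never fail */

/* Fail the Nth allocation after the countdown has been armed. */
static void *faultMalloc(int nByte){
  if( faultCountdown>0 && --faultCountdown==0 ) return 0;
  return defaultMem.xMalloc(nByte);
}

int install_fault_allocator(void){
  static sqlite3_mem_methods faultMem;
  int rc = sqlite3_config(SQLITE_CONFIG_GETMALLOC, &defaultMem);
  if( rc!=SQLITE_OK ) return rc;
  faultMem = defaultMem;            /* keep xFree, xRealloc, etc. as-is */
  faultMem.xMalloc = faultMalloc;
  return sqlite3_config(SQLITE_CONFIG_MALLOC, &faultMem);
}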

You might also set the PRAGMA hard_heap_limit to some small value in one connection, and then open a separate connection. The hard_heap_limit applies to the entire process, not to individual connections.
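
Something along these lines (the limit is in bytes and the value here is arbitrary):

PRAGMA hard_heap_limit=500000;   -- affects the whole process, not just this connection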

2021-09-23
15:12 Reply: Best way to use carray artifact: b8eaa5b6b8 user: drh

Indexing will happen in the SQLite core, not in the carray extension. Indexing will occur if you create an index:

CREATE INDEX foo_sid ON foo(sourceId);

2021-09-22
19:35 Reply: cannot start a transaction within a transaction using sqlite shell artifact: 2507664507 user: drh

I'm guessing:

  1. First window does: BEGIN; INSERT INTO ...;

  2. Second window does DELETE FROM ...; with an implicit COMMIT.

  3. First window tries to do: END;. But because END is just an alternative spelling for COMMIT, it cannot, because the second window jumped in line ahead of it. So the COMMIT fails, leaving the transaction open.

  4. All subsequent BEGIN operations fail because there is already a transaction open.

Possible solutions:

  • Use "BEGIN IMMEDIATE" instead of "BEGIN TRANSACTION".

  • Put the BEGIN and the END on separate input lines. I think (but I'm not sure - I'll need to check the code) that the CLI works a line at a time. So if the BEGIN fails, the END never has a chance to run. Larry has been in that code more recently than me → perhaps he can answer off the top of his head.

14:39 Reply: sqlite3_exec: 5th parameter artifact: 69c418c0a0 user: drh

The error message is only set and meaningful if a non-SQLITE_OK code is returned.

Note that the error message might not be set in the case of SQLITE_NOMEM, either.

12:58 Reply: vfs write amount size without WAL artifact: e97853e517 user: drh

SQLite itself will only write to the database file in page-size chunks that are aligned to a page boundary.

But beware: Intermediate VFS shims might change that. Also, an application could, if it wanted to, acquire the sqlite3_file object for the database and then invoke the xWrite method with any parameters it wants.

So, in other words, it would be reasonable to return SQLITE_IOERR or some other error code if you get an iAmt that is different from the page size, but it would not be reasonable to assert().
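
For what it is worth, such a check would sit in the shim's xWrite method, along these lines (MyFile and its fields are hypothetical; every real shim names these differently):

#include "sqlite3.h"

typedef struct MyFile MyFile;
struct MyFile {
  sqlite3_file base;     /* base class; must be first */
  sqlite3_file *pReal;   /* the underlying real file object */
  int isMainDb;          /* true if this is the main database file */
  int pageSize;          /* expected page size of the database */
};

static int myWrite(sqlite3_file *pFile, const void *pBuf,
                   int iAmt, sqlite3_int64 iOfst){
  MyFile *p = (MyFile*)pFile;
  if( p->isMainDb && iAmt!=p->pageSize ){
    return SQLITE_IOERR_WRITE;    /* reject, rather than assert() */
  }
  return p->pReal->pMethods->xWrite(p->pReal, pBuf, iAmt, iOfst);
}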

00:24 Edit: Question about UPDATE with LIMIT artifact: 062a215114 user: drh

we have the following sqlite command that works:

UPDATE feeder SET quantity = quantity + 1 WHERE quantity < 200 AND available = 1 RETURNING id, food_weigth, quantity

but we would like to add a limit 1 since we don't want to update all the feeders

something like (it doesn't work):

UPDATE feeder SET quantity = quantity + 1 WHERE quantity < 200 AND available = 1 LIMIT 1 RETURNING id, food_weigth, quantity

nyl

00:22 Edit: Question about UPDATE with LIMIT artifact: 1e1ca3b821 user: drh

we have the following sqlite command that works:

UPDATE feeder SET quantity = quantity + 1 WHERE quantity < 200 AND available = 1 RETURNING id, food_weigth, quantity

but we would like to add a limit 1 since we don't want to update all the feeders

something like (it doesn't work):

UPDATE feeder SET quantity = quantity + 1 WHERE quantity < 200 AND available = 1 LIMIT 1 RETURNING id, food_weigth, quantity

nyl

2021-09-16
20:20 Edit reply: Javascript enforcement artifact: 4f158f4697 user: drh

It is still surprisingly effective, as most robots do not simulate mouse movements, and the hyperlinks typically do not appear until after the mouse moves.

Automatic robot blocking is useful, because it means I have to spend less time blocking robots and hence have more time available to do actual programming.

19:59 Reply: Javascript enforcement artifact: 6afeb7df24 user: drh

It is still surprisingly effective, as most robots do not simulate mouse movements, and the hyperlinks typically do not appear until after the mouse moves.

Automatic robot blocking is useful, because it means I have to spend less time blocking robots and hence have more time available to do actual programming.
