SQLite Forum


50 most recent forum posts by user sqweek

21:51 Reply: Unable to read while database is locked (artifact: 474e6c3806 user: sqweek)

It's more nuanced than that. In rollback journal mode (ie. the default) there can only be one writer at any time, but concurrent reads are still allowed until the writer is ready to commit its changes.

The write-path escalates through these phases:

  1. Acquire RESERVED lock. This indicates intent to write to the database and will fail if a concurrent write transaction is already in progress. RESERVED does not affect read transactions.
  2. ... make changes to tables/indices/etc ...
  3. Acquire PENDING lock. This indicates that a transaction is ready to be committed. Read transactions can no longer start once PENDING is acquired.
  4. Acquire EXCLUSIVE lock. This will fail if any read transactions are still active, so often a writer has to repeatedly retry at this point. But it must have the EXCLUSIVE lock to safely update the main database file.

Full details are here: https://sqlite.org/lockingv3.html

A consequence of the locking protocol is that any long-running transaction performed by an app has the potential to cause read transactions to fail with "database is locked". If a write transaction comes in while the long-running transaction is in progress, everything else using the DB ultimately has to wait for the long-running transaction to finish (so the write transaction can commit its changes). The application itself is in control of this delay though, as it chooses how persistent it should be in acquiring the EXCLUSIVE lock vs. giving up and aborting the write transaction.

So if you want things to work smoothly, focusing on short transactions is a good rule of thumb.

Other mechanisms to consider in this area are:

  • sqlite3_busy_timeout() which controls how long the library should transparently retry when encountering an SQLITE_BUSY error (aka "database is locked")
    • note this does not apply to all lock acquisitions - in particular, escalating from SHARED to RESERVED (ie. upgrading a read transaction into a write transaction) will fail immediately if another connection already holds RESERVED or PENDING, because waiting in this scenario could deadlock. BEGIN IMMEDIATE can be useful to avoid wasted effort arising from contention here.
  • PRAGMAs cache_spill and cache_size -- in the case of a large write transaction, the changes made to the database may exceed sqlite's configured memory cache. If this happens the cache "spills" and the writer will perform the PENDING -> EXCLUSIVE transition long before the transaction is ready to commit. This can result in a much larger window where read-transactions will fail with SQLITE_BUSY.
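
To make the busy-timeout behaviour concrete, here is a minimal Python sketch using the stdlib sqlite3 module (whose connect(timeout=...) argument is forwarded to sqlite3_busy_timeout under the hood). The file path and connection names are illustrative:

```python
import sqlite3, tempfile, os

# Two connections to the same rollback-journal database.
path = os.path.join(tempfile.mkdtemp(), "demo.db")

# timeout=0 means "don't retry on SQLITE_BUSY at all".
w1 = sqlite3.connect(path, timeout=0, isolation_level=None)
w2 = sqlite3.connect(path, timeout=0, isolation_level=None)
w1.execute("CREATE TABLE t(x)")

w1.execute("BEGIN IMMEDIATE")      # takes the RESERVED lock up front
try:
    w2.execute("BEGIN IMMEDIATE")  # second writer fails straight away
except sqlite3.OperationalError as e:
    print(e)                       # database is locked
w1.execute("COMMIT")
```

Note the second BEGIN IMMEDIATE fails immediately rather than waiting, matching the deadlock-avoidance rule described above.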

09:55 Reply: Serious error emptied my complete database! (artifact: 9851ce24d6 user: sqweek)

Note this forum is specifically for sqlite the library, and support for other software such as sqliteexpert cannot be expected here. Looking at the docs I see little to no scope for an "ALTER TABLE ADD COLUMN" command to result in data loss.


However if sqliteexpert is using the more generic approach to implement your requested schema change (see section 6), there is clearly potential for rows to not be transferred to the new table. The overall operation should still be "safe" in the sense that the entire transaction can be rolled back, but if an error partway through the process is ignored/overlooked and it proceeds to COMMIT instead of ROLLBACK, I can see this being the result.

It looks to me that the analyzer utility still sees some "consumed bytes", but I will attach the output in a separate reply.

The database that you ran the analyzer on has no data -- it is gone, kaput. The only table containing any bytes of payload is SQLITE_SCHEMA (the contents of which are what they sound like).

Could it be that my db have become completely empty due to a problem with it being stored on a kind of "OneDrive" directory? Everything in this directory is uploaded automatically to my cloud storage which also makes my daily backups.

I certainly wouldn't rule it out. Without a thorough analysis of how the cloud drives interact with file locking primitives I wouldn't trust any of them to be sufficiently robust to guarantee atomicity and durability for an sqlite database.

I wanted to try the .vacuum but it is not recognized anymore in the sqlite3.exe?

It's a proper SQL command, not a shell dot-command. ie. it is spelled "VACUUM;" rather than ".vacuum"

03:24 Reply: Trying to "Open Database" Safari History.db file and get Error: "could not open safari database file reason: unable to open.. (artifact: d264a96f01 user: sqweek)

SQLite itself does not have error messages like 'unable to open database file'.

sqlite3.c:144789:    /* SQLITE_CANTOPEN    */ "unable to open database file",

05:28 Reply: Backup API with network filesystem copy (artifact: a4e8d43d84 user: sqweek)

Note that the backup API may have to restart the process if the source database changes -- unless you ask it to copy all pages in a single call in which case it will hold a read-lock on the source DB for the duration of the backup.

This gets more relevant if the destination is elsewhere on the network, as the pager will still be writing one page at a time which is unlikely to be an optimal blocksize for non-local latencies. ie. In general I would expect .backup to a local disk followed by a typical network transfer to be competitive if not faster than a straight .backup to the network (and certainly it would mean a shorter read-lock on the source DB).
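
As a sketch of the "copy all pages in a single call" variant, Python's stdlib sqlite3 exposes the backup API as Connection.backup, where pages<=0 requests the whole copy in one step (holding the read lock for the duration, with no restarts). The connection names here are illustrative:

```python
import sqlite3

src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE t(x)")
src.execute("INSERT INTO t VALUES (1)")
src.commit()

dst = sqlite3.connect(":memory:")
# pages<=0 asks the backup API to copy everything in a single
# sqlite3_backup_step() call, so the source cannot change mid-backup.
src.backup(dst, pages=0)
print(dst.execute("SELECT x FROM t").fetchone())  # (1,)
```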

05:09 Reply: Sqlite connection management with multiple client access? (artifact: 335b079ecb user: sqweek)

Just to check, is the DB being stored on a local disk? ie. are the backend service and frontend UI running on the same machine? WAL mode only works properly when all access to the DB comes from a single machine.

Otherwise, the likely candidate is poor transaction management. After a transaction starts, any changes to the DB made by other processes will not be visible until the transaction has finished. So all it would take is one dangling transaction to exhibit this symptom.

Transactions in sqlite are a connection-level concept, so I'd also make sure you aren't sharing the same connection between multiple UIs.

09:20 Reply: Database file name encoding -- is it really UTF-8? (artifact: 02798ac965 user: sqweek)

Unix/Linux (and other UTF-8-oriented systems) probably have to defend themselves against the security concerns raised by overlong UTF-8 sequences, as for example overlong forms of the path separator /

UTF-8 mandates the shortest-encoding, ie. overlong forms are erroneous (and you will no doubt end up with one or more U+FFFD code points trying to decode them). It is also quite particular about the valid bit-patterns at each byte position -- based on a lone byte anywhere in a UTF-8 stream you can tell whether you are (a) looking at a codepoint in the 7-bit ASCII range (b) at the start of a new codepoint (and also how many bytes long the codepoint is) or (c) in the middle of a code-point.

Most UTF-8 APIs in the linux world pass strings around as byte streams, with specific functions or iterators to deal with code points only when necessary.
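
The self-describing bit patterns can be sketched in a few lines of Python; the category names are my own labels, and the overlong example reuses the path-separator case mentioned above:

```python
def classify(b: int) -> str:
    """Classify a single byte of a UTF-8 stream by its bit pattern."""
    if b < 0x80:
        return "ascii"         # 0xxxxxxx: a whole 7-bit codepoint
    if 0x80 <= b < 0xC0:
        return "continuation"  # 10xxxxxx: middle of a codepoint
    if 0xC0 <= b < 0xE0:
        return "lead-2"        # 110xxxxx: starts a 2-byte codepoint
    if 0xE0 <= b < 0xF0:
        return "lead-3"        # 1110xxxx: starts a 3-byte codepoint
    if 0xF0 <= b < 0xF8:
        return "lead-4"        # 11110xxx: starts a 4-byte codepoint
    return "invalid"           # 0xF8..0xFF never appear in valid UTF-8

print([classify(b) for b in "é".encode("utf-8")])  # ['lead-2', 'continuation']

# Overlong forms are rejected outright by a strict decoder:
try:
    bytes([0xC0, 0xAF]).decode("utf-8")  # overlong encoding of '/'
except UnicodeDecodeError:
    print("overlong form rejected")
```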

06:49 Reply: Method to temporarily disable Not Null constraint? (artifact: 9e28a9e446 user: sqweek)

Can you just disable/defer foreign key constraints and insert everything in a single pass?

It seems to me that if you just blat the contents of each table straight from source to destination you'll be good to go. The only reason I can think that this won't work is if you're changing entity IDs during the data migration.

03:01 Reply: Random crash doing SELECT (artifact: e6801cab6d user: sqweek)

It sounds like you are sharing connections between threads, in which case using SQLITE_OPEN_NOMUTEX is an especially bad idea. Even if you think you can prove that only one thread is using a particular connection at a time, the mutex might be the only thing enforcing a memory barrier.

10:10 Reply: null character sorts greater than 0x7f and less than 0x80 (artifact: edf33cb169 user: sqweek)

With the amount of time spent so far explaining the internal workings of sqlite's utf8 handling (which is not under fire in this year-old thread), I estimate that the tcl sqlite extension could have been patched at least 172 times so that it no longer inserts modified utf8.

The only complication is that changing the serialisation format may constitute a breaking change to the extension's API, or at least a weird subtle surprise for any tcl users who have existing databases that contain modified utf8 strings with embedded NUL characters, as a round trip from DB(modified utf8) -> tcl -> Tcl_UtfToExternal -> DB(standard utf8) will change the length of such data cells.

Then there's the question of what happens when someone runs an old version of the tcl extension against a DB containing the new serialisation format...

04:16 Reply: SQLite diff tool doesn't work with CRLF new line characters (artifact: a47c052b7b user: sqweek)

Obviously they are not identical, the question is why are they not identical after following the instructions that sqldiff itself gave?

It may be that the CRLFs are lost/converted when capturing or running the query -- OP what process are you using to apply the changes specified by sqldiff?

02:55 Reply: allowing reading by another process while inserting many entries? (artifact: 47ef9d108b user: sqweek)

In rollback-journal mode, other processes will still be able to read from the db while the INSERT statements are progressing. There is however a period during COMMIT where other processes are blocked from reading (their attempts will fail with error code SQLITE_BUSY). They are only blocked for a short time, just long enough for the INSERTing process to write and fsync the database pages which have been changed.

At least, this is how it works for transactions that fit in SQLite's page cache (which is held in memory). For larger transactions, processes will be blocked from reading the db from the point that the cache spills until COMMIT finishes, which can be a much longer period than outlined above. The cache behaviour can be tweaked with the cache_size and cache_spill PRAGMAs.


The other thing which can cause processes to be blocked from reading for an extended period of time is if there is a long-running read transaction happening at the time a writer wants to COMMIT. The writer has to wait for the reader to finish before it can update the database, and new readers are not allowed to start in the meantime (to avoid writer starvation).
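
The "reads keep working until COMMIT" behaviour is easy to observe from Python's stdlib sqlite3 (connection names illustrative). While the writer holds only the RESERVED lock, a second connection still reads the pre-transaction state:

```python
import sqlite3, tempfile, os

path = os.path.join(tempfile.mkdtemp(), "demo.db")
writer = sqlite3.connect(path, isolation_level=None)
reader = sqlite3.connect(path, isolation_level=None)
writer.execute("CREATE TABLE t(x)")

writer.execute("BEGIN IMMEDIATE")           # RESERVED lock: intent to write
writer.execute("INSERT INTO t VALUES (1)")  # change sits in writer's cache

# Reads are still allowed; the reader sees the pre-transaction state.
print(reader.execute("SELECT count(*) FROM t").fetchone())  # (0,)

writer.execute("COMMIT")
print(reader.execute("SELECT count(*) FROM t").fetchone())  # (1,)
```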

09:26 Reply: SIGBUS errors in HHVM (artifact: 333a213cba user: sqweek)

The DB is stored on a disk that has plenty of free space left.

Is it a local or network disk? File i/o via mmap can result in SIGBUS if accessing a network filesystem that is experiencing problems.

(possibly this is a reflection of the more general issue that the mmap API does not provide any way for I/O errors to be reported)

04:39 Reply: Bug A NULL pointer dereference bug was discovered in SQLite (artifact: ce1193be15 user: sqweek)

The NULL dereference happens if sqlite3_initialize fails. In the context of sqlite3MemdbInit at least this will never happen, because sqlite3MemdbInit is only called by sqlite3_initialize (and recursive/subsequent calls to sqlite3_initialize are no-ops).

The most recent copy of the sqlite3 source I have on hand doesn't include sqlite3_appendvfs_init so I can't check whether that one is in a similar situation :)

I haven't analysed timeOfDay for the same reason, although it does seem like it's worth accounting for sqlite3_vfs_find(0) returning NULL in this context even if it's not currently called without a successful sqlite3_initialize.

09:14 Reply: memory vs mmap (artifact: a787c5b1ee user: sqweek)
linux client: "yo fileserver I'm opening this file because I'm interested in reading its contents"
fileserver: "roger that, you want any data in particular?"
linux client: "give me 16384 bytes starting from byte 0"
fileserver: "here you go"
linux client: "thx, btw I'm caching this"
... time passes...
fileserver: "hey just fyi another client modified this file you're interested in"
linux client: "roger that, I'll invalidate my cache"

I'm not an nfs protocol expert and don't know if it looks anything like this, it's just a simple example of "technically possible."

06:14 Reply: memory vs mmap (artifact: 5c0e6a16d2 user: sqweek)

Presuming that the DB is legitimately read-only (ie. it's not accidentally/unintentionally being modified and thus fsync is not being called), I wouldn't expect immutable mode to make a significant performance difference.

I would certainly expect linux to keep the file contents in its kernel page cache, and I have observed that behaviour on the systems I work with when dealing with unchanging files on network storage. This is not really related to sqlite though, as you say.

It might be worth running a baseline test against an ordinary file to see if your nfs is broken in general or just when dealing with sqlite DBs, if that hasn't yet been established (I couldn't see a clear conclusion with a quick scan of the thread).

03:47 Reply: memory vs mmap (artifact: 6d4ea05b40 user: sqweek)

The fundamental issue with networked disk vs. local disk is latency. This can be accommodated in many workloads by queuing up i/o requests (see also readahead, asynchronous i/o), however any application which relies on the idiomatic read/write syscalls will suffer somewhat against a network fs, especially if it is seeking around.

sqlite's workload is very seek heavy and there's no reasonable way to anticipate which pages of the database file will be needed in advance, so it is especially affected by latency.

I would expect Linux needs some special configuration option to say "this network file is static, please cache it whenever it is opened".

Yes, many network caching options and configurations exist, each with various features/compromises. But sqlite calls fsync() regularly to ensure database consistency and this likely bypasses any advantage you might get from caching.

If the DB legitimately never changes you can tell sqlite when opening it to use immutable mode, and with a sufficiently large page cache that should ensure the DB isn't read more than once. However in that situation I would also question what benefit the DB is offering over eg. a flat file stored in /dev/shm
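
For reference, immutable mode is requested via a URI query parameter at open time. A minimal Python sketch (file path illustrative; the promise that the file never changes is on you):

```python
import sqlite3, tempfile, os

path = os.path.join(tempfile.mkdtemp(), "static.db")
db = sqlite3.connect(path)
db.execute("CREATE TABLE t(x)")
db.execute("INSERT INTO t VALUES (1)")
db.commit()
db.close()

# immutable=1 tells sqlite the file will never change, so it skips
# locking and change detection entirely. Only safe if that holds.
ro = sqlite3.connect(f"file:{path}?immutable=1", uri=True)
print(ro.execute("SELECT x FROM t").fetchone())  # (1,)
```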

04:23 Reply: You don't know what people are using your database for (artifact: 24e7d8a90d user: sqweek)

There should be a separate field for the passenger's mass

Haha, is "you must weigh yourself to complete user registration" the next evolution of captcha?

Giving users control over this seems almost worse from an aviation engineering perspective, especially if "a simple flaw in an IT system" happens to allow negative numbers to make it into the database XD

07:05 Reply: Trouble writing/retrieving a small UTF-8 GIF image. (artifact: 2a1c21dbb7 user: sqweek)

mime type "image/gif;charset=utf-8"

This is one of the most bizarre claims I've ever seen; GIF is a binary interchange format and utf-8 is a text encoding. I mean ok the binary data could be base64 encoded, which would kind of make sense as I can't see how else you could embed a GIF in JSON, but judging by your other observations this is not the case:

Writing the object to the console before sending it, displays many symbols. but the GIF data is only about 13 characters long. Even strlen() in C is 13 characters.

I'd wager you're looking at a binary GIF dump including embedded NUL characters (ie. '\0' in C). I'm not a master of web technologies but I'm 99% sure it's not legal to just shove that into a JSON field :)

sqlite will happily store binary data, but you need to use the correct APIs (ie. treat it as blob data and never string data).

The JSON sent back to the browser is built in C using sprintf() and %s is used for the GIF from sqlite3_column_text().

The length of C strings is determined by the presence of a terminating NUL character. You cannot use C string functions such as printf in conjunction with binary data.

but how can I tell whether or not the GIF was written completely?

"SELECT length(column_name) FROM table_name" <- the length will reflect the actual amount of data stored in the database cell. If that returns 13 for this row then the full data was not written.

03:33 Reply: Strange CLI .exit issue (artifact: b513c5b7f4 user: sqweek)

ITYM and nobody saw the fo11y of it!

08:12 Reply: Potential Bug (Record Leaking) (artifact: 509238975f user: sqweek)

It's a curious observation but I'm struggling to appreciate its applicability as any sort of attack. If you have the ability to manipulate the database's underlying record structure, why don't you just read the data of interest directly? From the operating system's perspective you must have the ability to read the entire file by definition, otherwise sqlite wouldn't be able to read it...

I'd be interested to know whether the integrity_check or quick_check PRAGMAs notice that the records overlap after munging them like this.

05:38 Reply: Runtime of and opcode counts of ANALYZE (artifact: 74f94985b2 user: sqweek)

What's the difference between the output if you run these commands in the sqlite3 shell on (a) the 20GB/7 minutes to analyze DB and (b) the 18GB/22 minutes to analyze DB?

.stats on
.timer on

(I guess also repeat this for PRAGMA integrity_check(100))

04:49 Reply: Avoiding SQLITE_BUSY during recovery (artifact: 683d3cf5d8 user: sqweek)

I went and had a look in the source, and it seems like opening a database doesn't count as 'reading from a database' and triggering the recovery, instead being triggered by the first transaction. Hence, if there is a hot journal, what I thought was going to be a read-only transaction could acquire an exclusive lock and other concurrent transactions would then see SQLITE_BUSY. Oops.

Yes that is correct, in rollback-journal mode a read-only transaction will acquire an exclusive lock to write to the database if a hot-journal is detected.

I would think you could anticipate this pretty easily by (a) forcing a trivial read transaction as an exclusive operation upon opening a database for the first time or (b) treating the initial operation as a write even if it looks like a read-only transaction.

Of course you will still encounter SQLITE_BUSY if someone is poking around the DB with eg. the sqlite3 command line tool or some other process which isn't aware of your communication protocol :)

(I won't comment on WAL journal mode as I have no experience there)

01:25 Reply: create sqlite3_analyzer from trunk (artifact: 2d0cecb427 user: sqweek)

tcl8.6-dev is definitely needed. Many distros these days split software packages up into "files you need to run the software" and "files you need to compile & link code against the software." tclConfig.sh falls in the latter category - tcl works just fine without it until you want to compile something against it.

There's no reason you should need to modify PATH or uninstall the tcl8.6 base package though (tcl8.6-dev depends on it so apt would have reinstalled it straight away).

16:22 Reply: Concurrency for in memory database (artifact: e6ef5c954a user: sqweek)

You might want to check this one:


But in the default serialized mode I'm pretty sure the mutexes involved are per-connection, so I wouldn't expect the threading mode to inhibit multiple threads which are using separate connections.

It's possible that in-memory databases don't follow exactly the same locking protocol (which is usually based on file locking primitives) and need to be more conservative about reader access. Normally the database pages held in memory provide a kind of private "staging area" for the connection to collect modifications before committing them to the database proper, I imagine in-memory databases don't have this extra layer of insulation between connections (especially in shared-cache mode).

23:27 Reply: SQLite temp file is removed but still kept by java process (artifact: e3500708fa user: sqweek)

I believe the output of lsof in the github issue is showing a bunch of threads within the same java process, rather than separate processes. Each thread of course shares the same file descriptor table, and it appears that lsof makes no attempt to be smart here and simply repeats the details of each file descriptor for every thread.

The size column is actually "SIZE/OFF" - the fact that it's changing throughout the output might tell us that the tempfile is associated with a transaction that is still actively doing something?

Here's the lsof column header, for reference:

COMMAND     PID   TID           USER   FD      TYPE             DEVICE   SIZE/OFF                NODE NAME

22:56 Reply: Interesting project https://github.com/facebookincubator/CG-SQL (artifact: c7b14b527c user: sqweek)

Seems the cql build itself doesn't require pthreads. -lpthreads is mentioned in the Makefile but only used for the test targets.

That said I'm using gcc from MinGW-W64 and a basic pthreads test does work. It provides a small libwinpthread-1.dll (~51K) but that's the only dependency of the resulting executable.

12:10 Reply: Interesting project https://github.com/facebookincubator/CG-SQL (artifact: 029bb04b3f user: sqweek)

What makes you say it's unix-only? Considering the tool is effectively reading some text files and spitting out other text files I was stunned by the notion of it being bound to a particular OS and wanted to know what atrocities the code must be committing, but lo and behold it builds and runs on windows.

I mean it definitely wasn't the smoothest build I've ever experienced, with the main pain point being finding compatible versions of bison/flex that run on windows (I only tried one version before giving up and running those build steps using bison/flex from WSL). But I've had worse build experiences just trying to tell cmake where I built a custom version of a dependent library XD

02:54 Reply: SQLite temp file is removed but still kept by java process (artifact: 0d872bdcea user: sqweek)

LSOF lists open files so the file cannot be deleted because it is open.

It has been marked for deletion and will be deleted by the Operating System when it is closed.

On unix systems you can certainly unlink files while they're open - any processes which have an open file descriptor can still read/write the data and technically it's still on disk, but it's gone from the filesystem and cannot be opened any longer.

I'm almost certain you know this Keith, so I'm not sure why you're being so unnecessarily difficult here. Commonly we refer to such files as "deleted," which is coincidentally the same nomenclature lsof uses to flag file descriptors referring to inodes which are no longer reachable via the filesystem.

03:47 Reply: What is the poll frequency of `sqlite3_busy_timeout`? (artifact: a6fa786647 user: sqweek)
sqlite3* db = sqlite3_open ....
sqlite3_busy_timeout(db, <value>);
sqlite3_busy_handler(db, sqliteDefaultBusyCallback, db);

Note that the call to sqlite3_busy_handler in this recipe is redundant -- sqlite3_busy_timeout is implemented via a call to busy_handler:

SQLITE_API int sqlite3_busy_timeout(sqlite3 *db, int ms){
  if( ms>0 ){
    sqlite3_busy_handler(db, sqliteDefaultBusyCallback, (void*)db);
    db->busyTimeout = ms;
  }else{
    sqlite3_busy_handler(db, 0, 0);
  }
  return SQLITE_OK;
}

To try and clarify the OP's understanding - sqlite invokes the busy callback as soon as a lock cannot be obtained, and it's up to the callback whether to sleep, update a GUI, abort immediately, etc etc. The busy_timeout mechanism is just there to provide a convenient default callback - if an app provides its own callback then the busy timeout is irrelevant.

02:31 Reply: Dump and online backup exclude tables (artifact: 56a47a4a13 user: sqweek)

The sqlite3_backup API operates on the page level, and I'd wager it treats the data within each page as opaque rather than parsing any of it. As such, I'd expect it doesn't know which table a page belongs to, or anything about tables at all.

Theoretically it wouldn't be a huge effort for sqlite to infer which pages are relevant for a given set of tables/indices and set up a partial backup based on that subset. However unless you start rewriting page data (and the embedded references to other pages) you'd still end up with a database of the same size, just with a bunch of empty pages (which is still potentially advantageous if you use a compressed filesystem).

OTOH if you do start parsing and rewriting page data, this is sounding less like a backup and more like a VACUUM INTO.

Of course with any approach that subsets you end up with a backup which cannot be directly swapped back in to replace the live database, ie. you must have some recovery procedure in place to regenerate whatever was filtered out of the backup. As such, what is the downside to the .dump {relevant-tables} approach?

08:58 Reply: Debugging opcodes in vdbe.c (artifact: 2990317f68 user: sqweek)

Note you can also use the stdbuf tool in the shell to control stdio buffering within pipelines. The default behaviour depends on the tool, but for those which use glibc's stdio (FILE* and friends) it goes like this:

(a) if stdout is a tty, then output is buffered on a line-by-line basis (b) otherwise, output is buffered in fixed-size chunks (for glibc, typically a few kilobytes based on BUFSIZ or the file's block size)

Which is why things can change when you tack | tee on the end of a command vs. having it output straight to the console. In this case you can prepend stdbuf -oL to whichever command is buffering to force stdout back to line buffered.

For a quick demo, start with a simple loop emitting one message per second and log timestamps when they are received:

$ for i in $(seq 1 3); do echo $i; sleep 1; done | awk '{print strftime("%F %T"), $1}'
2020-09-25 16:48:13 1
2020-09-25 16:48:14 2
2020-09-25 16:48:15 3

Now add a filter for odd numbers: sed -n '/[13579]$/p':

$ for i in $(seq 1 3); do echo $i; sleep 1; done | sed -n '/[13579]$/p' | awk '{print strftime("%F %T"), $1}'
2020-09-25 16:49:18 1
2020-09-25 16:49:18 3

Note nothing is printed for three seconds, and then both lines have the same timestamp. sed is buffering the output -- rude. stdbuf to the rescue:

$ for i in $(seq 1 3); do echo $i; sleep 1; done | stdbuf -oL sed -n '/[13579]$/p' | awk '{print strftime("%F %T"), $1}'
2020-09-25 16:50:10 1
2020-09-25 16:50:12 3

It's done this way because block buffering enables better pipeline performance compared to line buffering, but it's such a subtle thing to have tools changing behaviour like this that I feel GNU userspace would be simplified so much if block buffering was strictly opt-in...

07:38 Reply: Network filesystems for sqlite DBs (artifact: 4400357802 user: sqweek)

The loop is constantly obtaining and releasing the lock on the database (mgr->beginTransaction() calls "BEGIN EXCLUSIVE TRANSACTION"). The idea is to introduce as much contention as possible to see if the transaction locking logic breaks down under stressful circumstances.

Using BEGIN EXCLUSIVE bypasses half of sqlite's locking protocol - a more stressful test would be to use regular BEGIN. You also probably have to adapt your code to do that, as (a) sometimes the UPDATE will get an SQLITE_BUSY error, in which case you'd need to ROLLBACK and retry the transaction to maintain the correct counter value, and (b) sometimes the COMMIT will get an SQLITE_BUSY error, in which case you can either ROLLBACK as above or keep trying the COMMIT (which will succeed once the readers relinquish their locks).
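
The ROLLBACK-and-retry strategy can be sketched as a small wrapper; this is a minimal illustration in Python (run_write_txn is a hypothetical helper, not part of any sqlite API), shown here succeeding on the first attempt in the absence of contention:

```python
import sqlite3, time

def run_write_txn(conn, work, retries=5, delay=0.05):
    """Run work(conn) inside a transaction, retrying on SQLITE_BUSY.

    On failure the whole transaction is rolled back and re-run from
    scratch, so work must be safe to repeat.
    """
    for attempt in range(retries):
        try:
            conn.execute("BEGIN")
            work(conn)
            conn.execute("COMMIT")
            return True
        except sqlite3.OperationalError:
            try:
                conn.execute("ROLLBACK")
            except sqlite3.OperationalError:
                pass  # no transaction was active
            time.sleep(delay)
    return False

conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE counter(n)")
conn.execute("INSERT INTO counter VALUES (0)")
ok = run_write_txn(conn, lambda c: c.execute("UPDATE counter SET n = n + 1"))
print(ok, conn.execute("SELECT n FROM counter").fetchone())  # True (1,)
```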

However note the "keep trying" strategy is affected by a horrific bug in sqlite versions 3.17.0 to... 3.32.2? (can't tell from the ticket exactly what version it is fixed in)

06:40 Edit reply: BUG: Cannot pass position parameters into UPDATE statement (artifact: 025bd8d351 user: sqweek)

Read Keith's answer more closely. He's using python level formatting (via the % operator) to insert the column name into the query, and leaving the column values as question marks/bound parameters to be handled by sqlite.

Note he's also dropped the single quotes you had around your second question mark, so that it is treated as a bound parameter rather than a literal string containing a question mark.

06:39 Reply: BUG: Cannot pass position parameters into UPDATE statement (artifact: 530bc5b084 user: sqweek)

Read Keith's answer more closely. He's using python level formatting (via the % operator) to insert the column name into the query, and leaving the question marks as bound parameters to be handled by sqlite.

Note he's also dropped the single quotes you had around your second question mark, so that it is treated as a bound parameter rather than a literal string containing a question mark.

06:34 Reply: BUG: Cannot pass position parameters into UPDATE statement (artifact: c534289f82 user: sqweek)

This one works because it is a proper use of bound parameters. The purpose of the question marks is to be able to say to SQLite (or other database engine): "Hey here's a template of the query I want to run; I don't have the actual data just yet but go ahead and compile the query and I'll provide the data (via binding) later before executing it"

For this to succeed, the template query cannot have placeholders in positions which would affect its compilation. For your INSERT INTO ASSETS statement this is true, but for your original example it is not:


If the column which is being set changes, that also changes the compiled query. Therefore the first ? is not in a valid position. ie. you can replace a literal value with a ? and bind it in later, but you cannot do this for arbitrary syntactic elements such as column names. Does that make sense?

Of course you don't really care about the compilation step, because the API you're using does the query compilation and execution all in one step. Using bound parameters still has another advantage though, because your fix in the ticket causes another bug:

sql_update = "UPDATE ASSETS SET {0} = '{1}' WHERE ID = {2}".format(updateColumn, setValue, updateIndex)

This opens you up to SQL injection attacks. If a malicious user provides a value of "haha'; DROP TABLE ASSETS; --" then you end up building an SQL query which looks like:

sql_update = "UPDATE ASSETS SET username = 'haha'; DROP TABLE ASSETS; --' WHERE ID = 12"

Which is likely to delete all your assets upon execution.
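
A sketch of the safe version in Python: the value goes through a real bound parameter, while the column name (which cannot be bound) is checked against a whitelist. The table and column names here are taken from the example above; the whitelist is my own addition:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE assets(id INTEGER PRIMARY KEY, username TEXT)")
conn.execute("INSERT INTO assets(id, username) VALUES (12, 'old')")

# Column names cannot be bound parameters, so validate them against a
# whitelist; the *value* goes through a real bound parameter.
allowed = {"username"}
column = "username"
assert column in allowed

malicious = "haha'; DROP TABLE assets; --"
conn.execute(f"UPDATE assets SET {column} = ? WHERE id = ?", (malicious, 12))
conn.commit()

# The table survives and the payload is stored as a literal string.
print(conn.execute("SELECT username FROM assets WHERE id = 12").fetchone())
```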

05:28 Reply: null character sorts greater than 0x7f and less than 0x80 (artifact: 9359f2a979 user: sqweek)

I absolutely agree that SQLite's VDBE should not be interfering with or imposing restrictions on the encoding used by client(s).

But the tcl bindings are in a perfect position to encapsulate this tcl implementation detail, arguably they should be doing so. And since the bindings are also part of the SQLite distribution it's not unreasonable to say this is "SQLite's job".

02:18 Reply: Suggestion: Use strong hashes on the download page (artifact: bdade8a70b user: sqweek)

The only use of the hash is to check a download from a mirror.

Right, but if a weak hashing algorithm is in use then mirrors are free to publish whatever crafted code they like and go "hey look the file matches the official hash therefore you can trust it."

06:59 Reply: Any document for varint functions such as sqlite3PutVarint and sqlite3GetVarint (artifact: f1afdf85d4 user: sqweek)

The varint encoding is described in the file format documentation: https://www.sqlite.org/fileformat.html

A variable-length integer or "varint" is a static Huffman encoding of 64-bit twos-complement integers that uses less space for small positive values. A varint is between 1 and 9 bytes in length. The varint consists of either zero or more bytes which have the high-order bit set followed by a single byte with the high-order bit clear, or nine bytes, whichever is shorter. The lower seven bits of each of the first eight bytes and all 8 bits of the ninth byte are used to reconstruct the 64-bit twos-complement integer. Varints are big-endian: bits taken from the earlier byte of the varint are more significant than bits taken from the later bytes.
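
The quoted format translates into a short decoder; here's an illustrative Python sketch (it returns the raw unsigned 64-bit value, leaving the twos-complement reinterpretation of values with the top bit set to the caller):

```python
def read_varint(buf: bytes):
    """Decode one SQLite varint; returns (value, bytes consumed).

    Up to eight bytes contribute their low 7 bits (big-endian);
    if all eight have the high bit set, a ninth byte contributes
    all 8 of its bits.
    """
    value = 0
    for i in range(8):
        value = (value << 7) | (buf[i] & 0x7F)
        if buf[i] < 0x80:            # high bit clear: last byte
            return value, i + 1
    value = (value << 8) | buf[8]    # ninth byte uses all 8 bits
    return value, 9

print(read_varint(b"\x7f"))      # (127, 1)
print(read_varint(b"\x81\x00"))  # (128, 2)
```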

05:35 Reply: Possible bug in .read when a directory is specified? (artifact: 7d0704a798 user: sqweek)

Historically I recall observing the same thing, that you could even cat a directory and get garbage printed to your terminal. However I think it was widely regarded as a misfeature and there's generally been a move towards the read syscall itself returning EISDIR as noted elsewhere in the thread. I've just tested linux (3.10.0-693.5.2.el7) and OpenBSD (6.4) and they both display this behaviour.

OSX (Darwin kernel 19.5.0) reports "Operation not supported" trying to cat a directory but I'm not sure how to check the syscalls here.

02:03 Edit reply: 'No such table' returned while the database contains it (artifact: 13bdd6084d user: sqweek)

Docker uses overlay, the underlying filesystem is ext4

Does the DB file, or its paired -journal file exist on the host filesystem before docker spins up? And are both threads running in the same docker instance? (I think they belong to the same process but just checking)

I can only reproduce this by running the same test a few times (sometimes the first one is enough) on a rather powerful machine that's usually used for our CI. Not sure if that has to do with its speed or if it's related to CPU features. For what it's worth, I ran all tests in a docker image (and can't do otherwise on that machine)

Can you reproduce the issue if you run the same docker image on a different machine?

Does the app perform further DB operations (in any thread(s)) after hitting the error? Either explicitly or as part of the runtime's cleanup.

Wrapping your app with strace -f -e trace=file,read,write,fsync,fcntl,fcntl64 might be informative.

03:47 Reply: Resetting id count (artifact: e22bbb0c85 user: sqweek)

Probably you haven't created any tables that use AUTOINCREMENT columns in that database yet.
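The counter in question lives in the internal sqlite_sequence table, which only springs into existence once a table with an AUTOINCREMENT column is created. A quick sketch (the table name t is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Before any AUTOINCREMENT table exists there is no sqlite_sequence.
tables = {r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")}
print("sqlite_sequence" in tables)  # False

# Creating a table with AUTOINCREMENT brings sqlite_sequence into existence;
# a row for the table appears after the first insert.
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY AUTOINCREMENT, v TEXT)")
conn.execute("INSERT INTO t (v) VALUES ('a')")
print(conn.execute("SELECT name, seq FROM sqlite_sequence").fetchall())  # [('t', 1)]

# Resetting the id counter for table t:
conn.execute("DELETE FROM sqlite_sequence WHERE name = 't'")
```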

09:27 Reply: Can VACUUM rollback the db? (artifact: c3b654146c user: sqweek)

I'm not sure what you mean by "eol of the transaction" but you might be interested in this documentation:


If an sqlite3 object is destroyed while a transaction is open, the transaction is automatically rolled back.
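This behaviour is easy to demonstrate from Python's sqlite3 bindings (isolation_level=None gives manual transaction control): closing a connection with a transaction still open discards the uncommitted insert.

```python
import sqlite3, tempfile, os

path = os.path.join(tempfile.mkdtemp(), "demo.db")

conn = sqlite3.connect(path, isolation_level=None)  # manual transaction control
conn.execute("CREATE TABLE t (x)")
conn.execute("BEGIN")
conn.execute("INSERT INTO t VALUES (1)")
conn.close()  # transaction still open: the INSERT is rolled back

conn2 = sqlite3.connect(path)
print(conn2.execute("SELECT count(*) FROM t").fetchone()[0])  # 0
```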

02:53 Reply: Can VACUUM rollback the db? (artifact: f8e8969f30 user: sqweek)

VACUUM is just a regular transaction, so assuming a sane environment/filesystem which provides the expected locking/sync semantics this shouldn't be possible.

If this is the only concurrent access to the DB performed by your app, then it's possible that assumption doesn't hold on this customer's machine but the problem doesn't manifest in normal usage.

Now I can understand if the db was closed before the compacting and a transaction was lingering then this would make perfect sense

This scenario doesn't really make sense, because sqlite cleans up when closing a DB (assuming sqlite3_close is called and returns SQLITE_OK). The only time you should end up with a lingering transaction (aka hot journal) is if sqlite is unceremoniously killed or the machine crashes, and in that case you are correct that any connection to the DB will roll back said transaction.

If the DB was not actually closed (eg. sqlite3_close returns SQLITE_BUSY because there are still unfinalized statements associated with the connection), then the connection would remain open and would still be holding locks that should prevent VACUUM from proceeding.

02:35 Reply: Query planner fails to use obvious index (artifact: fbbcddbcce user: sqweek)

Would not scanning the whole index always be beneficial in that particular case ?

My reasoning is that scanning the index is going to involve at worst the same number of rows, and at best a much lower number of rows.

I was thinking the same thing, but after reading Keith's post closer I think his point is that the best case for the table scan makes it a very attractive option.

Just to remind myself of the query: select min(id) from test where key_id >= 1;

If you decide to use the index to look for key_id >= 1 and then find the minimum id amongst those rows, you are obliged to search the whole index to satisfy the query (well, you can skip the key_id < 1 portion).

OTOH, if you start scanning the table in order of the id column and check for key_id >= 1, you can abort the scan as soon as you find a matching row, because you've found the correct answer. So at best, the table scan actually needs to read fewer rows.

That tradeoff gets less desirable as the table grows, and obviously in your case the query planner hasn't made a great choice (or you wouldn't be here). So there may be potential here for some heuristic improvements, or it may be that sqlite is able to make a better decision if it has more information about the data (via ANALYZE as Keith suggested).
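You can watch the planner's decision on a toy version of the schema with EXPLAIN QUERY PLAN (table and column names taken from the thread; the index name and fill data are my own, and the plan output format varies between SQLite versions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (id INTEGER PRIMARY KEY, key_id INTEGER)")
conn.execute("CREATE INDEX test_key_id ON test (key_id)")
conn.executemany("INSERT INTO test VALUES (?, ?)",
                 [(i, i % 10) for i in range(1, 1001)])

# Which strategy the planner picks depends on the SQLite version and on
# any ANALYZE statistics, so print the plan rather than assume it.
for row in conn.execute(
        "EXPLAIN QUERY PLAN SELECT min(id) FROM test WHERE key_id >= 1"):
    print(row)

print(conn.execute("SELECT min(id) FROM test WHERE key_id >= 1").fetchone()[0])  # 1
```

Re-running this after an ANALYZE is a cheap way to see whether statistics change the chosen plan for your data distribution.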

03:13 Reply: Decimal128 (artifact: 720d1651e3 user: sqweek)

The epsilon discussions make me wonder if anybody ever did argue about how many angels could dance upon the head of a pin.

Anybody in stilletos can answer that old thing:
It's one for the right foot one for the left
Half an angel per pin at best
Add wings add heart add harp all set

("Velodrome" by Dessa)

11:56 Reply: unable to open a database file in a btrfs readonly snapshot filesystem (artifact: 63dd2d06b6 user: sqweek)

Is there a -journal file in the snapshot next to it? That's also part of the database.

To guarantee integrity of the database, there are situations where sqlite must write to the main file even if you are performing read only operations. For example, imagine that the btrfs snapshot is taken halfway through a COMMIT operation. At this point the main database file is partially updated, and it may be in an inconsistent state. To safely access the database, sqlite has to use the rollback journal to undo the partially written transaction, at which point the database is once again guaranteed to be in a consistent state. Obviously, on a read-only file system, attempting to write the main file will fail, resulting in a failure to open the database.

If you copy both the database and the journal out of the snapshot it will work because sqlite can write the main file.

If you copy only the main file out of the snapshot then sqlite doesn't see the -journal file and doesn't know that the database was mid-transaction. The resulting file may work correctly, or you may end up with a corrupt database, or even a database which doesn't seem corrupted but has incorrect results (eg. half a transaction committed).

(note that this mechanism is how sqlite provides durability in the case of a system crash -- a snapshot partway through a transaction is exactly analogous to a crash)
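Incidentally, SQLite can be told at open time that no write access is expected via a read-only URI filename (and, for media that genuinely cannot change, the documented immutable=1 URI parameter skips locking and journal processing entirely, though it is only safe if the file truly never changes). A minimal sketch of the URI form in Python:

```python
import sqlite3, tempfile, os

path = os.path.join(tempfile.mkdtemp(), "snap.db")
rw = sqlite3.connect(path)
rw.execute("CREATE TABLE t (x)")
rw.commit()
rw.close()

# Open the file read-only via a URI filename: reads work, writes are refused.
ro = sqlite3.connect("file:%s?mode=ro" % path, uri=True)
count = ro.execute("SELECT count(*) FROM t").fetchone()[0]

err = None
try:
    ro.execute("INSERT INTO t VALUES (1)")
except sqlite3.OperationalError as e:
    err = str(e)
print(count, err)  # 0 attempt to write a readonly database
```

Note mode=ro alone does not help with a hot journal on a read-only filesystem; rollback still needs to happen somewhere, which is why copying both files out of the snapshot is the robust approach.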

04:29 Reply: Is it safe to VACUUM on a new database connection? (artifact: 1440dae4e8 user: sqweek)

Yes, VACUUM in a separate connection is safe. It will still have to wait for locks held by the old connection to clear before it can proceed, and while it is running it will prevent other write transactions (and eventually read transactions) from starting (they'll fail with SQLITE_BUSY).

Note it's generally advisable to have an sqlite connection used by only one thread at a time, as transactionality gets hard to reason about if you have a bunch of threads interleaving different queries. Transactions happen at the connection level not the thread level.

Also note it is possible to determine what statement(s) are currently active via the sqlite3_next_stmt() interface or the SQLITE_STMT virtual table.
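A sketch of the two-connection arrangement in Python (isolation_level=None keeps the second connection in autocommit mode, since VACUUM cannot run inside a transaction; names are illustrative):

```python
import sqlite3, tempfile, os

path = os.path.join(tempfile.mkdtemp(), "app.db")

# The application's long-lived connection.
old = sqlite3.connect(path)
old.execute("CREATE TABLE t (x)")
old.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1000)])
old.execute("DELETE FROM t")   # leave free pages for VACUUM to reclaim
old.commit()                   # locks released here

# A second connection can now run VACUUM while the old one stays open.
other = sqlite3.connect(path, isolation_level=None)
other.execute("VACUUM")
```

Had `old` still been inside a transaction at this point, the VACUUM would instead fail with SQLITE_BUSY (or block, depending on the busy timeout).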

10:36 Reply: Unexepected commit result with two threads application (artifact: f92837e383 user: sqweek)

Hm nope your expectations are reasonable -- that should work fine using a separate connection for each thread.

How are you handling errors during COMMIT? Do you have a busy handler or a timeout configured?

In the default rollback journal mode, once the updates have run and the change is ready to commit¹, sqlite acquires the PENDING lock to indicate it is ready to modify the main db file, but it has to wait until all readers finish and relinquish their SHARED locks before it can acquire the EXCLUSIVE lock and actually write the new data.

If the busy timeout expires before the readers finish, then you will get an SQLITE_BUSY error. So if your policy on errors is to rollback then that would explain the behaviour you're seeing. The default configuration for a connection is no timeout, ie. you will immediately get SQLITE_BUSY if any readers are active. See https://www.sqlite.org/pragma.html#pragma_busy_timeout

¹ note this can also happen before commit time if your transaction is large enough to spill sqlite's memory cache; see https://www.sqlite.org/pragma.html#pragma_cache_spill

08:45 Reply: ORDER BY not working for a specific DB/table (artifact: a7bb45fc15 user: sqweek)

the nearest representable value of 31.0 is 30.999999999999999999.

This is pretty misleading. 31.0 can be trivially represented in binary as:

11111.0

ie. 2⁴ + 2³ + 2² + 2¹ + 2⁰

Which has a clear and precise encoding under IEEE754. At double precision, that is 0x403f000000000000.

Of course that is the same encoding that you would get for 30.999999999999999999, and to understand why I find it instructive to look at how neighbouring bit patterns are interpreted (decimal approximations courtesy of java.lang.Double):

0x403effffffffffff = 30.999999999999996
0x403f000000000000 = 31.0
0x403f000000000001 = 31.000000000000004

ie. each distinct floating point value can be thought of as representing a region on the line of real numbers, and any real number is indistinguishable from other numbers in the same region.

I'd wager the majority of floating point engines display 0x403f000000000000 as 31.0, so it's surprising that SQLite generates 30.999999999999999999, but it's not incorrect.
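Those bit-pattern interpretations are easy to check by reinterpreting the raw bytes as big-endian doubles, e.g. in Python:

```python
import struct

def bits_to_double(hexbits):
    # Reinterpret 16 hex digits as a big-endian IEEE754 double.
    return struct.unpack(">d", bytes.fromhex(hexbits))[0]

for h in ("403effffffffffff", "403f000000000000", "403f000000000001"):
    print(h, bits_to_double(h))
# 403effffffffffff 30.999999999999996
# 403f000000000000 31.0
# 403f000000000001 31.000000000000004
```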
