SQLite Forum

DB read fails while doing parallel bulk DB updates
This is good to hear.  Each of these has a few caveats and there are some relevant assumptions that have to do with the environment in which the application is running and the particular resources it has available.

Firstly, I would recommend leaving SQLITE\_CONFIG\_SERIALIZED as the default unless you find that it is causing a problem -- which would likely manifest as, say, high CPU usage or excessive mutex blocking.  There is no point in going to all the trouble of trying to use a different serialization mode if it will not provide a benefit.
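As a small aside, if you are on Python you can at least inspect which threading mode the underlying library was compiled with -- a minimal sketch, assuming CPython 3.11 or later (earlier versions hard-code this attribute to 1):

```python
import sqlite3

# DB-API threadsafety level as reported by the sqlite3 module:
# 0 = single-thread, 1 = multi-thread, 3 = serialized
# (serialized corresponds to the SQLITE_CONFIG_SERIALIZED default).
print(sqlite3.threadsafety)
```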

Secondly, WAL mode provides a trade-off between concurrency and I/O.  In particular it *increases* concurrency (reading while writing) at the cost of *increasing* the amount of I/O (more or less double the I/O in order to provide the concurrency).  In effect this means that your I/O will saturate in half the time it would otherwise, or, if it is already saturated, that it will become "half as fast" as it was before.  Depending on where your database is stored this could make a big difference -- for example, if it is stored on a slow USB NAND device then the effect will be quite noticeable; if it is stored on an NVMe SSD with 4.2 Gbit throughput, not so much.  This is because the concurrency is provided by using a "write-aside" log so as not to interrupt reading; however, you will periodically have to "checkpoint" this write-aside buffer back into the actual database file, which is an *increase* in total I/O, not a reduction.  The theory is that this "checkpointing" will occur quickly enough to result in a net gain in throughput, but it still carries the cost of increased I/O, and the gain only holds *as long as the I/O channel is not full*.
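To make the checkpoint cost concrete, here is a minimal sketch in Python (stdlib `sqlite3`, file names are just for illustration): it switches a database into WAL mode and then explicitly checkpoints the write-aside log -- the second write of the same pages that accounts for the doubled I/O:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)

# journal_mode is persistent: once set, the file stays in WAL mode.
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)  # 'wal'

conn.execute("CREATE TABLE t(x)")
conn.execute("INSERT INTO t VALUES (1)")
conn.commit()  # pages are written once, into the -wal file

# Checkpoint: move the write-aside log back into the main database
# file.  This is the *second* write of the same pages -- the extra
# I/O that WAL trades for read-while-write concurrency.
busy, log_frames, ckpt_frames = conn.execute(
    "PRAGMA wal_checkpoint(TRUNCATE)").fetchone()
conn.close()
```

Normally you would let SQLite's automatic checkpointing handle this; the explicit `wal_checkpoint` is only there to make the second write visible.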

Thirdly, when NOT using WAL, read operations will busy-wait while database writes are occurring.  That does not mean during the BEGIN/UPDATE/INSERT phase (assuming that pragma cache\_size is big enough to hold all the changed pages), but rather when the transaction is actually being "COMMIT"ed to the database.  Similarly, the "COMMIT" will busy-wait until the READ has completed.  Therefore, transactions in which you *know* you will be updating the database should be started with "BEGIN IMMEDIATE" rather than a plain BEGIN, so that the database knows this transaction *is intending* to update the database.
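In Python that looks like the following sketch (using `isolation_level=None` so the statements are sent to SQLite verbatim rather than through the module's implicit transaction management):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
# isolation_level=None gives us explicit transaction control.
conn = sqlite3.connect(path, isolation_level=None)
conn.execute("CREATE TABLE t(x)")

# Declare the write intent up front: BEGIN IMMEDIATE acquires the
# write lock now, instead of a plain BEGIN that would try to upgrade
# from a read lock mid-transaction (which can fail with SQLITE_BUSY).
conn.execute("BEGIN IMMEDIATE")
conn.execute("INSERT INTO t VALUES (1)")
conn.execute("COMMIT")

count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
conn.close()
```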

Fourthly, the busy handler itself is a polling loop, not a wait.  That is, when the "reader" tries to get access to the database it may be denied because something is "writing" to it (a COMMIT for a not-WAL database, or a checkpoint of a WAL database).  When this occurs the busy-handler does not block waiting to acquire the READ lock; rather, it enters a polling loop where it sleeps for a time, then checks again for the lock, lather/rinse/repeat until the lock can be acquired.  Normally this works OK, but sometimes the particular environment does not support usleep (or does not properly report that it supports usleep), and so the sleep occurs in increments of seconds rather than milliseconds.  (This is rare, but if your instrumentation indicates that the sleep/retry is always lasting an integral number of seconds, you may be experiencing this.)
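You can see the busy handler's role by disabling it.  A minimal sketch in Python: the first connection holds the write lock, and the second -- opened with `timeout=0`, i.e. no busy handler retries at all -- fails immediately instead of entering the sleep-and-retry loop:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
writer = sqlite3.connect(path, isolation_level=None)
writer.execute("CREATE TABLE t(x)")
writer.execute("BEGIN IMMEDIATE")   # hold the write lock

# timeout=0 disables the busy handler: no polling loop, so the
# conflicting lock request fails at once with SQLITE_BUSY.
other = sqlite3.connect(path, isolation_level=None, timeout=0)
err = None
try:
    other.execute("BEGIN IMMEDIATE")
except sqlite3.OperationalError as e:
    err = e
    print(e)   # database is locked

writer.execute("COMMIT")
other.close()
writer.close()
```

With a non-zero `timeout` the second `BEGIN IMMEDIATE` would instead sit in the poll-and-sleep loop described above until the lock frees up or the timeout expires.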

On *some* Operating Systems, the default busy-handler can implement "blocking" rather than "polling" through file_control calls.  This can be enabled by compiling the sqlite3.c code with the SQLITE\_ENABLE\_SETLK\_TIMEOUT preprocessor symbol defined.  This attempts to change the busy-handler loop to use "block waiting to acquire the lock" loops rather than "poll and sleep" loops.  It may or may not work on your OS, but it may solve the issue by passing the blocking to the OS scheduler rather than using a polling loop.  (In which case a call to yield the current timeslice to the scheduler will work as you would expect, because the blocked thread will immediately become unblocked and dispatchable.)

This raises another question.  You say that the "reading" operation is triggered by an incoming hardware event, though you do not say how long the response process lasts, only that it has response-time requirements.

Have you considered whether it would be possible to run this as a single database loop like the following:

```
while True:
    begin immediate
    do updates/inserts/whatever batch
    commit
    while events > 0:
        process event
```

and have the HW event handler merely increment the events counter?  If the time taken to "process event" plus the time taken to process the "begin...commit" block were less than a second, you would have met your response requirement, there would be no possibility of event processing taking too long, and you could not have database contention.
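The loop above can be sketched in Python.  This is only an illustration of the shape of the design, not your actual system: the `events` counter and `hw_event_handler` stand in for whatever your hardware event mechanism is, the loop body is a placeholder batch, and a single pass replaces the `while True`:

```python
import sqlite3
import threading

# Shared counter incremented by the (simulated) HW event handler.
events = 0
events_lock = threading.Lock()

def hw_event_handler():
    # The real handler would run in event/interrupt context and do
    # nothing but bump the counter -- no database access at all.
    global events
    with events_lock:
        events += 1

conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE log(ev)")

hw_event_handler()   # simulate two incoming hardware events
hw_event_handler()

for _ in range(1):   # one pass of the "while True" loop
    conn.execute("BEGIN IMMEDIATE")
    conn.execute("INSERT INTO log VALUES ('batch')")   # the update batch
    conn.execute("COMMIT")
    # Drain the event counter; "process event" is a placeholder insert.
    while True:
        with events_lock:
            if events == 0:
                break
            events -= 1
        conn.execute("INSERT INTO log VALUES ('event')")

rows = conn.execute("SELECT COUNT(*) FROM log").fetchone()[0]
```

Since all database work happens on this one loop, the reader/writer contention discussed above disappears by construction.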