allowing reading by another process while inserting many entries?
(1) By 6kEs4Majrd on 2021-07-09 16:16:21 [source]
I see the following example on how to efficiently insert many entries.
BEGIN TRANSACTION; INSERT INTO tablename VALUES ('data1', 'data2'); INSERT INTO tablename VALUES ('data3', 'data4'); ... COMMIT;
I recall that while many entries are being inserted, the db is blocked from reading by other processes. But my memory may be wrong. Could anybody confirm whether this is the case?
If so, I don't want other reading processes to be blocked for a long time. Is it better to split the many INSERT INTO statements into multiple transactions instead of using one transaction for all of them? Thanks.
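For reference, the "multiple transactions" variant I have in mind would look roughly like this (Python's sqlite3 module; the db name, table name, and batch size are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect("example.db")  # hypothetical database file
conn.execute("CREATE TABLE IF NOT EXISTS tablename(a, b)")

rows = [("data%d" % (2 * i + 1), "data%d" % (2 * i + 2)) for i in range(10_000)]

BATCH = 1000  # commit every 1000 rows instead of once at the very end
for start in range(0, len(rows), BATCH):
    with conn:  # opens a transaction, commits it on success
        conn.executemany("INSERT INTO tablename VALUES (?, ?)",
                         rows[start:start + BATCH])
conn.close()
```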
If the database is in WAL mode then other processes can continue reading. But they won't see any of the INSERTs until you COMMIT.
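This is easy to see with two connections to the same file (a sketch; the db path and table are made up, and WAL requires a file-backed database, not `:memory:`):

```python
import os
import sqlite3
import tempfile

# Hypothetical demo database on disk.
path = os.path.join(tempfile.mkdtemp(), "wal_demo.db")

writer = sqlite3.connect(path, isolation_level=None)  # manage transactions by hand
writer.execute("PRAGMA journal_mode=WAL")
writer.execute("CREATE TABLE t(a, b)")

writer.execute("BEGIN")
writer.execute("INSERT INTO t VALUES ('data1', 'data2')")  # not yet committed

reader = sqlite3.connect(path)
during = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]  # old snapshot

writer.execute("COMMIT")
after = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]   # sees the insert

print(during, after)  # prints: 0 1
```

The reader is never blocked while the write transaction is open; it simply keeps seeing the last committed state until the COMMIT lands.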
In rollback-journal mode, other processes can still read from the db while the INSERT statements are in progress. There is, however, a period during COMMIT when other processes are blocked from reading (their attempts fail with error code SQLITE_BUSY). They are only blocked for a short time: just long enough for the INSERTing process to write and fsync the database pages that have been changed.
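Rather than handling SQLITE_BUSY by hand, a reader can ask SQLite to retry for a while before giving up. A minimal sketch (the db name is made up; `timeout` on `sqlite3.connect` maps to SQLite's busy timeout):

```python
import sqlite3

# Hypothetical reader connection. Without a busy timeout, a read that
# catches a writer mid-COMMIT fails immediately with "database is locked"
# (SQLITE_BUSY); with one, SQLite retries for up to 2 seconds first.
reader = sqlite3.connect("example.db", timeout=2.0)
reader.execute("PRAGMA busy_timeout = 2000")  # the same setting, as a PRAGMA
```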
At least, this is how it works for transactions that fit in SQLite's page cache (which is held in memory). For larger transactions, processes will be blocked from reading the db from the point that the cache spills until COMMIT finishes, which can be a much longer period than outlined above. The cache behaviour can be tweaked with the cache_size and cache_spill PRAGMAs.
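Those PRAGMAs are set per connection; a sketch with illustrative values (not recommendations), on a made-up db:

```python
import sqlite3

conn = sqlite3.connect("example.db")  # hypothetical database file

# A negative cache_size means "this many KiB" rather than a page count:
conn.execute("PRAGMA cache_size = -20000")  # roughly a 20 MiB page cache

# Raise the spill threshold (in pages) so large write transactions stay
# in memory longer before spilling to the database file mid-transaction:
conn.execute("PRAGMA cache_spill = 10000")
```

A bigger cache delays the point at which a large transaction starts blocking readers, at the cost of memory held by the writing process.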
The other thing which can cause processes to be blocked from reading for an extended period of time is if there is a long-running read transaction happening at the time a writer wants to COMMIT. The writer has to wait for the reader to finish before it can update the database, and new readers are not allowed to start in the meantime (to avoid writer starvation).