SQLite Forum

Question about persistent connection & opened database file

(1) By azazabc123 on 2021-01-22 04:21:15

This very old official document said:

Test 1: 1000 INSERTs

CREATE TABLE t1(a INTEGER, b INTEGER, c VARCHAR(100));
INSERT INTO t1 VALUES(1,13153,'thirteen thousand one hundred fifty three');
INSERT INTO t1 VALUES(2,75560,'seventy five thousand five hundred sixty');
... 995 lines omitted
INSERT INTO t1 VALUES(998,66289,'sixty six thousand two hundred eighty nine');
INSERT INTO t1 VALUES(999,24322,'twenty four thousand three hundred twenty two');
INSERT INTO t1 VALUES(1000,94142,'ninety four thousand one hundred forty two');
Times (in seconds):
  • PostgreSQL: 4.373
  • MySQL: 0.114
  • SQLite 2.7.6: 13.061
  • SQLite 2.7.6 (nosync): 0.223

Because it does not have a central server to coordinate access, SQLite must close and reopen the database file, and thus invalidate its cache, for each transaction. In this test, each SQL statement is a separate transaction so the database file must be opened and closed and the cache must be flushed 1000 times.
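
In C API terms (a minimal sketch I am adding for clarity; it is not part of the quoted page), "each SQL statement is a separate transaction" means that any statement run outside an explicit BEGIN...COMMIT is auto-committed on its own:

#include <assert.h>
#include <sqlite3.h>

/* Sketch: outside BEGIN...COMMIT the connection is in autocommit
   mode, so this single INSERT is a complete transaction and pays
   the full per-transaction cost by itself. */
static void insert_one(sqlite3 *db)
{
    assert(sqlite3_get_autocommit(db));   /* no transaction open */
    sqlite3_exec(db,
        "INSERT INTO t1 VALUES(1,13153,'thirteen thousand one hundred fifty three')",
        NULL, NULL, NULL);
    assert(sqlite3_get_autocommit(db));   /* already committed */
}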

But I'm not sure whether this concept is outdated or still valid in the latest SQLite3 (3.34.1), so I'm asking here.

If I open a DB file when my application starts, keep this connection persistent, and don't close the file until the whole application quits, must SQLite3 still close and reopen the database file on each INSERT?

I didn't find a clarification about this in the Q&A or the C/C++ API reference (or maybe I missed it... sorry).

(2) By Keith Medcalf (kmedcalf) on 2021-01-22 12:06:10 in reply to 1

While the database file will not be opened and closed 1001 times, the following will occur 1001 times:

  • open the directory that contains the database
  • create a new file for the log (updating the directory)
  • allocate disk space to hold the rollback data
  • write the rollback data to the log file
  • flush the log file data to the disk
  • flush the directory update to disk
  • update the database file with the new data
  • flush the update to the database file to disk
  • deallocate the filespace allocated to the log file
  • open the directory that contains the database
  • remove the new directory entry for the log file
  • flush the updates to the directory to disk

because you are executing 1001 transactions. If the disk is "spinning rust" and the computer hardware is not lying to you, this process will take somewhere between 15 and 30 seconds. If the database is on a floppy disk it will take about 60 times longer, and if the database is on a solid-state device it will complete much more quickly (because you do not have to wait for rotational delays and head-settling time).
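The practical consequence (my own sketch, not code from this thread; the file name and the inserted values are placeholders): if you wrap the 1000 INSERTs in one explicit transaction, the journal create/flush/delete cycle listed above runs once for the whole batch instead of once per statement, while the connection stays open the entire time.

#include <stdio.h>
#include <sqlite3.h>

int main(void)
{
    sqlite3 *db;
    sqlite3_stmt *stmt;

    if (sqlite3_open("test.db", &db) != SQLITE_OK) {
        fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
        return 1;
    }

    sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS t1(a INTEGER, b INTEGER, c VARCHAR(100))",
                 NULL, NULL, NULL);

    /* One explicit transaction: the rollback journal is created,
       synced, and deleted once for the whole batch. */
    sqlite3_exec(db, "BEGIN", NULL, NULL, NULL);

    sqlite3_prepare_v2(db, "INSERT INTO t1 VALUES(?1,?2,?3)", -1, &stmt, NULL);
    for (int i = 1; i <= 1000; i++) {
        sqlite3_bind_int(stmt, 1, i);
        sqlite3_bind_int(stmt, 2, i * 13);   /* placeholder value */
        sqlite3_bind_text(stmt, 3, "placeholder text", -1, SQLITE_STATIC);
        sqlite3_step(stmt);
        sqlite3_reset(stmt);
    }
    sqlite3_finalize(stmt);

    sqlite3_exec(db, "COMMIT", NULL, NULL, NULL);
    sqlite3_close(db);
    return 0;
}

In other words, a persistent connection avoids the reopen-and-cache-flush behavior described in the old document, but each autocommitted statement still pays for the journal-and-sync sequence listed above.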

See https://sqlite.org/lockingv3.html and https://sqlite.org/atomiccommit.html for a more detailed discussion.