SQLite Forum

Tips for getting SQLite under Python to perform better with more RAM?
The data is only loaded from the actual device to the cache when it is read.

The "first read" must always be to where ever the data is stored (physical device, OS cache if it has been read before by the OS and cached in the OS cache, application cache if the application has previously read the data and stored it in the cache).

Increasing memory will have no effect on the time required to fulfill the first request, but *may* reduce the time required for subsequent accesses.  (I say *may* because the difference in time between doing an I/O to a physical device and reusing the same data already read varies greatly depending on the storage device.  The time required to do a physical device read may be astronomical if the underlying device is a secretary and the data you want is located on a different continent (especially if the airlines are not operating at full capacity yet).  On the other hand, if the device is a PCIe x4-connected SSD, then there may be very little detectable difference.)
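For example, under Python you can see the effect by running the same query twice on one connection and timing it; a minimal sketch (the database path, table name, and query below are placeholders, not anything from this thread):

```python
import sqlite3
import time

conn = sqlite3.connect("example.db")          # placeholder database

def timed_read(label):
    start = time.perf_counter()
    conn.execute("SELECT count(*) FROM my_table").fetchone()   # placeholder query
    print(f"{label}: {time.perf_counter() - start:.6f} s")

timed_read("first read")    # pages pulled from device / OS cache into the page cache
timed_read("second read")   # pages served from the application page cache
conn.close()
```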

If the connection is closed, the application page cache is discarded.  Each new connection is created with an empty page cache.

If the underlying database file is updated by a different connection, the application page cache is discarded.  (A mode=ro connection will detect the change and invalidate the cache -- an immutable=1 connection will not detect the change and will not invalidate the cache)
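Under Python both of those are expressed as URI filenames; a sketch (the path is a placeholder):

```python
import sqlite3

# mode=ro: read-only, but SQLite still detects changes made by other
# connections and invalidates its page cache when the file has changed.
ro_conn = sqlite3.connect("file:example.db?mode=ro", uri=True)

# immutable=1: SQLite assumes the file can never change, so it skips change
# detection (and locking) entirely and never invalidates its page cache.
# Only safe if nothing modifies the file while it is open.
imm_conn = sqlite3.connect("file:example.db?immutable=1", uri=True)
```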

So yes.  If you are holding the connection open to perform multiple operations and the database is not updated (or is immutable), then having a sufficiently sized page cache will increase the access speed of pages after they have been read once.  
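In Python terms that means keeping one connection around rather than opening a new one per query; a rough sketch (path and query are placeholders):

```python
import sqlite3
from contextlib import closing

DB = "example.db"            # placeholder path
QUERY = "SELECT * FROM t"    # placeholder query

# Anti-pattern: a fresh connection per operation starts with an empty page
# cache, so every call pays the "first read" cost all over again.
def query_with_new_connection():
    with closing(sqlite3.connect(DB)) as conn:
        return conn.execute(QUERY).fetchall()

# Preferred: hold one connection open so pages read once stay cached for
# later operations (until another connection updates the file).
long_lived = sqlite3.connect(DB)

def query_with_shared_connection():
    return long_lived.execute(QUERY).fetchall()
```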

However, it takes time to search the page cache for the desired page, and eventually the time taken to "search the cache" will exceed the time it would take to just "do the I/O" (by that point the "I/O" is generally just a transition to Supervisor mode to service the request from the OS cache).

Where exactly this sweet spot lies varies considerably depending on the particular configuration of the host computer (and also the workload). 

Generally speaking, optimum performance will be obtained somewhere around a 95% cache hit rate, and "writing" or "updating" will require a cache large enough to maintain that 95% hit rate **and** hold the working change-set (so a bigger cache is required for read/write operations, while a smaller cache suffices for read-only).
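Python's sqlite3 module does not expose the cache hit/miss counters, so there is no convenient way to measure the hit rate directly from Python; as a crude stand-in you can provision the cache from the file size plus some headroom for writes. This is a rough heuristic only (not a rule from this post), since the real sweet spot depends on the workload as noted above:

```python
import sqlite3

def provision_cache(conn, fraction=1.0, write_headroom_kib=0):
    # Crude heuristic: cache enough for `fraction` of the file, plus headroom.
    page_size = conn.execute("PRAGMA page_size").fetchone()[0]
    page_count = conn.execute("PRAGMA page_count").fetchone()[0]
    target_kib = int(page_count * page_size * fraction / 1024) + write_headroom_kib
    # A negative cache_size is interpreted as KiB rather than pages.
    conn.execute(f"PRAGMA cache_size = -{target_kib}")

conn = sqlite3.connect("example.db")                            # placeholder path
provision_cache(conn, fraction=1.0, write_headroom_kib=65536)   # whole file + ~64 MiB
```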

I have pre-configured my build of SQLite3 to always use 1 GB of application page cache per connection on 64-bit machines (the equivalent of `pragma cache_size=-1048576`, obtained by setting SQLITE_DEFAULT_CACHE_SIZE to -1048576 when compiling the library).
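If you are using a stock library that was not compiled with that default, the same thing can be requested per connection from Python (the path is a placeholder):

```python
import sqlite3

conn = sqlite3.connect("example.db")            # placeholder path
# Negative values are KiB, so -1048576 asks for roughly 1 GiB of page cache.
conn.execute("PRAGMA cache_size = -1048576")
```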