SQLite Forum

Tips for getting SQLite under Python to perform better with more RAM?
As others mentioned, you can use PRAGMA mmap_size, but that will only cover the first 2GB of the database — a 32-bit limit imposed by SQLite itself (the default compile-time SQLITE_MAX_MMAP_SIZE), even on 64-bit platforms.
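A minimal sketch of turning the memory map on from Python's sqlite3 module (the temporary database path is just for illustration; SQLite will silently clamp the requested size to its compile-time maximum):

```python
import os
import sqlite3
import tempfile

# A throwaway on-disk database just for illustration; mmap does not
# apply to in-memory databases.
db_path = os.path.join(tempfile.mkdtemp(), "example.db")
conn = sqlite3.connect(db_path)

# Request up to ~2 GB of memory-mapped I/O.  SQLite clamps the value to
# its compile-time SQLITE_MAX_MMAP_SIZE (0x7fff0000 by default, just
# under 2 GB), so asking for more has no additional effect.
conn.execute("PRAGMA mmap_size = 2147418112")

# Read the pragma back to see what the library actually granted
# (it reports 0 if mmap is disabled in this build).
mmap_size = conn.execute("PRAGMA mmap_size").fetchone()[0]
conn.close()
```

Reading the pragma back is worthwhile because the grant depends on how the library was compiled, not just on what you asked for.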

The Python GIL is released while SQLite code is running, but SQLite itself then has a per-database-handle mutex, so you won't get concurrency on the SQLite side unless you open the database multiple times.  Consequently, you'll get the best performance if you open the database separately in each thread.  (Note: the sqlite3 module does not support cross-thread usage of the same db handle.)
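A sketch of the one-connection-per-thread pattern (the database name and query are placeholders; each thread gets its own SQLite handle and therefore its own handle mutex):

```python
import sqlite3
import threading

DB_PATH = ":memory:"  # hypothetical; use your real database file here

def worker(results, i):
    # Each thread opens its own connection, so SQLite's per-handle
    # mutex never serialises work across threads.
    conn = sqlite3.connect(DB_PATH)
    try:
        results[i] = conn.execute("SELECT 1").fetchone()[0]
    finally:
        conn.close()

results = [None] * 4
threads = [threading.Thread(target=worker, args=(results, i)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Note that `:memory:` databases are private to each connection, so for real shared data you would point every thread at the same on-disk file.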

The number of different query strings you have will also matter.  Under the hood, each text query string gets compiled into a C-level sqlite3_stmt, which is what actually executes the query.  The cached_statements parameter to connect controls a cache mapping text strings to sqlite3_stmt objects, defaulting to 100.  If you have more than 100 distinct query strings, things will go slower, because the text-to-sqlite3_stmt compilation is quite slow.
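A sketch of sizing the statement cache to match the workload, assuming roughly 200 distinct query texts (each distinct literal below produces a distinct SQL string, so each occupies its own cache slot):

```python
import sqlite3

# Size the text -> sqlite3_stmt cache above the number of distinct
# query strings the application uses; the default is only 100.
conn = sqlite3.connect(":memory:", cached_statements=300)

values = []
for i in range(200):
    # 200 distinct query texts: with cached_statements=300 none of
    # them is evicted, so re-running any of them skips recompilation.
    values.append(conn.execute(f"SELECT {i}").fetchone()[0])
conn.close()
```

In real code the better fix is usually to use parameterised queries (`SELECT ... WHERE x = ?`) so many logical queries share one cached statement, and to raise cached_statements only for the genuinely distinct texts that remain.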

Beyond this, performance will depend on the usual factors: profiling, your schema, indices, and so on.

I did implement a tracer in APSW that runs against your unmodified program and can show all queries summarised by popularity, execution time, etc.: [https://rogerbinns.github.io/apsw/execution.html#apsw-trace](https://rogerbinns.github.io/apsw/execution.html#apsw-trace)