Memory-mapped I/O used for temp files even though memory-mapped I/O is not enabled
While investigating out-of-memory issues, we noticed that memory usage spikes during large sort operations. It turns out SQLite uses memory-mapped file I/O for its temp files, which in our case can be several GB in size.
The mmap_size PRAGMA is never set on our connections.
It looks like this bit of code in vdbeSorterOpenTempFile() (vdbesort.c) is setting the mmap limit to SQLITE_MAX_MMAP_SIZE (~2GB):

    i64 max = SQLITE_MAX_MMAP_SIZE;
    sqlite3OsFileControlHint(*ppFd, SQLITE_FCNTL_MMAP_SIZE, (void*)&max);
To work around this, we compile with SQLITE_MAX_MMAP_SIZE=0.
Is this expected behavior?
Platform: Windows 32-bit.
Investigating this further, a potential fix is for the temp files to follow the main DB connection's mmap setting, like so:

    i64 max = db->szMmap;
    sqlite3OsFileControlHint(*ppFd, SQLITE_FCNTL_MMAP_SIZE, (void*)&max);
This would be similar to how the cache size for temp DBs already follows the main DB's cache size:

    mxCache = db->aDb[0].pSchema->cache_size;
Note that this limits total temp-DB memory usage to a multiple of the main DB's setting, since the limit applies to each temp file individually and a single query may need several of them.
Current thinking is that memory mapping isn't much of an advantage when sorting data, at least not on modern Linux (and presumably other modern OSes as well). So it will be turned off by default for 3.37.0: