I'm building a single-process, WAL-mode versioned database. Roughly speaking, each row of each table gets an extra "revision#" sub-key column, and data is essentially append-only: rows are never modified or deleted once written.

Since I'm already paying that overhead, it seems a shame (conceptually, anyway) not to enable dirty reads, given that I only ever read rows that are never updated or deleted. If I understand correctly, a separate benefit might be keeping reads from competing with checkpointing.

After reading the docs and many posts, I *think* that to get dirty reads I need both `PRAGMA read_uncommitted` and `sqlite3_enable_shared_cache()`, producing a process-wide shared cache. Presumably, to overlap writes with dirty reads I then need at least two threads, each with its own connection: a writer thread and a (dirty) reader thread. Have I understood the architecture more or less correctly?

I'm fairly sure SQLite serializes writes, but I'm less clear on how granular the multithreading is in SQLite's internals with respect to reads. If [this article](https://blog.expensify.com/2018/01/08/scaling-sqlite-to-4m-qps-on-a-single-server/) is right, adding multiple dirty-reader threads should scale reasonably, with the usual caveats. On the other hand, is it plausible that dirty reads aren't worth the effort and would yield little additional maximum throughput?
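For concreteness, here's a minimal sketch of the setup I have in mind, written in Python for brevity. It uses a named in-memory shared-cache database (rather than a WAL file on disk) and the `cache=shared` URI parameter in place of the C-level `sqlite3_enable_shared_cache()` call; the table and column names are just placeholders:

```python
import sqlite3

# Two connections sharing one cache (the URI form of shared-cache mode).
db = "file:demo?mode=memory&cache=shared"
writer = sqlite3.connect(db, uri=True, isolation_level=None)
reader = sqlite3.connect(db, uri=True, isolation_level=None)

# Without this pragma, the SELECT below would fail with
# "database table is locked" while the writer's transaction is open.
reader.execute("PRAGMA read_uncommitted = 1")

writer.execute("CREATE TABLE t (id INTEGER, rev INTEGER)")
writer.execute("BEGIN")
writer.execute("INSERT INTO t VALUES (1, 1)")

# Dirty read: the reader sees the row before the writer commits.
rows = reader.execute("SELECT * FROM t").fetchall()
print(rows)

writer.execute("COMMIT")
```

In the real system the two connections would live on separate threads (writer vs. dirty reader), each with its own connection as required by shared-cache mode.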