Hi all, I have a table like this:

```
CREATE TABLE data (a INTEGER NOT NULL, b INTEGER NOT NULL, c INTEGER, d INTEGER, PRIMARY KEY (a, b)) WITHOUT ROWID;
```

I want to fill this table with ~20 million rows as fast as possible, so I already do the usual things (one big transaction, prepared statements); inserting in primary-key order also helps a bit. This takes ~19-23 seconds on my machine. However, that is still roughly 3-4x slower than inserting into a table without a primary key and with a rowid, which takes ~5-6 seconds. In both cases SQLite is bottlenecked on CPU, not on disk.

I tried a couple of things already, but they didn't help much:

1. Using an in-memory database. Helps a bit, but it is still slower than the on-disk table without the primary key.
2. Disabling the journal. Didn't do a lot, presumably because the table was empty initially.
3. Inserting all the data into an identical in-memory temporary table "data2", then running `INSERT INTO data SELECT * FROM data2`. Ideally, SQLite would see that the data is already sorted and unique and skip some of the work; this was intended to simulate what would happen if I inserted from a virtual table. It was roughly as fast as inserting directly into "data".
4. Creating and filling a normal table, then building an index afterwards. Not ideal, because it produces a larger database and is slower when reading, and it turned out to be slower than the original approach anyway.

Are there approaches I'm missing? Is there something I can tell SQLite to make this faster? Or is there a way to improve SQLite itself for this use case? Ideally, approach 3 (with a virtual table) could skip a lot of the insertion work.
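For concreteness, here is a small sketch of the load I'm describing, using Python's built-in `sqlite3` module (my real code isn't Python, and the row count here is reduced so it runs quickly). It shows the one-big-transaction + prepared-statement pattern with the no-journal variant from point 2; the `bulk_insert` helper name is just for this illustration:

```python
import sqlite3

def bulk_insert(rows):
    """Load rows into the WITHOUT ROWID table inside one transaction."""
    conn = sqlite3.connect(":memory:")            # in-memory db (point 1)
    conn.execute("PRAGMA journal_mode = OFF")     # no journal (point 2)
    conn.execute("PRAGMA synchronous = OFF")      # skip syncing during the load
    conn.execute(
        "CREATE TABLE data (a INTEGER NOT NULL, b INTEGER NOT NULL, "
        "c INTEGER, d INTEGER, PRIMARY KEY (a, b)) WITHOUT ROWID"
    )
    with conn:  # one big transaction around the whole load
        # executemany reuses a single prepared statement internally
        conn.executemany("INSERT INTO data VALUES (?, ?, ?, ?)", rows)
    return conn

# Rows generated in (a, b) order, so each insert appends to the key b-tree.
rows = ((i, j, i + j, i * j) for i in range(100) for j in range(100))
conn = bulk_insert(rows)
print(conn.execute("SELECT count(*) FROM data").fetchone()[0])  # 10000
```

Even with all of that in place, the composite-key b-tree maintenance is where the CPU time goes, which is the part I'd like to avoid.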