Your transaction size seems reasonable. You might experiment with 2,000 rows and 50,000 rows per transaction just to see whether either is an improvement.

The fact that some values are TEXT and some are INTEGER doesn't matter: SQLite doesn't use fixed-size storage for either of them. But it was definitely good to mention it in your question.

SQLite stores all the values for a row together, in the order in which the columns are defined, so it has to construct a sequence of values for each row. I believe it's fastest to construct that sequence in column order, which should mean it's fastest to supply the values in column order in the INSERT command. However, it's possible that SQLite uses a linked list internally, in which case it might be a little faster to supply the values in reverse column order. If you're getting obsessive about speed, you might try this out.

By and large, though, SQLite execution time is almost always storage-bound. SQLite's own processing is extremely optimised and very fast, but most of the time taken by an INSERT is spent waiting for your storage subsystem to process reads and writes. Worry less about syntax and more about minimising the number of calls to storage.

One possibility, if this million-row table is useless unless complete, is to set one or more PRAGMAs that relax durability and crash-safety guarantees while the table is being constructed, then set them back afterwards. These two PRAGMAs might change things (and they might not, depending on your setup!):

<https://sqlite.org/pragma.html#pragma_synchronous>

<https://sqlite.org/pragma.html#pragma_wal_autocheckpoint>

For each PRAGMA, find out what the setting is before changing it, and change it back when your million rows are done. Test that out, and find out whether it speeds things up. If it doesn't, don't do it.
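Here's a rough sketch of both ideas together, using Python's built-in `sqlite3` module: inserting in batches of a tunable size, and saving, relaxing, and restoring `PRAGMA synchronous` around the load. The table name, columns, and batch size are placeholders for your own schema; whether `synchronous = OFF` actually helps depends on your storage, so measure before keeping it.

```python
import sqlite3

def bulk_insert(conn, rows, batch_size=2000):
    """Insert rows in batches, relaxing PRAGMA synchronous for the load.

    Table and column names here are hypothetical placeholders.
    """
    cur = conn.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS t (id INTEGER, name TEXT)")

    # Record the current setting so it can be restored afterwards.
    old_sync = cur.execute("PRAGMA synchronous").fetchone()[0]
    cur.execute("PRAGMA synchronous = OFF")  # skip fsync while loading
    try:
        for start in range(0, len(rows), batch_size):
            with conn:  # one transaction per batch of rows
                conn.executemany(
                    "INSERT INTO t (id, name) VALUES (?, ?)",
                    rows[start:start + batch_size],
                )
    finally:
        # Put the original durability setting back, whatever happens.
        cur.execute(f"PRAGMA synchronous = {old_sync}")
    return cur.execute("SELECT COUNT(*) FROM t").fetchone()[0]

conn = sqlite3.connect(":memory:")
n = bulk_insert(conn, [(i, f"name{i}") for i in range(10_000)])
```

Try `batch_size=2000` against `batch_size=50000` with a stopwatch; the `try`/`finally` makes sure the PRAGMA is restored even if the load fails partway through.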