Is inserting multiple rows with one command better than inserting them each with one command?
(1) By 6kEs4Majrd on 2022-03-02 01:02:56 [link] [source]
https://stackoverflow.com/questions/1609637/is-it-possible-to-insert-multiple-rows-at-a-time-in-an-sqlite-database
I see that I can insert multiple entries with one command above.
What is the difference between this method and inserting each entry with its own command? Is the former faster than the latter, or are they practically the same in performance (perhaps the multi-row form is internally broken into multiple one-entry commands)?
Note: when I say I add each entry with one command, I mean I first run
BEGIN IMMEDIATE;
Then run all the insert commands (one command for one entry), then run
COMMIT;
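To make the comparison concrete, here is a minimal sketch of the two variants using Python's stdlib sqlite3 module (table name and columns are made up; `isolation_level=None` puts the connection in autocommit mode so the explicit BEGIN IMMEDIATE/COMMIT are honored as written):

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)  # manage transactions manually
con.execute("CREATE TABLE t (id INTEGER, name TEXT)")

# Variant A: one INSERT command per row, all inside a single transaction.
con.execute("BEGIN IMMEDIATE")
for i in range(100):
    con.execute("INSERT INTO t VALUES (?, ?)", (i, "a"))
con.execute("COMMIT")

# Variant B: a single INSERT with a multi-row VALUES clause.
con.execute("BEGIN IMMEDIATE")
con.execute("INSERT INTO t VALUES " + ",".join(["(?, ?)"] * 100),
            [v for i in range(100) for v in (i, "a")])
con.execute("COMMIT")

print(con.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 200
```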
(2) By Larry Brasfield (larrybr) on 2022-03-02 01:17:43 in reply to 1 [link] [source]
There will be a slight execution time savings when using multiple VALUES clauses.
However, if you care about speed, you are better off compiling an INSERT statement once, then binding values to its named (or positional) parameters and rerunning it for each row. This eliminates the parsing and virtual machine setup repetition.
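In the C API that means one sqlite3_prepare_v2() call, then sqlite3_bind_*/sqlite3_step/sqlite3_reset per row. As a rough Python sketch (table and column names made up), executemany gets the same effect, since the statement text is compiled once and each tuple is bound to the same prepared statement:

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)  # autocommit; explicit transaction below
con.execute("CREATE TABLE t (id INTEGER, name TEXT)")

rows = [(i, f"name{i}") for i in range(10_000)]

con.execute("BEGIN")
# The INSERT is compiled once; each tuple is bound and the statement
# re-run, mirroring prepare/bind/step/reset in the C API.
con.executemany("INSERT INTO t (id, name) VALUES (?, ?)", rows)
con.execute("COMMIT")

print(con.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 10000
```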
(3) By SeverKetor on 2022-03-02 01:29:54 in reply to 1 [link] [source]
I did some testing a while back and found the best way was to insert dozens of rows per insert, and re-use the statement multiple times to do the entire insert.
I'm not sure how much it matters, but I did this in Python. Your mileage may vary.
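A sketch of that batching approach in Python (batch size and table are made up; note that rows-per-batch times columns must stay under SQLite's bound-parameter limit):

```python
import sqlite3
import itertools

BATCH = 50  # rows per INSERT statement; "dozens", as described above

con = sqlite3.connect(":memory:", isolation_level=None)  # manage transactions manually
con.execute("CREATE TABLE t (id INTEGER, name TEXT)")
rows = [(i, f"name{i}") for i in range(1000)]

# One multi-row INSERT statement, reused for every full batch.
sql = "INSERT INTO t (id, name) VALUES " + ",".join(["(?, ?)"] * BATCH)

con.execute("BEGIN")
it = iter(rows)
while True:
    batch = list(itertools.islice(it, BATCH))
    if not batch:
        break
    if len(batch) == BATCH:
        con.execute(sql, [v for row in batch for v in row])
    else:
        # Leftover partial batch: fall back to one-row inserts.
        con.executemany("INSERT INTO t (id, name) VALUES (?, ?)", batch)
con.execute("COMMIT")

print(con.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 1000
```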
(4) By ddevienne on 2022-03-02 11:26:14 in reply to 1 [source]
Yes. The gains are not huge, but nothing to sneeze at.
Was using a small row, with 16-byte blobs, and 87-char strings.
No PK or indexes, so not 100% real-world. SSD disks (not in-memory).
FWIW. --DD
CREATE TABLE ut_ingest (id int NOT NULL, guid blob, name text)
Win64 RELEASE:
Ingesting 100,000 rows, totalling 10.2 MB (10,700,000 bytes)
SQLite w/o Prep fmt.... in 0.410s ( 244,109 row/s 24.9 MB/s) 48.1x 0.4x
SQLite w/o Prep snp.... in 0.440s ( 227,384 row/s 23.2 MB/s) 44.8x 0.3x
SQLite w/ Prep x1..... in 0.147s ( 678,899 row/s 69.3 MB/s) 133.8x = 1.0x
SQLite w/ Prep x10.... in 0.126s ( 796,134 row/s 81.2 MB/s) 156.9x 1.2x
SQLite w/ Prep x100... in 0.131s ( 763,677 row/s 77.9 MB/s) 150.5x 1.1x
SQLite w/ Prep x1000.. in 0.141s ( 710,142 row/s 72.5 MB/s) 139.9x 1.0x
SQLite w/ Prep x10000. in 0.157s ( 635,271 row/s 64.8 MB/s) 125.2x 0.9x
Linux RELEASE (5 years older computer):
Ingesting 100,000 rows, totalling 10.2 MB (10,700,000 bytes)
SQLite w/o Prep fmt.... in 0.480s ( 208,274 row/s 21.3 MB/s) 41.5x 0.5x
SQLite w/o Prep snp.... in 0.540s ( 185,224 row/s 18.9 MB/s) 36.9x 0.5x
SQLite w/ Prep x1..... in 0.258s ( 387,933 row/s 39.6 MB/s) 77.2x = 1.0x
SQLite w/ Prep x10.... in 0.234s ( 427,477 row/s 43.6 MB/s) 85.1x 1.1x
SQLite w/ Prep x100... in 0.216s ( 463,239 row/s 47.3 MB/s) 92.2x 1.2x
SQLite w/ Prep x1000.. in 0.258s ( 387,841 row/s 39.6 MB/s) 77.2x 1.0x
SQLite w/ Prep x10000. in 0.264s ( 378,930 row/s 38.7 MB/s) 75.4x 1.0x
Where:
fmt = fmt::format
snp = sqlite3_snprintf (writing to stack buffer)
(Ignore the next-to-last column; it's a comparison with a different system, but used badly, hence its poor showing.)