SQLite Forum

can I insert an 8GB file into sqlite?
> copying the database just means copying a single file, and insuring that the checksums match.

This is fine in principle, but once the DB grows so large that a full copy takes days, what happens when the transfer is interrupted partway through?

Even something like [`zfs send`](https://docs.oracle.com/cd/E18752_01/html/819-5461/gbchx.html) doesn't retransmit the entire filesystem on every transfer: an incremental send collects only the changed blocks and sends those. And [it's now resumable](https://zedfs.com/resuming-zfs-send/). How do you get the same features with SQLite?

> What I need is fast IO and fast http transfers

Write speed isn't going to differ greatly between the two approaches, and as for reading, I doubt you're going to find anything faster than [`sendfile(2)`](https://linux.die.net/man/2/sendfile), which doesn't work for files buried inside a DB.
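To make that concrete, here is a rough sketch of the zero-copy path I mean; the function and descriptor names are mine, not from any particular server. The kernel moves the bytes straight from the file to the socket, which is exactly the shortcut you give up once the bytes live inside database pages.

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/sendfile.h>
#include <sys/stat.h>
#include <unistd.h>

/* Stream a plain file over an already-connected socket with sendfile(2).
 * The copy happens entirely in the kernel, with no round trip through
 * userspace buffers -- nothing read back out of BLOB columns can match it. */
static int serve_file(int sockfd, const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0) {
        perror("open");
        return -1;
    }

    struct stat st;
    if (fstat(fd, &st) < 0) {
        perror("fstat");
        close(fd);
        return -1;
    }

    off_t offset = 0;
    while (offset < st.st_size) {
        ssize_t sent = sendfile(sockfd, fd, &offset, st.st_size - offset);
        if (sent <= 0) {            /* error, or file shrank underneath us */
            perror("sendfile");
            close(fd);
            return -1;
        }
    }

    close(fd);
    return 0;
}
```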

You'll notice that the "faster than FS" article entirely ignores this aspect of things.

> I feel like if theres a concern around indexing millions of files in a database, then that problem is exacerbated by storing each file in 1MiB chunks in the database.

Yes, TANSTAAFL: chunking turns each file into many rows, so the indexing burden only grows.

> the number of files stored to be somewhere in the hundreds of thousands, and the total storage size to be hundreds of gigabytes.

Hundreds of gigabytes spread across hundreds of thousands of files works out to an average file size of roughly 1 MB, which feels like a bait-and-switch relative to the 8 GB in the original thread title.

If all we're talking about is a few rare *exceptions* that break the 1 GiB barrier, then why worry about chunking the files at all? It's only when the vast majority of files are larger than that (e.g. a digital video storage archive) that you have to worry about the cost of chunking, the 1 GiB limit, etc.
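For what it's worth, here is roughly what the chunked layout under discussion looks like. The schema, names, and 1 MiB chunk size are only my illustration of the approach, not anything specified upthread:

```c
#include <sqlite3.h>
#include <stdio.h>
#include <stdlib.h>

#define CHUNK_SIZE (1024 * 1024)    /* 1 MiB per row, per the proposal above */

/* Store one large file as a series of 1 MiB BLOB rows.
 * Illustrative only: schema and names are not from the thread. */
static int store_chunked(sqlite3 *db, const char *name, FILE *in)
{
    sqlite3_exec(db,
        "CREATE TABLE IF NOT EXISTS file_chunks("
        "  name     TEXT,"
        "  chunk_no INTEGER,"
        "  data     BLOB,"
        "  PRIMARY KEY(name, chunk_no))",
        NULL, NULL, NULL);

    sqlite3_stmt *ins = NULL;
    if (sqlite3_prepare_v2(db,
            "INSERT INTO file_chunks(name, chunk_no, data) VALUES(?,?,?)",
            -1, &ins, NULL) != SQLITE_OK)
        return -1;

    unsigned char *buf = malloc(CHUNK_SIZE);
    if (!buf) {
        sqlite3_finalize(ins);
        return -1;
    }

    int chunk_no = 0;
    size_t n;

    sqlite3_exec(db, "BEGIN", NULL, NULL, NULL);
    while ((n = fread(buf, 1, CHUNK_SIZE, in)) > 0) {
        sqlite3_bind_text(ins, 1, name, -1, SQLITE_STATIC);
        sqlite3_bind_int(ins, 2, chunk_no++);
        sqlite3_bind_blob(ins, 3, buf, (int)n, SQLITE_STATIC);
        sqlite3_step(ins);
        sqlite3_reset(ins);
        sqlite3_clear_bindings(ins);
    }
    sqlite3_exec(db, "COMMIT", NULL, NULL, NULL);

    sqlite3_finalize(ins);
    free(buf);
    return chunk_no;                /* one row written per MiB of input */
}
```

Note that a single 1 GiB file becomes roughly a thousand rows this way, which is the indexing cost I was getting at above.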