
binary .dump format
With ZLib compressing at around 60MB/s in my tests, a 60GB DB file  
dumped to 120GB of SQL text means around 2,000s of compression time right there,  
on top of the actual .dump itself (unless the two run concurrently, e.g. via a pipe).
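
For illustration, here's a minimal sketch of that concurrent variant, assuming the `sqlite3` CLI is on PATH and a hypothetical database at `big.db`: the dump streams through a pipe and gets compressed as it arrives, so the two costs overlap instead of adding up.

```python
import gzip
import shutil
import subprocess

# Sketch under assumptions: sqlite3 CLI on PATH, database at ./big.db
# (hypothetical path). The .dump output is compressed as it streams
# through the pipe, so dump and compression run concurrently.
dump = subprocess.Popen(["sqlite3", "big.db", ".dump"], stdout=subprocess.PIPE)
with gzip.open("big.sql.gz", "wb") as out:
    shutil.copyfileobj(dump.stdout, out)  # reads and compresses chunk by chunk
dump.wait()
```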

The advantage of a custom binary dump, of table-pages only (see David's),  
or table-cells only (see hitchmanr's), is that you limit the IO, and don't  
need to decode the record cells (in the first case). But then you are on your  
own to re-assemble the DB though!
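
A rough sketch of the page-level variant, assuming a build with SQLITE_ENABLE_DBSTAT_VTAB (needed for the dbstat virtual table), a quiescent `big.db`, and a hypothetical table name `my_table`; the pages are copied raw and undecoded, and re-assembly is left out entirely:

```python
import sqlite3

# Sketch under assumptions: SQLITE_ENABLE_DBSTAT_VTAB build, quiescent
# ./big.db, hypothetical table "my_table". dbstat lists every page
# (leaf, interior, overflow) belonging to that btree.
con = sqlite3.connect("big.db")
page_size = con.execute("PRAGMA page_size").fetchone()[0]
pages = [row[0] for row in
         con.execute("SELECT pageno FROM dbstat WHERE name = ?", ("my_table",))]
con.close()

with open("big.db", "rb") as db, open("my_table.pages", "wb") as out:
    for pageno in sorted(pages):
        db.seek((pageno - 1) * page_size)  # SQLite pages are numbered from 1
        out.write(db.read(page_size))
```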

You'd need to investigate with [DB Stats](https://www.sqlite.org/dbstat.html) to find out precisely how much you'd save on that DB,  
to see if it's worth it time-wise, against the heavy dev-investment needed.
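
Something like this gives a first-order estimate (same DBSTAT-enabled-build assumption, hypothetical `big.db`): compare the bytes of actual cell payload against the full file size.

```python
import os
import sqlite3

# Sketch under assumptions: DBSTAT-enabled build, hypothetical ./big.db.
# sum(payload) is the total bytes of cell content across all pages.
con = sqlite3.connect("big.db")
payload, = con.execute("SELECT sum(payload) FROM dbstat").fetchone()
con.close()
file_size = os.path.getsize("big.db")
print(f"payload {payload / 1e9:.1f} GB of {file_size / 1e9:.1f} GB "
      f"({100 * payload / file_size:.0f}% would survive a cell-only dump)")
```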

For a completely different take, one could also use WAL mode and [Litestream](https://litestream.io/blog/why-i-built-litestream/),  
backing up the WAL segments Litestream generates. A very different approach...
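
Enabling WAL mode is a one-time, persistent switch on the database file (hypothetical `big.db` again), and is the prerequisite Litestream builds on:

```python
import sqlite3

# Hypothetical ./big.db: journal_mode=WAL is a persistent property of
# the database file, which Litestream's replication relies on.
con = sqlite3.connect("big.db")
print(con.execute("PRAGMA journal_mode=WAL").fetchone()[0])  # prints "wal"
con.close()
```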

PS: Richard mentioned they had activity in that area too, but unfortunately we haven't heard anything since.