SQLite Forum

Write large blob, > 2GB
> Given the hard 2GB limit, I must split large blobs into several rows; there is no choice. I probably also shouldn't have 2GB blobs in the first place, and should find the appropriate threshold at which to split them. 128MB? More? Less?

The 2GB blob size limit is probably not going to go away any time soon.
You will definitely need to start splitting blobs.
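For illustration only, here is one way such a split might be laid out. The `blob_chunks` table and its column names are hypothetical, not anything SQLite prescribes; the sketch uses Python's built-in `sqlite3` module:

```python
import sqlite3

# Hypothetical chunk table: each logical blob becomes an ordered
# sequence of small pieces keyed by (blob_id, chunk_no).
db = sqlite3.connect("example.db")
db.execute("""
    CREATE TABLE IF NOT EXISTS blob_chunks (
        blob_id  INTEGER NOT NULL,  -- which logical blob the piece belongs to
        chunk_no INTEGER NOT NULL,  -- 0-based position within that blob
        data     BLOB    NOT NULL,  -- one piece, kept far below the 2GB limit
        PRIMARY KEY (blob_id, chunk_no)
    )
""")
```

Keeping the pieces in their own table, keyed by position, means every row stays small and the original blob can be reassembled with a single ordered query.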

As currently implemented, SQLite constructs an entire row in memory 
whenever it needs to read or write the last
column in the row.  So if you have a row that
contains an N-byte blob, you'll need at least N bytes of memory (probably
a bit more) in order to process that row.  For your application, it might
not be a problem to allocate multiple gigabytes of memory for this purpose.
Yet we have received reports from customers whose applications create
100MB rows, which later cause problems for memory-constrained background
or daemon processes: a daemon limited to 10MB of memory cannot load the
row containing the 100MB blob, and so is unable to do things like run
"VACUUM INTO" on that database for backup purposes.  I don't know what
your system is like, but for maximum compatibility, you might do well to
limit the size of each blob piece to a few megabytes.
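Continuing the hypothetical `blob_chunks` sketch above, splitting at a few megabytes per piece might look like the following; `CHUNK_SIZE`, `write_blob`, and `read_blob` are invented names, and 4MB is just one plausible choice of threshold:

```python
CHUNK_SIZE = 4 * 1024 * 1024  # ~4MB per piece, in line with the advice above

def write_blob(db, blob_id, stream):
    """Store one logical blob as an ordered sequence of small pieces."""
    chunk_no = 0
    while True:
        piece = stream.read(CHUNK_SIZE)
        if not piece:
            break
        db.execute(
            "INSERT INTO blob_chunks (blob_id, chunk_no, data) VALUES (?, ?, ?)",
            (blob_id, chunk_no, piece),
        )
        chunk_no += 1
    db.commit()

def read_blob(db, blob_id, out):
    """Reassemble a blob by streaming its pieces back in order."""
    for (piece,) in db.execute(
        "SELECT data FROM blob_chunks WHERE blob_id = ? ORDER BY chunk_no",
        (blob_id,),
    ):
        out.write(piece)
```

Because no row ever holds more than CHUNK_SIZE bytes, even a process limited to 10MB of memory should be able to read, write, or back up the database.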