SQLite User Forum

Write large blob, > 2GB

(1) By anonymous on 2020-08-21 13:00:06

There are 4 APIs that allow writing blobs in excess of 2 GB:

  • sqlite3_bind_blob64 from https://www.sqlite.org/c3ref/bind_blob.html
  • sqlite3_bind_zeroblob64 from https://www.sqlite.org/c3ref/bind_blob.html
  • sqlite3_result_blob64 from https://www.sqlite.org/c3ref/result_blob.html
  • sqlite3_result_zeroblob64 from https://www.sqlite.org/c3ref/result_blob.html

Yet Incremental Blob I/O is still limited to 2GB offsets.

That the count is limited to 2GB (a signed int) is OK. But the fact
that offsets cannot exceed 2GB is a real issue; that is arguably where
incrementally writing a blob is the most useful. See

  • https://www.sqlite.org/c3ref/blob_read.html
  • https://www.sqlite.org/c3ref/blob_write.html

Is there a work-around?

Any chance of sqlite3_blob_write64 being added soon?
Same for a read64 variant, of course.

PS: In this case, I have a 3.5GB blob
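
For illustration, here is a minimal sketch of the incremental-write pattern in question (the table "t" and column "data" are made up); note that sqlite3_blob_write takes both the count and the offset as plain int, which is where the ceiling comes from:

    /* Sketch only: incremental blob I/O against a hypothetical table "t"
    ** whose column "data" already holds a zeroblob of the final size. */
    #include <sqlite3.h>

    int write_piece(sqlite3 *db, sqlite3_int64 rowid,
                    const void *buf, int n, int offset)
    {
        sqlite3_blob *blob = 0;
        /* Open column "data" of row `rowid` in table "t" for read-write. */
        int rc = sqlite3_blob_open(db, "main", "t", "data", rowid, 1, &blob);
        if (rc != SQLITE_OK) return rc;

        /* Both `n` (count) and `offset` are int, so offsets past 2^31-1
        ** cannot be expressed through this interface. */
        rc = sqlite3_blob_write(blob, buf, n, offset);

        sqlite3_blob_close(blob);
        return rc;
    }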

(2.1) By Larry Brasfield (LarryBrasfield) on 2020-08-21 15:53:57 edited from 2.0 in reply to 1

Although blobs (and strings) are presently limited to length 2^31-1 (per Limits in SQLite, section 1), those ..blob64 APIs evince a clear intention to expand that limit. (Otherwise, they would be useless.) There have been recent changes allowing increased DB size as well.

Hence, to your question,

Any chance of sqlite3_blob_write64 being added soon?

the answer seems obviously to be "yes".

As to whether there is a work-around, the above-referenced limits say "no".

You may have noticed that the ..blob64 APIs were not documented except for their signatures. Perhaps a note in the function description text should state that the 2^31-1 length limit still applies. (Or, if not, the limit needs restating.)
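
To confirm that the limit still applies, a hedged sketch (assuming an already prepared statement with one blob parameter; the buffer pointer is never dereferenced because the bind is rejected up front):

    /* Sketch: a size past the length limit is refused with SQLITE_TOOBIG,
    ** even through the 64-bit binding API. */
    #include <sqlite3.h>
    #include <stdio.h>

    void check_limit(sqlite3 *db, sqlite3_stmt *stmt, const void *buf)
    {
        /* Current runtime length limit; it can never exceed 2^31-1. */
        int limit = sqlite3_limit(db, SQLITE_LIMIT_LENGTH, -1);

        sqlite3_uint64 too_big = (sqlite3_uint64)limit + 1;
        int rc = sqlite3_bind_blob64(stmt, 1, buf, too_big, SQLITE_STATIC);
        if (rc == SQLITE_TOOBIG) {
            printf("still limited to %d bytes\n", limit);
        }
    }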

(3) By anonymous on 2020-08-24 06:21:20 in reply to 2.1

If they were to increase the blob size limit, then a sqlite3_limit64 function would also need to be added.

(4) By Simon Slavin (slavin) on 2020-08-24 14:30:33 in reply to 1

Out of interest, given that you say you have a 3.5GB blob,

  1. How big would you expect an entire database to be? In other words, how big might the total size of all blobs in the same database file be?
  2. Do you expect to store more than one blob in the same row?
  3. What's your factor of write-to-read on these blobs? In other words, does the majority of data get stored but go unread? Or do you expect each blob to get read (i.e. named in a SELECT) many times?

The answers to these questions may help us recommend API calls or techniques which best suit you.

(5) By anonymous on 2020-08-24 14:36:56 in reply to 2.1

(anon OP here, BTW...)

intention to expand that limit. (Otherwise, they would be useless.)

Indeed, this is what surprises me, in hindsight.

What's the point of these 64-bit APIs if, in the end, they are
mostly useless, given the hard 2GB limit on blob/text columns?

(6) By anonymous on 2020-08-24 14:48:48 in reply to 4

Regarding #1, each DB ranges from a few MB to many GB total, mostly blobs size-wise.
And there can be many such DBs.

Regarding #2, only 1 blob per row. I'm aware of existing best practices
to keep large blobs in separate tables, and/or at the end of the row.

Regarding #3, these blobs are data, either sensor inputs, or computed outputs,
so it's often written only once, and definitely read later, one or more times.

Given the hard 2GB limit(ation), I must split large blobs into several rows,
no choice. I probably also shouldn't have 2GB blobs anyway, and should find
the appropriate threshold at which to split them. 128MB? More? Less? TBD I guess...

I know SQLite isn't exactly ideal for very large blobs, but I definitely
want to remain "transactional" with other structured data in the same DB
(the blob owners in other table(s)).
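
One possible shape for such a split, sketched here with hypothetical table names, keeps the pieces in a chunk table keyed by the owner row, so that the blob pieces and the owner can be written in the same transaction:

    /* Sketch of a chunked layout (names are hypothetical): one logical blob
    ** is stored as many pieces that live and die with their owner row. */
    #include <sqlite3.h>

    static const char *kSchema =
        "CREATE TABLE IF NOT EXISTS owner(id INTEGER PRIMARY KEY, meta TEXT);"
        "CREATE TABLE IF NOT EXISTS chunk("
        "  owner_id INTEGER NOT NULL REFERENCES owner(id),"
        "  seq      INTEGER NOT NULL,"   /* 0, 1, 2, ... piece index */
        "  data     BLOB NOT NULL,"
        "  PRIMARY KEY(owner_id, seq)"
        ");";

    int init_schema(sqlite3 *db)
    {
        return sqlite3_exec(db, kSchema, 0, 0, 0);
    }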

(7) By Richard Hipp (drh) on 2020-08-24 15:19:36 in reply to 6

Given the hard 2GB limit(ation), I must split large blobs into several rows, no choice. I probably also shouldn't have 2GB blobs, and find the appropriate threshold to split the blobs. 128MB? More? Less?

The 2GB blob size limit is probably not going to go away any time soon. You will definitely need to start splitting blobs.

As currently implemented, SQLite constructs an entire row in memory whenever it needs to read or write the last column in the row. So if you have a row that contains an N-byte blob, you'll need at least N bytes of memory (probably a bit more) in order to process that row. For your application, it might not be a problem to allocate multiple gigabytes of memory for this purpose. Yet, we have received reports from customers of applications that create 100MB rows, which later cause problems for memory-constrained (background or daemon) processes. Such a daemon process, limited to 10MB of memory, is unable to load the row with the 100MB blob, and is therefore unable to do things like run "VACUUM INTO" on that database for backup purposes. I don't know what your system is like, but for maximum compatibility, you might do well to limit the size of each blob piece to a few megabytes.
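
To make that concrete, a hedged sketch of a write loop that stores one large buffer as pieces of a few MB each, inside a single transaction (the chunk table from the sketch above, the 4MB figure, and the function name are only illustrative):

    /* Sketch: insert one large buffer as ~4MB pieces in one transaction. */
    #include <sqlite3.h>

    #define CHUNK_SIZE (4 * 1024 * 1024)   /* "a few megabytes" per row */

    int store_chunked(sqlite3 *db, sqlite3_int64 owner_id,
                      const unsigned char *buf, sqlite3_int64 total)
    {
        sqlite3_stmt *ins = 0;
        int rc = sqlite3_exec(db, "BEGIN", 0, 0, 0);
        if (rc != SQLITE_OK) return rc;

        rc = sqlite3_prepare_v2(db,
            "INSERT INTO chunk(owner_id, seq, data) VALUES(?1, ?2, ?3)",
            -1, &ins, 0);

        sqlite3_int64 off = 0;
        int seq = 0;
        while (rc == SQLITE_OK && off < total) {
            int n = (int)(total - off < CHUNK_SIZE ? total - off : CHUNK_SIZE);
            sqlite3_bind_int64(ins, 1, owner_id);
            sqlite3_bind_int(ins, 2, seq++);
            sqlite3_bind_blob64(ins, 3, buf + off, (sqlite3_uint64)n, SQLITE_STATIC);
            rc = sqlite3_step(ins);
            if (rc == SQLITE_DONE) rc = SQLITE_OK;
            sqlite3_reset(ins);
            off += n;
        }
        sqlite3_finalize(ins);

        if (rc == SQLITE_OK) return sqlite3_exec(db, "COMMIT", 0, 0, 0);
        sqlite3_exec(db, "ROLLBACK", 0, 0, 0);
        return rc;
    }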

(8) By anonymous on 2020-08-24 15:32:38 in reply to 7

Thanks for the input, Richard. I'll keep that in mind. I wasn't aware
that the entire blob needed to be read for the row to be processed,
especially if the query does not explicitly request that blob (which
is the last column, of course). But what's the point of making blobs last, then?

A few MB, e.g. 4MB, for a 3.5GB blob: that's close to 1,000 rows.
Isn't that going to slow things down? It's a trade-off I guess.

Also, I'd have naively expected VACUUM to work at the page level only,
with no need to decode the rows. I forgot about overflow page chains that
need updating when pages move, to record the new page numbers.

(9) By anonymous on 2020-08-24 15:36:30 in reply to 7

The 2GB blob size limit is probably not going to go away any time soon

But then, why the xyz64 APIs? As Larry wrote, aren't they useless?
I'm playing devil's advocate here; I don't mean to be offensive. I'd genuinely
like to know why they exist, given the long-term 2GB limit.

(10) By Richard Hipp (drh) on 2020-08-24 15:41:17 in reply to 8

It doesn't need to read the whole row if it only uses some prefix of the row. But if you need the last column, then it needs to read the whole row. And a background process that is doing "VACUUM INTO" needs to read the whole row. Also, an UPDATE or INSERT needs to have the whole row in memory as well.

If the K-th column is the right-most column in a row that you are reading, then SQLite will only read in as much as needed to cover the first K columns. So if you have a bunch of small columns up front followed by a big blob, and you read from the row but do not read the big blob, then the big blob is not loaded into memory. But if you have one small column after the big blob and you need to read that one small column, then SQLite will probably read in the big blob too. (There are some optimizations whereby SQLite can sometimes bypass reading the big blob, but they do not always apply so you shouldn't count on them.)
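
A hedged illustration of that point, with a made-up schema: keep the small, frequently-read columns ahead of the blob, and name only those columns when the blob is not wanted:

    /* Sketch (hypothetical schema): small columns first, blob last.
    **   CREATE TABLE doc(id INTEGER PRIMARY KEY, name TEXT, mtime INT, body BLOB);
    */
    #include <sqlite3.h>

    /* References only columns through `mtime`, so SQLite reads only as much
    ** of the row as covers those columns; the big `body` blob at the end
    ** is not loaded into memory for this query. */
    int prepare_header_query(sqlite3 *db, sqlite3_stmt **pOut)
    {
        return sqlite3_prepare_v2(db,
            "SELECT id, name, mtime FROM doc WHERE id=?1",   /* no `body` */
            -1, pOut, 0);
    }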

(11) By Richard Hipp (drh) on 2020-08-24 15:42:35 in reply to 9

The 64-bit APIs were added to help applications avoid 32-bit integer overflow errors.

(12) By Larry Brasfield (LarryBrasfield) on 2020-08-25 00:18:46 in reply to 11

I take it, then, that the envisioned easier approach is this: clients do the potentially overflowing arithmetic (were 32-bit integers to be used) with 64-bit integers, which are then passed to the sqlite3_something64() API. There, the possibly over-big values are checked against SQLITE_MAX_LENGTH or its lowered runtime value, whereupon the call fails for bigger values and might succeed otherwise.

I suppose that, if that check is done in many places within an application, there could be some savings relative to just doing the same check in application code. However, it seems that checking for a SQLITE_TOOBIG error and comparing a derived value against the runtime value of SQLITE_MAX_LENGTH (the sqlite3_limit() return) are similarly simple.

Those sqlite3_bind_{blob,text,zeroblob}64(...) APIs are suggestive of a new capability (as the OP incorrectly surmised) in part because they are not otherwise mentioned in the binding API doc.
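
A small sketch of the pattern described above: do the size arithmetic in 64 bits, then either pre-check against sqlite3_limit() or rely on sqlite3_bind_blob64() returning SQLITE_TOOBIG (the function name here is made up):

    /* Sketch: compute sizes in 64 bits so that combining two large pieces
    ** cannot silently wrap a 32-bit int, then check against the runtime limit. */
    #include <sqlite3.h>

    int fits_length_limit(sqlite3 *db, sqlite3_uint64 len_a, sqlite3_uint64 len_b)
    {
        sqlite3_uint64 total = len_a + len_b;     /* no 32-bit overflow here */
        sqlite3_uint64 limit =
            (sqlite3_uint64)sqlite3_limit(db, SQLITE_LIMIT_LENGTH, -1);

        /* Equivalent alternative: skip this pre-check and handle SQLITE_TOOBIG
        ** from sqlite3_bind_blob64(); both report the same condition. */
        return total <= limit;
    }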