SQLite User Forum

To obtain a sqlite3_blob object from a stmt?

(1) By Leandro Santiago (leandro) on 2022-09-03 11:15:04

Right now it seems that the only way to obtain a blob handle for incremental I/O is via sqlite3_blob_open(), but I was wondering whether it's possible to obtain one from a sqlite3_stmt produced by running a normal query.

Right now I'm reading the whole blob into memory (using sqlite3_column_blob()) for each selected row, which can be very inefficient for large blobs that don't need to be entirely in RAM.
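
Roughly, what I do today looks like this (simplified sketch; the assets/body names and serve_compressed() are placeholders, not real code of mine):

    /* Simplified sketch of the current approach: every blob is fully
    ** materialized in RAM before it can be decompressed and served. */
    sqlite3_stmt *stmt;
    sqlite3_prepare_v2(db, "SELECT body FROM assets", -1, &stmt, 0);
    while (sqlite3_step(stmt) == SQLITE_ROW) {
      const void *data = sqlite3_column_blob(stmt, 0);  /* whole blob in memory */
      int nBytes = sqlite3_column_bytes(stmt, 0);
      serve_compressed(data, nBytes);                   /* placeholder for my handler */
    }
    sqlite3_finalize(stmt);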

The use case I have in mind (which is not a concrete or real one) is a SQLite file with a table that contains lots of gzipped blobs; my application is a web server that decompresses them on the fly and serves them via HTTP. The blobs are obtained via normal "select..." queries.

If a sqlite3_blob could be obtained while processing the select statement, the memory usage of this application could get very low: no blob would need to be completely in memory at any time, since decompression and writing to the socket could be done together using very little memory.

Right now, with the current API, at least the original compressed blobs need to be completely in RAM.

I can imagine that such a blob would need to be read-only, as it might not even come from a real table, which probably complicates things a little bit.

I suppose this feature would require extending the API, maybe with something like:

int sqlite3_column_incremental_blob(sqlite3_stmt*, int, sqlite3_blob**);

I suppose this would create quite a bit of memory-management complication, as such a blob would probably have to be closed before processing the next result row of the select.
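
Just to illustrate what I mean, usage might look something like this (nothing here exists; the function is my made-up name from above):

    /* Purely hypothetical usage of the proposed function (it does not exist). */
    while (sqlite3_step(stmt) == SQLITE_ROW) {
      sqlite3_blob *blob;
      if (sqlite3_column_incremental_blob(stmt, 0, &blob) == SQLITE_OK) {
        /* ... read in chunks with sqlite3_blob_read(), stream to the socket ... */
        sqlite3_blob_close(blob);  /* presumably required before the next sqlite3_step() */
      }
    }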

To be very honest, I haven't used the C API in many years and nowadays I mostly use the go-sqlite3 wrapper, so I might be saying lots of nonsense here :-)

With no real use case I know it makes no sense to file this as a feature request, but I would like to hear what you folks think.

(2) By Stephan Beal (stephan) on 2022-09-03 11:40:50 in reply to 1

The use case I have in mind (which is not a concrete or real one) is a SQLite file with a table that contains lots of gzipped blobs; my application is a web server that decompresses them on the fly and serves them via HTTP. The blobs are obtained via normal "select..." queries.

This isn't an answer to your question, but just to point out: you don't generally need to decompress gzipped data before serving it. Any full-featured client can accept and decompress gzipped payloads on its own.

If the client sends you an Accept-Encoding header that contains "gzip", then you can respond with a Content-Encoding: gzip header and send it the compressed data. Every browser supports that. The question, for your use case, is mainly whether you want to support "lesser" clients and decompress the data before sending it to them.
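
For example (headers abbreviated), the exchange looks roughly like this:

    GET /some/asset HTTP/1.1
    Accept-Encoding: gzip, deflate

    HTTP/1.1 200 OK
    Content-Encoding: gzip

    <gzipped bytes sent exactly as stored in the database>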

One of many articles on the topic:

https://en.wikipedia.org/wiki/HTTP_compression

(3) By Igor Tandetnik (itandetnik) on 2022-09-03 13:16:08 in reply to 1

What's stopping you from retrieving the rowid in your select statement, and passing that to sqlite3_blob_open?
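
Something along these lines, perhaps (untested sketch, error handling mostly omitted; db, zName and the assets/body table and column names are placeholders):

    /* Fetch the rowid with a normal query, then stream the blob in
    ** chunks via the incremental-I/O API instead of loading it whole. */
    sqlite3_stmt *stmt;
    sqlite3_blob *blob;
    char buf[8192];

    sqlite3_prepare_v2(db, "SELECT rowid FROM assets WHERE name=?1", -1, &stmt, 0);
    sqlite3_bind_text(stmt, 1, zName, -1, SQLITE_STATIC);
    if (sqlite3_step(stmt) == SQLITE_ROW) {
      sqlite3_int64 rowid = sqlite3_column_int64(stmt, 0);
      if (sqlite3_blob_open(db, "main", "assets", "body", rowid, 0, &blob) == SQLITE_OK) {
        int total = sqlite3_blob_bytes(blob);
        for (int off = 0; off < total; ) {
          int n = total - off < (int)sizeof(buf) ? total - off : (int)sizeof(buf);
          if (sqlite3_blob_read(blob, buf, n, off) != SQLITE_OK) break;
          /* write buf[0..n) to the socket, or feed it to the decompressor */
          off += n;
        }
        sqlite3_blob_close(blob);   /* only buf-sized pieces were ever in RAM */
      }
    }
    sqlite3_finalize(stmt);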

(4) By Larry Brasfield (larrybr) on 2022-09-03 14:23:31 in reply to 1

Right now, with the current API, at least the original compressed blobs need to be completely in RAM.

Not so, as you will discover when you investigate Igor's pertinent suggestion.