SQLite Forum

BLOB Incremental I/O vs direct file system I/O

(1) By anonymous on 2021-12-15 06:13:15

I want to write a fixed-size circular file to store my data. Every data record has the same size, and once writing reaches the end of the file it wraps around, so new data overwrites the oldest data.

My idea is to use SQLite's BLOB incremental I/O to implement this circular file, and to create a table that records the begin and end positions of the data within the BLOB.
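To make the idea concrete, here is a rough sketch of the write path using the incremental BLOB API. The schema, the buffer size, and the ring_write helper are my own illustrative assumptions, not a finished design:

    /*
    ** Assumed one-time setup (hypothetical schema):
    **   CREATE TABLE ring(id INTEGER PRIMARY KEY, data BLOB);
    **   CREATE TABLE ring_meta(id INTEGER PRIMARY KEY, head INTEGER);
    **   INSERT INTO ring(id, data) VALUES(1, zeroblob(1048576));
    ** zeroblob() preallocates the fixed-size BLOB, since
    ** sqlite3_blob_write() cannot change a BLOB's size.
    */
    #include <sqlite3.h>

    #define RING_SIZE 1048576   /* fixed buffer size in bytes (assumed) */

    /* Write one fixed-size record at *head, wrapping at the end. */
    static int ring_write(sqlite3 *db, const void *rec, int recsz, int *head){
      sqlite3_blob *blob;
      int rc = sqlite3_blob_open(db, "main", "ring", "data",
                                 1 /* rowid */, 1 /* read-write */, &blob);
      if( rc!=SQLITE_OK ) return rc;

      if( *head + recsz > RING_SIZE ) *head = 0;        /* wrap around */
      rc = sqlite3_blob_write(blob, rec, recsz, *head);
      if( rc==SQLITE_OK ) *head += recsz;

      sqlite3_blob_close(blob);
      return rc;
    }

In a real program I would open the BLOB handle once and reuse it, and persist the head position in ring_meta inside the same transaction as the write.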

My questions are:

  1. What is the performance of the BLOB incremental I/O API (sqlite3_blob_open, sqlite3_blob_read, sqlite3_blob_write) compared with direct file system I/O (fopen, fread, fseek, fwrite)?

  2. How does the rollback journal work when using BLOB incremental I/O? For example, if a power failure occurred during a write to the BLOB column, what would happen?

  3. If I put the BLOB writes inside a transaction, will all the data be buffered until the transaction commits?

  4. Since I want a fixed-size file, should I store all the data in one BLOB column of a single row, or use multiple rows, one per data record? Which has the better write performance?

(2) By Stephan (stephancb) on 2021-12-15 13:52:59 in reply to 1

I don't have a direct answer to the questions, but if power failures are a concern and you want explicit transactions anyway, why not implement the circular buffer as a normal table, one row per record? Then everything is well defined and more portable. See for example this thread.
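As a minimal sketch of that approach (the schema and slot count are hypothetical): each record goes into slot seq % NSLOTS, so the table never grows beyond NSLOTS rows, and REPLACE INTO overwrites the oldest record in place:

    /* Assumes: CREATE TABLE ring(slot INTEGER PRIMARY KEY,
    **                            seq INTEGER, data BLOB);        */
    #include <sqlite3.h>

    #define NSLOTS 1000   /* number of records to keep (assumed) */

    /* Insert one record, overwriting the oldest once the table is full. */
    static int ring_insert(sqlite3 *db, sqlite3_int64 seq,
                           const void *rec, int recsz){
      sqlite3_stmt *stmt;
      int rc = sqlite3_prepare_v2(db,
          "REPLACE INTO ring(slot, seq, data) VALUES(?1, ?2, ?3)",
          -1, &stmt, 0);
      if( rc!=SQLITE_OK ) return rc;
      sqlite3_bind_int64(stmt, 1, seq % NSLOTS);   /* slot to overwrite */
      sqlite3_bind_int64(stmt, 2, seq);            /* monotonic sequence no. */
      sqlite3_bind_blob(stmt, 3, rec, recsz, SQLITE_STATIC);
      rc = sqlite3_step(stmt);
      sqlite3_finalize(stmt);
      return rc==SQLITE_DONE ? SQLITE_OK : rc;
    }

Each such statement is its own atomic transaction, so after a power failure a slot holds either the old record or the new one, never a torn write.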

If write performance has the highest priority (at the expense of data integrity in case of a power failure), then I would use a normal file with unbuffered write/read or buffered fwrite/fread (see here for a comparison), i.e. no database engine in between.
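For illustration, the buffered-stdio variant might look like this. A sketch only: the file is assumed to be preallocated to FILE_SIZE bytes, and a power failure mid-write can leave a torn record:

    #include <stdio.h>

    #define FILE_SIZE 1048576L   /* fixed file size in bytes (assumed) */

    /* Write one record at *head, wrapping at the end of the file. */
    static int ring_fwrite(FILE *f, const void *rec, size_t recsz, long *head){
      if( *head + (long)recsz > FILE_SIZE ) *head = 0;   /* wrap around */
      if( fseek(f, *head, SEEK_SET)!=0 ) return -1;
      if( fwrite(rec, 1, recsz, f)!=recsz ) return -1;
      *head += (long)recsz;
      return 0;
    }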

(3) By ddevienne on 2021-12-15 14:37:46 in reply to 1

Nobody has mentioned it yet, so have a look at this article.

See also this SO post.

You could also use an append-only file, and start a new file once it grows past a given size threshold.
Keep the last two files around, and you have bounded the disk-space usage.
Simply remove the older files, or compress/archive them instead.

With some well-placed fsync calls, the writes can be durable too.
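A rough sketch with POSIX calls, where the file names ("log.0", "log.1") and the rotation threshold are made up for illustration:

    #include <fcntl.h>
    #include <sys/stat.h>
    #include <unistd.h>

    #define ROTATE_AT (1L << 20)   /* start a new file past 1 MiB (assumed) */

    /* Append one record to log.0, rotating to log.1 past the threshold. */
    static int log_append(int *fd, const void *rec, size_t recsz){
      struct stat st;
      if( fstat(*fd, &st)==0 && st.st_size >= ROTATE_AT ){
        close(*fd);
        if( rename("log.0", "log.1")!=0 ) return -1;  /* keep previous file */
        *fd = open("log.0", O_WRONLY|O_CREAT|O_TRUNC, 0644);
        if( *fd<0 ) return -1;
      }
      if( write(*fd, rec, recsz)!=(ssize_t)recsz ) return -1;
      return fsync(*fd);   /* make the record durable */
    }

For full durability the containing directory should also be fsync'd after the rename, but the above shows the shape of it.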