SQLite User Forum

Bug Report: Reading long lines results in segfault

(1) By Michael Meier (distributed) on 2022-10-10 14:33:42

I have an SQL dump, produced with the sqlite3 executable, that, when read back with .read, causes sqlite3 to segfault. I have observed this behavior with both SQLite 3.22.0 and 3.39.4, the newest released version at the time of writing.

The dump file contains largish blobs of up to about 750 MiB in size. In turn this means that individual lines of the dump file can get pretty large, up to about 1.5 GiB. Note that lines of this length violate the documented "Maximum Length Of An SQL Statement" limit, even though the lines were generated by sqlite3 itself. No error is printed about exceeding the maximum line length.
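
For reference, the statement-length limit of a given build can be queried at run time with sqlite3_limit(). A quick sketch of how I checked mine (I have not changed any compile-time limits, so I assume the documented default of one billion bytes applies):

    #include <stdio.h>
    #include "sqlite3.h"

    int main(void){
      sqlite3 *db;
      if( sqlite3_open(":memory:", &db)!=SQLITE_OK ) return 1;
      /* Passing -1 as the new value only queries the current limit. */
      printf("SQLITE_LIMIT_SQL_LENGTH = %d bytes\n",
             sqlite3_limit(db, SQLITE_LIMIT_SQL_LENGTH, -1));
      sqlite3_close(db);
      return 0;
    }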

Below I present some debugger output and my preliminary analysis of the problem. I am still unsure about the exact mechanism of the failure, however. I then describe what I think the bug is and how to reproduce it.

What I can see in gdb

When I run a version of sqlite3 3.39.4 built with debug symbols on the dump file, gdb shows that the segfault happens in a memcpy called from process_input. I have printed the most important variables for your reference.

Program received signal SIGSEGV, Segmentation fault.
__memmove_avx_unaligned_erms ()
    at ../sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S:533
533	../sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S: No such file or directory.
(gdb) bt
#0  __memmove_avx_unaligned_erms ()
    at ../sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S:533
#1  0x0000000000434801 in process_input (p=0x7fffffffcaa0) at shell.c:22772
#2  0x000000000042fc72 in do_meta_command (zLine=0x7fffffffe21d ".read", 
    p=0x7fffffffcaa0) at shell.c:21083
#3  0x00000000004363b4 in main (argc=3, argv=0x7fffffffde98) at shell.c:23569
(gdb) f 1
#1  0x0000000000434801 in process_input (p=0x7fffffffcaa0) at shell.c:22772
22772	      memcpy(zSql+nSql, zLine, nLine+1);
(gdb) p zSql 
$3 = 0x7fff60542010 "INSERT INTO logs VALUES('5258d988ad8558ae','f72ae174b34e3021',3000117,'", 'x' <repeats 33 times>, "',X'02020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202"...
(gdb) p nSql 
$4 = 2132575663
(gdb) p zLine
$5 = 0x7fff3f330010 "INSERT INTO logs VALUES('4d42e9ffecca4a53','581e10cd609aabf9',3000117,'", 'x' <repeats 33 times>, "',X'02020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202"...
(gdb) p nLine
$6 = 17809521
(gdb) p nAlloc
$7 = 2138682174
(gdb) p p->lineno 
$8 = 7323
(gdb) p startline 
$9 = 6215

Some observations

There is one line that is longer than 1 GiB, line 6215. The program seems to be accumulating data from there up to the current line, 7323, at the time of failure. This is counterintuitive, because in this part of the SQL dump file all statements are only one line long. I think this happens because, for the ~1.5 GiB line (6215), only 1 GiB minus 1 byte is copied into memory due to the use of strlen30 at shell.c:22756. From then on, the following lines are accumulated into a buffer until the current SQL statement terminates, which never happens.
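
For context, strlen30() in shell.c masks the computed length down to the low 30 bits, so anything longer than 2^30-1 bytes is silently reported as shorter than it really is. I'm quoting it from memory, so treat this as a sketch rather than the exact source:

    /* Sketch of shell.c's strlen30(), from memory: the string length
    ** is clamped to 30 bits, i.e. at most 0x3fffffff (~1 GiB - 1). */
    static int strlen30(const char *z){
      const char *z2 = z;
      while( *z2 ){ z2++; }
      return 0x3fffffff & (int)(z2 - z);
    }

If that is indeed what happens, the rest of line 6215 would then be parsed as if it were further input, which would match the accumulation I'm seeing.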

I see that nSql+(nLine+1) > nAlloc. If nAlloc is the size of zSql, it is clear that the memcpy overruns the destination buffer zSql. Also, nSql+(nLine+1) has just crossed the 2 GiB boundary.
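
To illustrate with the values above (a sketch only: I am assuming nSql, nLine and nAlloc are 32-bit signed ints and that the buffer-growth check looks something like "if( nSql+nLine+2>=nAlloc )"; I have not verified either in shell.c):

    #include <stdio.h>
    #include <stdint.h>

    int main(void){
      /* Values captured in gdb at the time of the crash. */
      int nSql   = 2132575663;
      int nLine  = 17809521;
      int nAlloc = 2138682174;

      /* Bytes the memcpy actually needs, computed in 64 bits. */
      int64_t needed = (int64_t)nSql + nLine + 2;
      printf("needed  = %lld  (> nAlloc = %d)\n", (long long)needed, nAlloc);

      /* The same sum truncated to 32 bits wraps negative, so a signed
      ** 32-bit comparison against nAlloc would conclude that no growth
      ** is needed and skip the realloc.  (The cast just shows the
      ** typical two's-complement result.) */
      printf("wrapped = %d  (< nAlloc)\n", (int)needed);
      return 0;
    }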

What's the bug?

To me, the bug is that sqlite3 segfaults rather than failing gracefully. If an input line and/or statement is too large, I expect an error message instead.

There is a secondary, possible bug to be discussed: the sqlite3 binary generates dumps that it cannot read back. This happens because a blob within the allowed limits is dumped in hex format, leading to a line longer than what SQLite will accept (hex roughly doubles the size, so a 750 MiB blob already yields a ~1.5 GiB line). Maybe it is hard to set a limit here, as dumps can be generated with one version and read back with another, each version with its own limits. Maybe there are other reasons it is not solved this way. Here, the authors' opinion would be much appreciated.

Reproduction

I can provide the dump file that causes this behavior. The bzip2 compressed version is 836 KiB and expands to 7.6 GiB. How would I share such a file?

Warning: A release build of SQLite takes about 20 minutes until it crashes on my computer. The debug build running in gdb took about 50 minutes.

I have experimented with generating dump files that exhibit a certain structure leading sqlite3 to segfault. Unfortunately I have only had success with this for 3.22.0, which segfaults, but not with 3.39.4, which returns "Error: out of memory". I can provide the generator as well, if this helps.

In any case, the dump I have will reproduce the issue on 3.39.4.

(2) By ddevienne on 2022-10-11 07:01:06 in reply to 1

Funny how the stars align. I was thinking exactly the same thing.

I'm migrating GBs of SQLite DBs to PostgreSQL, and sometimes they don't pass integrity checks. Often it's just innocuous extra pages that a vacuum fixes, but other times it requires a dump to SQL text. Those DBs have large blobs too, and I was asking myself the very question you just asked.

I've long thought (and argued on this list) that SQLite needs better large-blob support, perhaps like PostgreSQL's TOAST. We're all re-inventing TOAST to avoid large blobs in SQLite, to work around its limitations, basically. That's too bad IMHO. Richard can do TOAST within SQLite much better than we can on top of it. The notion of auxiliary tables does already exist. It's definitely a large change, but one that would finally plug a weak spot in an otherwise excellent feature set. My $0.02.

(3) By Donal Fellows (dkfellows) on 2022-10-11 10:33:50 in reply to 2

I've had to do something like that too to avoid doing BLOB concatenation (as that has really terrible performance right now). The result tends to end up in 10-50 MB chunks (for application-specific reasons) and works well as I'm able to write each chunk once; reassembling the resulting multi-gigabyte binary string at the end is usually easy in the host language (or it is large enough that special measures are needed anyway, which is cool too).
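
For the curious, the write side of that boils down to something like the following (a sketch only; the chunks table, object id and 16 MB chunk size are invented for illustration, not my actual schema):

    /* Store a large payload as fixed-size chunks instead of one huge blob. */
    #include "sqlite3.h"

    #define CHUNK_SIZE (16*1024*1024)   /* 16 MB per row */

    static int store_chunked(sqlite3 *db, const char *buf, sqlite3_int64 len){
      sqlite3_stmt *st;
      sqlite3_int64 off, seq;
      int rc = sqlite3_prepare_v2(db,
          "INSERT INTO chunks(obj_id, seq, data) VALUES(1, ?1, ?2)", -1, &st, 0);
      if( rc!=SQLITE_OK ) return rc;
      for(off=0, seq=0; off<len && rc==SQLITE_OK; off+=CHUNK_SIZE, seq++){
        int n = (int)(len-off<CHUNK_SIZE ? len-off : CHUNK_SIZE);
        sqlite3_bind_int64(st, 1, seq);
        sqlite3_bind_blob(st, 2, buf+off, n, SQLITE_STATIC);
        if( sqlite3_step(st)!=SQLITE_DONE ) rc = SQLITE_ERROR;
        sqlite3_reset(st);
      }
      sqlite3_finalize(st);
      return rc;
    }

Reading back is then just a SELECT ... ORDER BY seq, concatenating on the application side.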

(4) By ddevienne on 2022-10-11 12:10:19 in reply to 3

SQLite the API supports incremental Blob IO.
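
Namely sqlite3_blob_open() / sqlite3_blob_write(). A minimal sketch, with an invented table, of writing a blob in pieces after reserving the space with zeroblob():

    #include <string.h>
    #include "sqlite3.h"

    int main(void){
      sqlite3 *db;
      sqlite3_blob *blob;
      const char part1[] = "hello ";
      const char part2[] = "world";

      if( sqlite3_open(":memory:", &db)!=SQLITE_OK ) return 1;
      sqlite3_exec(db, "CREATE TABLE t(id INTEGER PRIMARY KEY, data BLOB);"
                       "INSERT INTO t VALUES(1, zeroblob(11));", 0, 0, 0);

      /* Open a read-write handle on row 1 of t.data, write it piecewise. */
      if( sqlite3_blob_open(db, "main", "t", "data", 1, 1, &blob)==SQLITE_OK ){
        sqlite3_blob_write(blob, part1, (int)strlen(part1), 0);
        sqlite3_blob_write(blob, part2, (int)strlen(part2), (int)strlen(part1));
        sqlite3_blob_close(blob);
      }
      sqlite3_close(db);
      return 0;
    }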

But there's no equivalent SQL-level syntax for doing such incremental I/O, which .dump and restore could leverage.

Obviously that syntax wouldn't be standard, but it would avoid the extra-long lines and the maximum SQL text length limit.

Perhaps a syntax similar to multi-line literals in some languages, like triple single quotes: x''' <many lines of hex or base64 of fixed length> '''. Of course the lines are smaller, but the reader must still reach the end of the statement to process all values except the x'''...''' ones. Those would be measured for their length, skipped over, and replaced by zeroblob(N) in the statement; the incremental I/O would then be done after sqlite3_step(), either by rewinding and re-reading the SQL file(s) for those large blobs, or by re-reading from temporary file(s) accumulated at SQL-parsing time when the input is a pipe and not random-access.

Of course, that only works for simple statements, where it's obvious which row(s) the statement affected. Using the x'''...''' notation/syntax would be an error otherwise.

I'm just brainstorming here :). I have no expectations of the above ever materializing. It's probably nonsensical anyway!

The above was at the SQLite SQL level. Another idea would be to deal with it at the CLI level, with dot-commands. There's already a readfile() function, but maybe we could expose incremental blob I/O as a dot-command, combined with a shell-like HERE-string for the content (instead of a file name). The args of that dot-command would be similar to sqlite3_blob_open(), with the content added (file name or HERE-string).

The advantage of a dot-command is that it can be prototyped outside the official CLI, w/o changing SQLite.
I actually suspect someone has already done something like this somewhere.