Correct way of carrying connections over forked processes
(1) By PaulK (paulclinger) on 2023-03-30 00:53:12 [link] [source]
I'm closely familiar with https://sqlite.org/howtocorrupt.html and the recommendation NOT to carry open connections across forked processes. I'm working with a forking application that uses SQLite and want to make sure that all the handling is done correctly, but I keep running into issues that I can't resolve. The code is in Lua (using the LuaSQLite3 module) running under redbean; if you're interested, you should be able to run the code by downloading redbean (https://redbean.dev/) and then running it as "./redbean.com -i forktest.lua".
The script is below; here is the description of the issues I run into:
Everything works correctly unless I uncomment line 22 with the db:close() call. I read the recommendation to close connections before forking, but when I do this, I start getting "recovered 10 frames from WAL file ...forktest.db-wal" messages in the error log from line 39 (where the DB is re-opened after the fork), and then occasionally the dreaded "locking protocol in ..." from the forked process on line 28, where the DB is re-opened. This never happens if the DB is not closed, or if it's re-opened in the main process right after closing (which I guess has the same effect).
The db:close() call closes all open statements and then closes the DB itself using the sqlite3_close_v2 call. Everything looks correct, and yet I still run into issues when I try to follow the recommendation in the FAQ, and I can't quite figure out why.
What if, instead of closing the connection, I just make sure to never re-use the same connection from the forked process (I can check whether the current process is the same process that opened the connection in the first place)? Is that sufficient (it seems to work for me), or am I still running afoul of the recommendation?
local sqlite3 = require "lsqlite3"
sqlite3.config(sqlite3.CONFIG_LOG, function(ud, code, msg)
  if code ~= 14 then print("error log", unix.getpid(), code, msg) end
end)
local name = "forktest.db"
unix.unlink(name)
local function openDB(name)
  local sql = [[
    PRAGMA journal_mode=WAL;
    create table if not exists foo(k primary key, a, b integer);
  ]]
  local db, code, msg = sqlite3.open(name)
  if not db then error(("%s (code: %d)"):format(msg, code)) end
  -- prevent implicit closing by the GC, as it may otherwise close from a forked process
  if debug.getmetatable(db) then debug.getmetatable(db).__gc = nil end
  db:busy_timeout(1000)
  assert(0 == db:exec(sql))
  return db
end
local db = openDB(name)
assert(0 == db:exec("insert into foo (k,a,b) values (1,1,1)"))
-- assert(0 == db:close()) -- line 22 --
-- call openDB here --
local pid = unix.fork()
if pid == 0 then -- forked process
  print(unix.getpid(), "forked from", unix.getppid())
  local db = openDB(name) -- line 28 --
  print(unix.getpid(), "opened")
  for i = 1, 10 do
    assert(0 == db:exec(("update foo set a = %d, b = b+1 where k = 1"):format(i)))
    print(unix.getpid(), "updated", i)
  end
  assert(0 == db:close())
  print(unix.getpid(), "closed")
  unix.exit(0)
end
db = openDB(name) -- line 39 --
for i = 11, 20 do
  assert(0 == db:exec(("update foo set a = %d, b = b+1 where k = 1"):format(i)))
end
unix.wait(pid) -- wait for the forked process to finish
for i = 21, 30 do
  assert(0 == db:exec(("update foo set a = %d, b = b+1 where k = 1"):format(i)))
end
-- show the results
db:exec("select * from foo where k = 1", function(ud, cols, values, names)
  for i = 1, cols do print('', names[i], values[i]) end
  return 0
end)
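The "check the pid and never reuse across a fork" idea from the question above can be sketched like this (Python for illustration, since it ships both fork() and SQLite bindings; PidGuardedDB and its methods are hypothetical names, not part of any library):

```python
import os
import sqlite3

class PidGuardedDB:
    """Reopen the database if the current process is not the one that
    opened it (i.e. we are in a fork). The inherited connection is
    abandoned, never closed, matching the approach described above."""

    def __init__(self, path):
        self.path = path
        self._open()

    def _open(self):
        self.pid = os.getpid()
        self.conn = sqlite3.connect(self.path)

    def get(self):
        if os.getpid() != self.pid:
            # Forked child: abandon the inherited handle, open a fresh one.
            self._open()
        return self.conn

db = PidGuardedDB("/tmp/forktest_guard.db")
db.get().execute("create table if not exists foo(k primary key, a, b integer)")
```

In the same process, get() keeps handing back the original connection; only a forked child pays the cost of a reopen.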
(2) By Larry Brasfield (larrybr) on 2023-03-30 02:46:29 in reply to 1 [source]
If you asked how to correctly cross the Pacific Ocean on a paddle-board, I would just say there is no correct way.
I think the same answer applies here.
The fork() call has long been understood to be a blunt instrument. You may find that the "clone" system call [1] is sufficiently finer-grained to let you solve your (as-yet unstated) real problem, provided that you just leak the memory resources associated with a duplicated live sqlite3 object.
Your difficulty makes me wonder if fork() is the right tool. Maybe there is a way to achieve your true objective without fork(), or without open SQLite connections during a fork(). Your use of Lua suggests that the CPU cycles needed to reopen connections may not be a good reason for your adventure.
I doubt you will get much advice from the people who know SQLite best for doing what they recommend not be done.
- ^ The clone(...) system call is available on Linux systems. There may or may not be equivalents on other *Nix-like systems. This likely means that clone() is not nicely wrapped as a Lua function.
(6) By PaulK (paulclinger) on 2023-03-30 17:19:23 in reply to 2 [link] [source]
I wouldn't be happy with that answer: if SQLite doesn't have much of a problem with connections opened from two different processes, it may be able to deal with one of the processes being the parent of the other one. All I'm trying to do is figure out how exactly to follow the recommendation, and whether there are any further details worth incorporating to make it work correctly.
"clone" is not an option for my case for a variety of reasons.
(10) By Larry Brasfield (larrybr) on 2023-03-30 18:08:10 in reply to 6 [link] [source]
if SQLite doesn't have much of a problem with connections opened from two different processes, it may be able to deal with one of the processes being the parent of the other one.
The SQLite library is designed to deal with connections to the same DB from two different processes. The fact that one is the "parent" of the other (which I take to mean genesis via a fork() call) does not interact with that design, provided that each process has made its own sqlite3_open() call into the library. It is the duplication of that "open" state via fork(), involving locks, at least one open file (often more), and memory content, that is not designed for and certainly not tested.
All I'm trying to do is to figure out how to exactly follow the recommendation
That is simple. Do not call fork() while any sqlite3 objects are live; that is, after (or while) they are created by sqlite3_open() and before (or while) sqlite3_close() makes them not live (and frees their memory).
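That ordering can be sketched as follows (a minimal Python illustration of the flow, not a redbean/Lua implementation; path and schema are made up for the sketch):

```python
import os
import sqlite3

PATH = "/tmp/forktest_order.db"  # hypothetical scratch path

# 1. Open, use, and fully close the connection before forking.
conn = sqlite3.connect(PATH)
conn.execute("create table if not exists foo(k integer primary key, b integer)")
conn.execute("insert or replace into foo values (1, 0)")
conn.commit()
conn.close()  # no live sqlite3 objects past this point

# 2. Fork while no connection is open in either process.
pid = os.fork()
if pid == 0:
    # 3a. The child makes its own open call into the library.
    c = sqlite3.connect(PATH)
    c.execute("update foo set b = b + 1 where k = 1")
    c.commit()
    c.close()
    os._exit(0)

os.waitpid(pid, 0)
# 3b. The parent likewise reopens its own connection afterwards.
conn = sqlite3.connect(PATH)
(b,) = conn.execute("select b from foo where k = 1").fetchone()
conn.close()
```

Both processes end up as ordinary, unrelated clients of the same database file, which is the case SQLite is designed and tested for.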
(12) By PaulK (paulclinger) on 2023-03-31 05:03:02 in reply to 10 [link] [source]
Do not call fork() while any sqlite3 objects are live; that is after (or while) they are created by sqlite3_open() and before (or while) sqlite3_close() makes them not live (and frees their memory.)
I'm good with this advice and in fact started by following it, but that's exactly where I ran into the issues that prompted this post. I was only able to avoid those issues if I don't close open connections before forking (but still don't use them anymore, and don't close them from the forked process either).
I realize that the effect can be redbean/cosmopolitan specific, but in any case it's fairly easily reproducible, so I was just wondering if I was missing something or not.
(14) By Larry Brasfield (larrybr) on 2023-03-31 05:40:42 in reply to 12 [link] [source]
... I ran into issues that prompted this post. I was only be able to avoid those issues if I don't close open connections before forking ... .
If that is true, then either: (a) your connections are not truly closed; or (b) you have uncovered an exceedingly strange SQLite bug.
The latter (b) is highly unlikely since, for the SQLite library itself, once all connections are truly closed, the library maintains no data, file handles or locks related to any DB.
Case (a) is more likely; either something outside of the library is keeping a pool of DB connections, or there were still live ancillary objects associated with some connection(s) (aka "live sqlite3 object(s)") that you putatively closed. [1]
- ^ See the API doc for sqlite3_close() regarding the "zombie" state and when it exists due to undead associated ancillary objects such as prepared statements, BLOB streaming handles, and backup objects.
(3) By Rowan Worth (sqweek) on 2023-03-30 04:51:48 in reply to 1 [link] [source]
"recovered 10 frames from WAL file ...forktest.db-wal"
Here's the code which generates this message (from an old/ancient version of sqlite):
  /* If more than one frame was recovered from the log file, report an
  ** event via sqlite3_log(). This is to help with identifying performance
  ** problems caused by applications routinely shutting down without
  ** checkpointing the log file.
  */
  if( pWal->hdr.nPage ){
    sqlite3_log(SQLITE_NOTICE_RECOVER_WAL,
        "recovered %d frames from WAL file %s",
        pWal->hdr.mxFrame, pWal->zWalName
    );
  }
But it looks like this only runs when the wal-index header is malformed. Combined with your other observations this raises suspicion as to whether the filesystem you're storing the database on is fit for purpose for concurrent access by multiple processes.
If it is not locking properly that could explain the different behaviour in the parent process when the DB is closed prior to forking. Even though you are using a new connection in the parent process after forking, sqlite knows that the process still holds certain file locks via the other connection and will proceed without trying to reacquire them.
(7) By PaulK (paulclinger) on 2023-03-30 17:24:37 in reply to 3 [link] [source]
Yes, I noticed the same thing. I added an explicit call to wal_checkpoint(CHECKPOINT_FULL), which somewhat addressed the issue. All the messages now reported are for 1 frame: recovered 1 frames from WAL file...
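For what it's worth, whether a FULL checkpoint actually emptied the WAL can be checked from the pragma's result row (a Python sketch; the path is arbitrary, the pragma itself is standard SQLite):

```python
import sqlite3

conn = sqlite3.connect("/tmp/walcp_demo.db")  # hypothetical scratch path
conn.execute("PRAGMA journal_mode=WAL")
conn.execute("create table if not exists t(x)")
conn.execute("insert into t values (1)")
conn.commit()
# The pragma returns (busy, log, checkpointed): busy=0 means it ran to
# completion, and log == checkpointed means every WAL frame was
# transferred, so a later open has nothing left to recover.
busy, log_frames, ckpt_frames = conn.execute(
    "PRAGMA wal_checkpoint(FULL)").fetchone()
conn.close()
```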
Even though you are using a new connection in the parent process after forking, sqlite knows that the process still holds certain file locks via the other connection and will proceed without trying to reacquire them.
This is an interesting point. I can't really close those "other" connections, as it causes "locking protocol" errors/issues (and it's not recommended according to FAQ anyway), but it's not quite clear what else can be done.
(11) By Rowan Worth (sqweek) on 2023-03-31 03:39:20 in reply to 7 [link] [source]
Hmm, actually the "knowledge" of locks held on open DBs will be inherited by the child process after the fork(), which may put the child process in an inconsistent state -- i.e. believing it has certain locks when it actually does not (fcntl POSIX locks are not inherited across fork()). This also potentially contributes to the different behaviour.
I really think the correct thing to do here is to ensure no DBs are open during the fork(), and to dig deeper into the "locking protocol" issue to understand why it is occurring and how to resolve it properly.
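The non-inheritance of POSIX record locks across fork() is easy to demonstrate in isolation (a Python sketch; the file path is an arbitrary scratch name, not from the thread):

```python
import fcntl
import os

path = "/tmp/fcntl_fork_demo.lock"  # arbitrary scratch file
f = open(path, "w")
fcntl.lockf(f, fcntl.LOCK_EX)  # parent takes an exclusive POSIX record lock

r, w = os.pipe()
pid = os.fork()
if pid == 0:
    # The child inherited the fd, but NOT ownership of the lock:
    # a fresh non-blocking attempt conflicts with the parent's lock.
    g = open(path, "w")
    try:
        fcntl.lockf(g, fcntl.LOCK_EX | fcntl.LOCK_NB)
        os.write(w, b"acquired")
    except OSError:
        os.write(w, b"denied")
    os._exit(0)

os.close(w)
child_view = os.read(r, 16)  # the child was refused the lock
os.waitpid(pid, 0)
fcntl.lockf(f, fcntl.LOCK_UN)
```

A child process that believes it holds such a lock (because it inherited the parent's in-memory bookkeeping) is therefore operating on a false premise.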
(13) By PaulK (paulclinger) on 2023-03-31 05:35:25 in reply to 11 [link] [source]
Hmm, actually the "knowledge" of locks held on open DBs will be inherited by the child process after the fork() which may put the child process in an inconsistent state -- i.e. believing it has certain locks when it actually does not (fcntl POSIX locks are not inherited across fork()). This also potentially contributes to the different behaviour.
Yes, I read about this and thought that I'd avoid this issue by not "re-using" the opened connection, which seems to work.
In fact, so far I only found two cases when things go haywire:
- If I have a connection with a statement that is not finalized and that connection is used in a forked process (which is a reasonable thing NOT to do), and
- If I close the connection before forking (which goes against the recommendations in the FAQ and this thread); just closing/finalizing all its statements has no detrimental effects.
In fact, there isn't much that can go wrong as long as I don't close the connection; if I close the connection before forking, then at best I get the blocking behavior with "recovered X frames from WAL" messages and at worst I get a "locking protocol" issue. One of those two things is guaranteed to happen, which is a bit puzzling.
My advice based on the results I see so far: don't reuse DB connections that have open statements after fork() (in either the parent or child process), but don't close them either. I'll add more details to my test script to bring its logic closer to my real application and report back, but this is what I see so far.
(15) By Rowan Worth (sqweek) on 2023-03-31 09:08:23 in reply to 13 [link] [source]
Yes, I read about this and thought that I'd avoid this issue by not "re-using" the opened connection, which seems to work.
It's not just the open connection which is relevant though, because sqlite's knowledge of what locks are held on a particular file is process-wide rather than scoped to a particular connection.
ie. if there is somehow a lock being retained on the DB or one of the journal files, after fork() the child process will retain the knowledge that it owns this lock when in actual fact it does not. Thus, when it goes to open a new DB connection it doesn't bother trying to obtain these locks because it "knows" that it already holds them. It's possible that this is why you don't see the locking protocol issue when holding the DB open across the fork -- not because things are working correctly, but because the child process has a false view of reality which results in it not interacting with a lock that would otherwise cause problems.
I will admit it is a little far-fetched that this hypothesis could hold without resulting in some kind of corruption, but this test case is small enough that it may be able to get away with a lot of things that aren't necessarily correct. It doesn't make any sense to hit a locking protocol error when the database is opened separately by different processes; this is how sqlite is designed to be used. There is something fundamental missing from our collective understanding of the situation, and I believe chasing the SQLITE_PROTOCOL issue will illuminate it.
Another small observation: your lua test code deletes an existing .db file but not any existing journal files. I doubt this is responsible for the symptoms, but it does allow one test invocation to interfere with future invocations.
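For completeness, a test harness can clear the sidecar files along with the database between runs; a minimal sketch using SQLite's standard -journal/-wal/-shm suffixes:

```shell
# Remove the database and any stale journal/WAL/shm sidecars left over
# from a previous run before starting a fresh test invocation.
rm -f forktest.db forktest.db-journal forktest.db-wal forktest.db-shm
```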
(this will be my last contribution for awhile, I won't be checking this forum for a few weeks)
(4) By ddevienne on 2023-03-30 08:29:02 in reply to 1 [link] [source]
Hi.
First, I've never heard of redbean ;)
Second, I assume this is *nix only, right? Not Windows. Windows wouldn't do 1M anything per second that involves forking, which I'm not even sure it supports (fork+exec yes; but fork alone?).
Third, are you aware there's a singleton in the SQLite code to work around a semantic/design issue in POSIX? I worry the logic around that singleton wouldn't be fork-compatible.
Fourth, unless you fork right after an fsync, can you really be sure the DBs are in a valid state? I don't know for sure, but I wouldn't rely on it. And calling fsync 1M times won't take 1s for sure :)
But I'm mostly ignorant of SQLite internals, so take the above with a grain of salt, of course.
(9) By PaulK (paulclinger) on 2023-03-30 17:39:47 in reply to 4 [link] [source]
Windows wouldn't do 1M anything per-second that involves forking, which I'm not even sure it supports (fork+exec yes; but fork alone?).
Yes, it probably won't do 1M rps, but redbean does support fork, as it's based on the Cosmopolitan Libc implementation.
Third, are you aware there's a singleton in the SQLite code to work around a semantic/design issue in POSIX?
Yes, I looked into the documentation I could find and some of SQLite code, but couldn't see anything that would run into a conflict with what I'm doing.
Fourth, unless you fork right after an fsync, can you really be sure the DBs are in a valid state?
Not sure. Would closing all open statements help with that?
Thank you for the feedback!
(5) By MBL (UserMBL) on 2023-03-30 12:12:45 in reply to 1 [link] [source]
You may have a look at how the cleanup-after-use is done. I use a section like the following (with Lua 5.3.6):
-- note: the enclosing function and the opening condition were missing
-- from the snippet as posted; they are reconstructed here so the block
-- parses, and "use_db" is a guessed name
local function use_db(new_db)
  if new_db then
    if db and db:isopen() then -- if previous database is still open
      pcall( db.close, db )    -- then close db now
    end
    db = new_db                -- and use the new_db instead
  else
    new_db = nil               -- if new_db could not be opened then release probably used resources
  end
  collectgarbage("collect")    -- important to avoid dangling resources in lsqlite3x53.dll
  return db                    -- return the handle to the database in use
end
Assign the db handle, or assign nil to free it after closing the database, and then call the garbage collector to release all previously occupied resources.
Does this help you?
(8) By PaulK (paulclinger) on 2023-03-30 17:30:28 in reply to 5 [link] [source]
I tried the same thing, but it doesn't really work for me when I close the connection from the forked process (assuming it's open in the parent process). I ended up "abandoning" that open connection and just opening a new one (as shown in my original post). Do you by any chance have a link to the rest of the code that handles this?