SQLite Forum

Access strategy for a one-thread per connection http server

(1) By anonymous on 2021-04-19 02:05:07

For a (moderate traffic) HTTP server that handles each HTTP connection with a dedicated thread, I'm wondering what good SQLite access strategies are, and what their tradeoffs look like, among these:

  1. Use multi-threaded mode and create a new db connection per http connection.
  2. Use multi-threaded mode and have a db connection pool to draw from for each http connection.
  3. Use serialized mode and one db connection shared by all the http connections.
  4. Something else I missed?

I'm curious whether anyone has experience to share about how far the different approaches scale.

Now assume you also add a prepared statement cache on top of that, like the Tcl API has. For 1) the cache would certainly be mostly useless, but for 2) it would be a good fit. Regarding 3), it would prevent independent HTTP connections from making the same requests concurrently, but I'm wondering whether it is possible to copy the struct of a prepared statement so that it can be bound by two different threads (there seems to be no dedicated API call for that)?
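
As a rough illustration of what I mean by a cache for 2), here is a minimal sketch of a per-connection statement cache in C. The names are invented and error handling is elided; the point is only that each connection owns its own cache, so statements never cross threads.

    #include <sqlite3.h>
    #include <string.h>

    #define CACHE_SLOTS 16

    typedef struct {
        sqlite3      *db;
        const char   *sql[CACHE_SLOTS];    /* SQL text doubles as the cache key */
        sqlite3_stmt *stmt[CACHE_SLOTS];
    } stmt_cache;

    /* Return a reset, ready-to-bind statement for `sql`, preparing it on a miss. */
    static sqlite3_stmt *cache_get(stmt_cache *c, const char *sql)
    {
        for (int i = 0; i < CACHE_SLOTS; i++) {
            if (c->sql[i] && strcmp(c->sql[i], sql) == 0) {
                sqlite3_reset(c->stmt[i]);
                sqlite3_clear_bindings(c->stmt[i]);
                return c->stmt[i];
            }
        }
        for (int i = 0; i < CACHE_SLOTS; i++) {
            if (!c->sql[i]) {
                if (sqlite3_prepare_v2(c->db, sql, -1, &c->stmt[i], NULL) != SQLITE_OK)
                    return NULL;
                c->sql[i] = sql;           /* assumes `sql` is a string literal */
                return c->stmt[i];
            }
        }
        return NULL;                       /* cache full; caller prepares directly */
    }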

(2) By Keith Medcalf (kmedcalf) on 2021-04-19 02:59:36 in reply to 1

Do you mean "multi-threaded mode" as curtailed-speak for "one connection per thread with interlocks disabled"?

What exactly do you mean by "serialized mode" and how do you plan to handle transaction contention between multiple threads on the same connection?

(3) By anonymous on 2021-04-19 06:56:52 in reply to 2

Do you mean "multi-threaded mode" as curtailed-speak for "one connection per thread with interlocks disabled"? [...] What exactly do you mean by "serialized mode"

I use the terms multi-threaded mode and serialized mode exactly as documented here.
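
For the avoidance of doubt, this is a sketch of how those documented modes get selected from the C API (the file name is a placeholder; error handling elided):

    #include <sqlite3.h>

    int open_db(sqlite3 **db)
    {
        /* Process-wide default; must run before the library is initialized.
           Use SQLITE_CONFIG_SERIALIZED for serialized mode instead. */
        sqlite3_config(SQLITE_CONFIG_MULTITHREAD);

        /* The mode can also be chosen per connection at open time:
           SQLITE_OPEN_NOMUTEX (multi-threaded) or SQLITE_OPEN_FULLMUTEX (serialized). */
        return sqlite3_open_v2("app.db", db,
                               SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE |
                               SQLITE_OPEN_NOMUTEX, NULL);
    }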

(5) By Keith Medcalf (kmedcalf) on 2021-04-19 08:28:42 in reply to 3

Unless you specify "SINGLETHREAD", multiple threads are allowed, provided you follow the single entrance per connection requirement. The difference between "serialized" and "multithreaded" is that the latter turns off the built-in belts and suspenders, saving a few microseconds per library call, but it will also result in application misbehaviour if you have not properly followed the single entrance per connection requirement.

That is, an application which works properly in multithreaded mode will also work properly in serialized mode, though an application which works properly in serialized mode may not work in multithreaded mode.

(6) By anonymous on 2021-04-19 13:11:21 in reply to 5

Thanks for your answer, but that was already fairly clear to me from the documentation I linked to.

Maybe my initial question could be reframed as follows:

a. What is the actual overhead of opening fresh database connections? This influences 1. vs. 2. or 3. Without a clear answer I should probably measure it myself (a rough measuring sketch is at the end of this message); I was just being lazy :-)

b. In serialized mode, what kind of concurrency do you get from a readers/writers perspective with a single connection, e.g. are multiple reads worked out in parallel?

c. In serialized mode, I couldn't find in the docs what happens with a prepared statement if multiple threads try to use it concurrently (say one resets it while another one is using it), and whether there's an easy way to copy the work done preparing the statement so each thread can simply grab its own copy to work with.

However, now that I have read the isolation page (maybe that page should be linked from the thread-safe page), I think 3. is really not a good idea. Isolation is good :-) and questions b) and c) are moot.
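
For a), here is the kind of quick-and-dirty measurement I had in mind (the file name is a placeholder and error handling is elided; it just times open + first schema use + close):

    #include <sqlite3.h>
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        const int N = 1000;
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < N; i++) {
            sqlite3 *db;
            sqlite3_open_v2("app.db", &db,
                            SQLITE_OPEN_READWRITE | SQLITE_OPEN_NOMUTEX, NULL);
            /* Running any statement forces the schema to be read,
               as a real request would. */
            sqlite3_exec(db, "SELECT count(*) FROM sqlite_master", NULL, NULL, NULL);
            sqlite3_close(db);
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ms = (t1.tv_sec - t0.tv_sec) * 1e3 + (t1.tv_nsec - t0.tv_nsec) / 1e6;
        printf("%.3f ms per open/use/close\n", ms / N);
        return 0;
    }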

(7) By Warren Young (wyoung) on 2021-04-19 13:32:37 in reply to 6

What is the actual overhead of opening fresh database connections?

The entire database schema has to be re-parsed, for one thing.

That's another good reason to adopt a connection pooling strategy alongside a thread pooling strategy. Create only $CORES × 1.5 SQLite conns + threads or so, then reuse them rather than re-create them on each HTTP hit.

If this is a web app you're talking about, realize that the HTTP hit count can be as high as one per static asset on the page, plus one for the basic HTML page, plus one for each dynamic HTTP hit. This can be a hundred hits per "page", easily. You really really do not want to be starting and stopping a thread + SQLite conn per HTTP hit.
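
A minimal sketch of the kind of pool I mean, using pthreads (names invented, error handling and pool setup elided): workers check a handle out for one request and put it back afterwards.

    #include <sqlite3.h>
    #include <pthread.h>

    #define POOL_SIZE 8   /* roughly CPU cores x 1.5, per the sizing above */

    typedef struct {
        sqlite3        *conns[POOL_SIZE];
        int             free_count;
        pthread_mutex_t lock;
        pthread_cond_t  available;
    } conn_pool;

    static sqlite3 *pool_checkout(conn_pool *p)
    {
        pthread_mutex_lock(&p->lock);
        while (p->free_count == 0)
            pthread_cond_wait(&p->available, &p->lock);   /* all handles busy: wait */
        sqlite3 *db = p->conns[--p->free_count];
        pthread_mutex_unlock(&p->lock);
        return db;
    }

    static void pool_checkin(conn_pool *p, sqlite3 *db)
    {
        pthread_mutex_lock(&p->lock);
        p->conns[p->free_count++] = db;
        pthread_cond_signal(&p->available);
        pthread_mutex_unlock(&p->lock);
    }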

are multiple reads worked out in parallel?

It depends on the concurrency mode.

For your application, you probably want WAL mode. Its concurrency behavior is described in that document. (§2.2)

If you need rollback or another SQLite concurrency mode, then the concurrent performance characteristics differ.
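
Switching to WAL is a one-time pragma per database file; something like this sketch at startup is enough (the setting persists in the file afterwards):

    #include <sqlite3.h>

    /* Sketch: put an already-open handle's database into WAL mode. */
    static int enable_wal(sqlite3 *db)
    {
        return sqlite3_exec(db, "PRAGMA journal_mode=WAL;", NULL, NULL, NULL);
    }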

I couldn't find in the docs what happens with a prepared statement if multiple threads try to use it concurrently

I'd expect it to crash each time the underlying SQLite schema changed out from under the program, causing SQLite to re-parse each prepared statement still active, thus causing race conditions in all the other threads also using that prepared statement.

So: don't.

(8) By Keith Medcalf (kmedcalf) on 2021-04-19 19:02:06 in reply to 6

In serialized mode, what kind of concurrency do you get from a readers/writers perspective with a single connection, e.g. are multiple reads worked out in parallel?

Exactly and precisely the same concurrency as you would get with multithreaded mode. Only one concurrent entrance per connection is permitted. The difference is that with the default serialized mode, the built-in belt and suspenders will enforce this requirement, ensuring that there is only one concurrent entry per connection -- the second (and subsequent) comers will have to wait. In multithreaded mode the library does not protect itself against you violating the single concurrent entrance per connection requirement, and thus if you do so, all hell may break loose, at your own peril.

In serialized mode, I couldn't find in the docs what happens with a prepared statement if multiple threads try to use it concurrently (say one resets it while another one is using it), and whether there's an easy way to copy the work done preparing the statement so each thread can simply grab its own copy to work with.

This cannot happen since you can have only a single concurrent entrance per connection, and a statement is a child of a connection.

That is to say that a statement is a single context prepared from a single connection context. You may call this one single context sequentially from multiple threads but it is a single context. If you attempt to enter the same context simultaneously from multiple threads then what happens depends on whether you have "belts and suspenders" turned on.

You cannot execute the same statement context concurrently (there is only one).
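
To make that concrete, a sketch of the per-thread pattern (file name and query are placeholders; error handling elided): each thread prepares its own statement from its own connection, so no statement context is ever entered from two threads at once.

    #include <sqlite3.h>

    static void *worker(void *arg)
    {
        (void)arg;
        sqlite3 *db;
        sqlite3_stmt *stmt;

        sqlite3_open_v2("app.db", &db,
                        SQLITE_OPEN_READWRITE | SQLITE_OPEN_NOMUTEX, NULL);
        sqlite3_prepare_v2(db, "SELECT name FROM users WHERE id = ?1", -1, &stmt, NULL);

        /* ... bind, step, and reset `stmt` as often as this thread likes ... */

        sqlite3_finalize(stmt);
        sqlite3_close(db);
        return NULL;
    }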

(9) By anonymous on 2021-04-19 19:59:38 in reply to 8

Thanks all for your inputs. Things are getting clearer now.

(4) By Warren Young (wyoung) on 2021-04-19 07:40:34 in reply to 1

moderate traffic…each http connection using a dedicated thread

How many HTTP servers designed to operate at scale — your qualifier, mind! — are you aware of that do that?

There's a reason the number is small: the default stack size on "server" class OSes tends to be up in the megabytes. With only 1000 simultaneous conns — a value that some may well consider "moderate" — you're chewing up gigs of RAM just for the per-thread call stacks: at a common 8 MB default, that's 8 GB reserved for stacks alone.

You can lower this value at compile time and/or at run time, at an increased risk of blowing the stack if you choose a value that's too low, but that just brings you to the next reason no serious HTTP server does this: managing thousands of active threads wastes time on context switches.
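
(For completeness, lowering it at run time looks something like this sketch; `worker` is a stand-in for your thread body, and the 256 KiB figure is only an example you'd have to validate against your deepest call chain.)

    #include <pthread.h>

    extern void *worker(void *);   /* hypothetical request-handling thread body */

    static int spawn_small_stack_thread(pthread_t *tid)
    {
        pthread_attr_t attr;
        pthread_attr_init(&attr);
        pthread_attr_setstacksize(&attr, 256 * 1024);   /* 256 KiB instead of the default */
        return pthread_create(tid, &attr, worker, NULL);
    }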

A far more sensible design is to set up a thread pool that's no larger than a small multiple of the CPU core count. For most workloads, a multiple of 1-2 is as high as you want to go. No higher value gives greater performance, and beyond a certain limit, the overheads start actively eating into whatever performance you've gained by going with a multi-threaded design in the first place.

To bring this back on topic for the forum, you would have a single SQLite conn per thread, lent to the HTTP I/O handler along with everything else it's temporarily assigned from the pool.
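
Sketched out (with the file name as a placeholder, `next_request` and `handle_request` standing in for the server's own queueing and handler code, and error handling elided):

    #include <sqlite3.h>

    typedef struct request request;                    /* opaque; owned by the server */
    extern request *next_request(void);                /* hypothetical: blocks on the work queue */
    extern void handle_request(request *, sqlite3 *);  /* hypothetical HTTP I/O handler */

    /* Each pool thread owns one connection for its whole lifetime and lends it
       to every request it services. */
    static void *pool_worker(void *arg)
    {
        (void)arg;
        sqlite3 *db;
        sqlite3_open_v2("app.db", &db,
                        SQLITE_OPEN_READWRITE | SQLITE_OPEN_NOMUTEX, NULL);

        for (;;) {
            request *req = next_request();
            if (!req)
                break;                                 /* shutdown signal */
            handle_request(req, db);
        }

        sqlite3_close(db);
        return NULL;
    }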

Now realize that you're not far from asking one of the lowest-numbered items in the SQLite FAQ. Why do you suppose it got answered so early, and in quite that way?

That brings us to Major Option #2: go with an event-driven design and skip threads entirely. If your notions of "moderate" and "scale" are low enough, you may never even peg a single CPU core, and until you do, there's not a whole lot of point taking on the pain a multi-threaded design will cause you.

Yes, will, not may or might. Read the paper linked from the FAQ item before you even think about coming back with an argument against that claim. And if you do decide to, realize you're going up against a Berkeley CS prof in doing so.