SQLite Forum

WAL mode speed vs normal comparison

(1) By jose isaias cabrera (jicman) on 2020-04-28 16:47:50 [link] [source]

Greetings!

I have been reading about WAL mode, and I did a search on the forum for 'wal mode speed vs normal' and got nothing. Has anyone done a speed comparison of the same DB in WAL mode and in normal mode? I know it's faster, but how much faster are we talking? Thanks.

josé


(2) By ddevienne on 2020-04-28 17:05:48 in reply to 1 [link] [source]

How would you know it's faster? WAL is about concurrency, not speed.

How could it be faster for a single connection? It potentially has to re-assemble a table's (or index's) pages from across both the DB and WAL files, while in rollback-journal mode everything is in the DB file alone. Maybe it is marginally faster, but really WAL mode makes sense to avoid locking across connections.

With many concurrent connections, WAL is likely to pay off, but otherwise I suspect not.
But I'd be happy to be wrong of course.

(3) By jose isaias cabrera (jicman) on 2020-04-28 17:21:11 in reply to 2 [link] [source]

How would you know it's faster?

Good question. The first point of the overview is "WAL is significantly faster in most scenarios." So I asked. Is that point wrong?

(4) By ddevienne on 2020-04-28 17:32:31 in reply to 3 [link] [source]

A heavy-insert scenario is not faster with WAL, in my experience. That was a while ago, but that "in most scenarios" comment from the doc might be a bit misleading. My $0.02.

Whereas many concurrent reads with occasional writes will be a lot faster in aggregate, because of much less contention on locks.

(5) By Simon Slavin (slavin) on 2020-04-28 17:56:49 in reply to 3 [link] [source]

There are a bunch of multi-thread, multi-process, multi-access considerations. If you have just one thread of one process of one program accessing the database, and a ton of memory and filespace free, WAL is significantly faster. In other scenarios, which is faster depends on many things: not only what commands (read or write) you're executing simultaneously, but also things like which operating system and file system you're using.

So you could describe a bit more of what you're doing. But you can get precise answers by writing your software and testing it in both WAL and non-WAL journal modes. After all, it's simple to switch from one to the other.
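
It really is a one-line switch; a minimal sketch, assuming a hypothetical app.db:

    # Switch a database to WAL and back, assuming a hypothetical app.db file.
    import sqlite3

    con = sqlite3.connect("app.db")  # hypothetical database file
    print(con.execute("PRAGMA journal_mode=WAL").fetchone())     # -> ('wal',)
    # ... run and time your workload here ...
    print(con.execute("PRAGMA journal_mode=DELETE").fetchone())  # back to a rollback journal
    con.close()

The PRAGMA returns the journal mode actually in effect, so checking its result is a cheap sanity test.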

(6) By ddevienne on 2020-04-28 18:05:45 in reply to 5 [link] [source]

If you have just one thread of one process of one program accessing the database, and a ton of memory and filespace free, WAL is significantly faster

That's actually something I'd like to see proven somehow.
Again, I'd like to be wrong, but more meat behind that assertion is needed IMHO.

(7) By jose isaias cabrera (jicman) on 2020-04-28 18:34:03 in reply to 6 [link] [source]

If you have just one thread of one process of one program accessing the database, and a ton of memory and filespace free, WAL is significantly faster

That's actually something I'd like to see proven somehow.

This is the reason I asked. If the first point of the WAL documentation page states that it is faster in most scenarios, it would be nice to have a little side-by-side comparison of WAL vs. normal scenarios, where the scenarios might be:

  • 1 writer, 1 reader, 100K records
  • 1 writer, 5 readers, 100K records
  • 2 writers, 10 readers, 100K records

then,

  • simple write, search, and replace
  • semi-complex write, search, and replace
  • complex write, search and replace

with their corresponding response times. A nice table like that would make people love it or super love it. :-) Just thinking out loud...

josé
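
A hedged sketch of what one such run could look like, using Python's sqlite3 module with one writer thread and a few reader threads. Every name and number here (scratch file, batch size, reader count, payload) is a made-up placeholder rather than a recommended methodology, and as the replies below point out, the timings it prints depend heavily on OS, filesystem, and storage:

    # Rough "1 writer, N readers" harness, comparing DELETE vs WAL journal modes.
    # All parameters are placeholders; results vary wildly with OS, FS, and storage.
    import os
    import sqlite3
    import threading
    import time

    def run_scenario(journal_mode, readers=5, rows=100_000, batch=1_000):
        path = f"bench_{journal_mode.lower()}.db"  # hypothetical scratch file
        for suffix in ("", "-wal", "-shm", "-journal"):
            if os.path.exists(path + suffix):
                os.remove(path + suffix)

        setup = sqlite3.connect(path, isolation_level=None)
        setup.execute(f"PRAGMA journal_mode={journal_mode}")
        setup.execute("CREATE TABLE t(id INTEGER PRIMARY KEY, payload TEXT)")
        setup.close()

        done = threading.Event()

        def writer():
            con = sqlite3.connect(path, isolation_level=None)
            con.execute("PRAGMA busy_timeout=10000")
            for _ in range(rows // batch):          # insert in batch-sized transactions
                con.execute("BEGIN")
                con.executemany("INSERT INTO t(payload) VALUES (?)",
                                [("x" * 100,)] * batch)
                con.execute("COMMIT")
            con.close()
            done.set()

        def reader():
            con = sqlite3.connect(path, isolation_level=None)
            con.execute("PRAGMA busy_timeout=10000")
            while not done.is_set():                # hammer the table until the writer finishes
                try:
                    con.execute("SELECT count(*), max(id) FROM t").fetchone()
                except sqlite3.OperationalError:
                    pass  # "database is locked" under heavy contention; just retry
            con.close()

        threads = [threading.Thread(target=writer)]
        threads += [threading.Thread(target=reader) for _ in range(readers)]
        start = time.perf_counter()
        for th in threads:
            th.start()
        for th in threads:
            th.join()
        return time.perf_counter() - start

    if __name__ == "__main__":
        for mode in ("DELETE", "WAL"):
            print(f"{mode}: {run_scenario(mode):.2f} s")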

(8) By Stephan Beal (stephan) on 2020-04-28 18:39:55 in reply to 7 [link] [source]

A nice table like that would make people love it or super love it.

And the table would very likely be misrepresentative (or downright wrong) for every combination of OS, filesystem, physical storage type, SQLite version, and data workload other than the precise ones it was tested on.

(10) By jose isaias cabrera (jicman) on 2020-04-28 18:58:27 in reply to 8 [link] [source]

And the table would very likely be misrepresentative (or downright wrong) for every combination of OS, filesystem, physical storage type, SQLite version, and data workload other than the precise ones it was tested on.

I guess I am not explaining the idea well. We're not defining deep storage, OS, FS, versions, etc., here. If it is presented right, a nice vanilla table should suffice. Of course, in the end, there should be a final statement with that table explaining the exercise and what it is trying to show: how WAL speed is better in some scenarios than in others, "please test your scenario before jumping to conclusions," etc. It does not have to go deep into any specific area, just plain vanilla SQLite WAL processing. However, it would help point #1 carry weight.

(12.1) By Warren Young (wyoung) on 2020-04-29 05:44:15 edited from 12.0 in reply to 10 [source]

We're not defining deep storage, OS, FS, versions, etc., here

In that case, we can create a table of values for you which is clear, simple, and wrong.

All items on that list can affect the results significantly, and three of the four can affect the results by an order of magnitude or more:

  1. storage: This is a complex field all by itself — lone HDD vs RAID vs LVM vs SAN vs SecureDigital vs M.2... — but even that aside, there are single aspects of storage that can make an orders-of-magnitude difference, such as rotating vs solid state: 7200 RPM divided by 60 seconds per minute divided by 2 rotations per transaction yields 60 transactions per second, absolute max, hard-limited by physics, yet a lone modern SSD may achieve 10,000 TPS, and a RAID of them may achieve a million TPS. (The rotating-disk arithmetic is written out in the sketch after this list.)

    You may then object, "But doesn't all of that net out to zero if the test is done on the same storage medium?" Answer: No! Why? Because the factor "2 rotations per transaction" assumes a particular fsync behavior and access pattern in the benchmark. Change those parameters and now you get a different physics-based limitation for each concurrency mode.

  2. OS: Bare-bones Linux vs. out-of-the-box Windows probably gives the advantage significantly to Linux, simply because Windows ships with antivirus active while Linux does not. And if we're talking about consumer Windows as shipped from a given PC manufacturer, or about Windows as preconfigured by site IT, then we may be talking about orders of magnitude of slowdown.

    And once again you may object that this should net out to zero, but it is within the scope of antimalware software to treat a collection of three files on disk (DB, WAL, shared-memory file) differently than a single file (old-style single SQLite DB file).

  3. FS: If that didn't affect anything, then why do the bars flip-flop in this benchmark? That has a direct effect on this wish of yours, since it means the way you construct the benchmark affects which FS has the advantage. I can also tell you from experience that the duration of the benchmark can affect things, with performance dropping off at a nonlinear rate the longer the benchmark runs due to RAM exhaustion, O(log₂ N) data-structure depth, etc.

    And again, all of that affects the results you get in this simple table since a collection of 3 files with a different fsync pattern can easily give different runtime performance based on the FS used. ZFS, for example, treats fsync considerably differently than ext4.

  4. versions: I guess you mean SQLite versions. Yet if that has no effect on things, then why are we talking about WAL vs non-WAL here at all? Shall we compare SQLite 3.0.0 non-WAL with SQLite 3.31.1 WAL?
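
For reference, the rotating-disk arithmetic from point 1, written out; the 2-rotations-per-transaction figure is the stated assumption, and a different fsync or access pattern moves the ceiling:

    # Back-of-the-envelope limit for a rotating disk, per point 1 above.
    rpm = 7200
    rotations_per_txn = 2                     # assumed fsync/access pattern
    hdd_tps = rpm / 60 / rotations_per_txn    # 7200 / 60 / 2 = 60 transactions/sec
    print(f"HDD ceiling: {hdd_tps:.0f} TPS vs ~10,000 TPS for a lone modern SSD")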

If you want a constructive answer to this wish, then the best I can come up with is to publish a family of results so that there is at least some hope that the reader can interpolate a plausible value for their local access pattern.

I'm suggesting hundreds of tests here, arranged on a multidimensional axis. It'd be a fair bit of work just constructing the data browser for the result set.

We await your results. :)

(9) By Keith Medcalf (kmedcalf) on 2020-04-28 18:56:17 in reply to 7 [link] [source]

It entirely depends. And like everything else, the "total" time is mostly the same for the same amount of work; you are just calling in a plastic surgeon to do a "little nip and tuck here in the belly" and "let it hang out more over there in the ass".

It also depends whether you are talking about a 4-page database or a 4-trillion-page database, or perhaps somewhere in between. It also depends how your I/O is scattered across those 4 trillion pages, and how good or bad your OS is at doing I/O, particularly scatter-gather I/O (Windows falls in the canna-do-at-all category).

So WAL means WRITE AHEAD LOGGING.

In a "normal" database, writes are made randomly directly to the database file itself, and copies of the "pre-change" pages are written to a log file. Committing a transaction means deleting the log. Reads are always direct operations against the database file. One connection cannot "read" the database at the same time another is "writing" to it.

In a WAL database, writes are made sequentially to a "difference file" (not randomly) -- and sequential writes are faster than random writes (this is where the plastic surgeon is doing the nip and tuck). All reads are doubly indirect because they have to check for changes in the "difference file" and therefore operate somewhat more slowly than not having to do so (this is one of the places where the plastic surgeon lets the flab hang out). Occasionally the database needs to be "checkpointed" by copying the pages from the WAL file to the database (sequential read, random write). This is the other place where the plastic surgeon lets the flab hang out. It is also where all the extra inflammation comes from. However, you now have the advantage that a connection can "read" from the database at the same time as another connection is writing to the database (because the writer is not writing to the database, it is writing to the "difference file").

So, as you can see, it is both faster and slower all at the same time.
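
A minimal sketch of that concurrency difference, using two connections on a throwaway database (the file names and the zero busy_timeout are assumptions for illustration): in rollback-journal mode the writer's COMMIT needs an exclusive lock and fails while another connection holds an open read transaction, whereas in WAL mode it just appends to the difference file and succeeds.

    # Does a writer's COMMIT succeed while another connection holds an open read
    # transaction? Compares rollback-journal (DELETE) mode with WAL mode.
    import sqlite3

    def try_concurrent_commit(journal_mode):
        path = f"demo_{journal_mode.lower()}.db"  # hypothetical scratch file
        setup = sqlite3.connect(path, isolation_level=None)
        setup.execute(f"PRAGMA journal_mode={journal_mode}")
        setup.execute("CREATE TABLE IF NOT EXISTS t(x)")
        setup.close()

        reader = sqlite3.connect(path, isolation_level=None)
        writer = sqlite3.connect(path, isolation_level=None)
        writer.execute("PRAGMA busy_timeout=0")   # fail immediately instead of waiting

        reader.execute("BEGIN")                               # open a read transaction...
        reader.execute("SELECT count(*) FROM t").fetchone()   # ...and take a read lock/snapshot

        try:
            writer.execute("BEGIN")
            writer.execute("INSERT INTO t VALUES (1)")
            writer.execute("COMMIT")              # needs an exclusive lock in rollback-journal mode
            result = "commit succeeded"
        except sqlite3.OperationalError as e:     # "database is locked" expected in DELETE mode
            writer.execute("ROLLBACK")
            result = f"commit failed: {e}"
        finally:
            reader.execute("COMMIT")
            reader.close()
            writer.close()
        return result

    for mode in ("DELETE", "WAL"):
        print(f"{mode}: {try_concurrent_commit(mode)}")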

(11) By jose isaias cabrera (jicman) on 2020-04-28 19:07:13 in reply to 9 [link] [source]

... you are just calling in a plastic surgeon to do a "little nip and tuck here in the belly" and "let it hang out more over there in the ass".

I am rolling on the floor laughing. Thanks, Keith. This made my day. Ha ha ha ha! Wait, let me laugh in Spanish. It's more fun: Ja ja ja ja ja...