SQLite Forum

Best way to observe database operations

(1) By example-user on 2020-05-26 13:03:27

I have two processes writing to a single SQLite file.

I'd like to get an event/notification of any inserts, updates, or deletes per table when the other process writes to the file.

What's the most efficient way to do this?

E.g. Process A inserts x rows, Process B gets an event that x rows were inserted (and the same for updates and deletes).

I think it would be a combination of triggers to log any create/update/delete operations, and using the OS to watch for file changes?

Questions: A. Is there a version number for the SQLite file so I can detect that another process has written to it (ignoring the current process's changes)?

B. Is this possible to do in a general way without editing the database schema or using triggers?

(2) By Willem (sn0wbl1nd) on 2020-05-26 15:13:35 in reply to 1

"Processes", so different threads. Compile for multi-threading and use the pre-update hook.

See also:

https://sqlite.org/threadsafe.html

https://www.sqlite.org/capi3ref.html#sqlite3_preupdate_count
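
For reference, a minimal sketch of what installing the pre-update hook looks like. This requires an SQLite build compiled with SQLITE_ENABLE_PREUPDATE_HOOK; the file and table names here are made up. Note the hook only fires for writes made through the connection it is installed on:

  /* Build against an SQLite compiled with the hook enabled, e.g.:
   *   gcc -DSQLITE_ENABLE_PREUPDATE_HOOK demo.c sqlite3.c -o demo */
  #include <stdio.h>
  #include "sqlite3.h"

  /* Called once per row about to be inserted, updated, or deleted
   * through THIS connection only. */
  static void on_preupdate(void *ctx, sqlite3 *db, int op,
                           const char *zDb, const char *zTable,
                           sqlite3_int64 keyOld, sqlite3_int64 keyNew){
    const char *what = (op==SQLITE_INSERT) ? "INSERT"
                     : (op==SQLITE_UPDATE) ? "UPDATE" : "DELETE";
    printf("%s on %s.%s (rowid %lld -> %lld)\n", what, zDb, zTable,
           (long long)keyOld, (long long)keyNew);
  }

  int main(void){
    sqlite3 *db;
    if( sqlite3_open("demo.db", &db)!=SQLITE_OK ) return 1;
    sqlite3_preupdate_hook(db, on_preupdate, NULL);
    sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS t(x);"
                     "INSERT INTO t(x) VALUES(1);", NULL, NULL, NULL);
    sqlite3_close(db);
    return 0;
  }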

(5) By example-user on 2020-05-26 18:04:27 in reply to 2

My understanding is (please correct me if I'm wrong):

OS processes and threads are distinct. A single OS process can have many threads. Threads within a single process can share memory (and SQLite C pointers to connections).

Two processes cannot easily share C pointers.

Thanks for the sqlite3_preupdate_hook suggestion - I think this only works on the connection it is set on, though? If I have another process with another connection, it will not fire?

(6.1) Originally by Willem (sn0wbl1nd) with edits by Richard Hipp (drh) on 2020-05-26 18:44:21 from 6.0 in reply to 5

If they are in different processes, then they are certainly on different threads.

Yes, hooks only act on the handle for which they were configured, but that is what you want, I think. You open the database twice:

db_client <-- sqlite3_open_v2()
db_sys <-- sqlite3_open_v2()

The client opens db_client in process A. Install the preupdate hook on db_sys in process B. No need to share pointers; the sqlite3 library will do the rest.

(9) By example-user on 2020-05-26 19:31:49 in reply to 6.1

I'll try it, but according to these links update hooks only work within a single process (in your example, my db_sys hook would not fire on writes made through db_client?).

https://stackoverflow.com/questions/48378859/sqlite-update-hook-in-a-multi-process-application

https://news.ycombinator.com/item?id=19469033

(12) By Willem (sn0wbl1nd) on 2020-05-26 19:48:03 in reply to 9

Oh I see. Yes, that's right. As far as I know you cannot "tag along" with a process that opened the db file separately. Triggers would probably be the way to do that.

I do think the better alternative is to have your client interact via an instrumented handle, but that is probably because I have limited experience with implementing triggers.
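
For completeness, trigger-based change logging might look something like this - a sketch only, with made-up table and column names (a hypothetical "items" table being observed):

  /* Sketch: a change-log table plus triggers that record every
   * insert/update/delete on "items", no matter which process or
   * library performs the write. */
  #include "sqlite3.h"

  static int install_change_log(sqlite3 *db){
    return sqlite3_exec(db,
      "CREATE TABLE IF NOT EXISTS change_log("
      "  id     INTEGER PRIMARY KEY,"
      "  tbl    TEXT, op TEXT, row_id INTEGER,"
      "  at     TEXT DEFAULT (datetime('now')));"
      "CREATE TRIGGER IF NOT EXISTS items_ai AFTER INSERT ON items BEGIN"
      "  INSERT INTO change_log(tbl,op,row_id) VALUES('items','I',new.rowid);"
      " END;"
      "CREATE TRIGGER IF NOT EXISTS items_au AFTER UPDATE ON items BEGIN"
      "  INSERT INTO change_log(tbl,op,row_id) VALUES('items','U',new.rowid);"
      " END;"
      "CREATE TRIGGER IF NOT EXISTS items_ad AFTER DELETE ON items BEGIN"
      "  INSERT INTO change_log(tbl,op,row_id) VALUES('items','D',old.rowid);"
      " END;",
      NULL, NULL, NULL);
  }

The observing process then polls change_log for ids greater than the last one it saw. The obvious downside, per the original question's point B, is that this requires editing the schema.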

(3) By Warren Young (wyoung) on 2020-05-26 17:06:17 in reply to 1

I have two processes writing to a single SQLite file.

Are they wholly unrelated, or can they set up some sort of IPC channel?

If so, then my first choice here would be to create a message queue so that the DB modifying process can tell the DB reading process, "Hey, I just added record IDs 42 and 43!" or "Hey, record ID 44 just got nuked."

Several of the common message queue implementations are asynchronous and persistent, so both processes don't even have to be running for this to work. One side just stuffs its message into the queue, and the other pulls messages out in its own time until the queue is empty. This is not only convenient, it creates a decoupling point that can improve system scalability and stability as compared to a superficially similar system based on synchronous blocking IPC.

...using the OS to watch for file changes?

If you had to do it that way, then I'd build a message queue in SQLite. A common sort of scheme is to create a "work queue" table with timestamps that get updated on work item creation, work start time, and work completion. You index these columns so it's efficient to ask the DB questions useful to your application.

For example, you may wish to ask the work queue, "What work items have been added since the last time I checked, '2020-05-26 15:54:58'?" or "Give me a list of work queue items where neither the started-time nor the finished-time column is NULL."
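
As a sketch, with schema and names invented for illustration:

  /* Hypothetical work-queue table: timestamps for creation, start, and
   * completion, indexed so the polling queries above stay cheap. */
  #include "sqlite3.h"

  static int create_work_queue(sqlite3 *db){
    return sqlite3_exec(db,
      "CREATE TABLE IF NOT EXISTS work_queue("
      "  id          INTEGER PRIMARY KEY,"
      "  payload     TEXT,"
      "  created_at  TEXT DEFAULT (datetime('now')),"
      "  started_at  TEXT,"
      "  finished_at TEXT);"
      "CREATE INDEX IF NOT EXISTS wq_created  ON work_queue(created_at);"
      "CREATE INDEX IF NOT EXISTS wq_finished ON work_queue(finished_at);",
      NULL, NULL, NULL);
  }

  /* "What was added since I last checked?" */
  static const char *NEW_WORK_SQL =
    "SELECT id, payload FROM work_queue WHERE created_at > ?1;";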

If these suggestions are entirely inappropriate to your application, that is because you've posted an XY Problem. Instead of telling us how you expect the solution to be accomplished, tell us the actual problem and what you've tried in solving it.

(4.1) By example-user on 2020-05-26 18:30:51 edited from 4.0 in reply to 3

You're right that I asked an XY question, but that is because Y is a general solution/tool that I can apply to a large set of specific X's. The general X is:

IPC without a server, message bus, or another process to receive messages.

You have two processes, one writes to a SQLite file using vanilla SQLite. The other process must respond and run a function based on certain create/update/delete operations.

I do not have control over the first process, but I can use any code in the second process.

As an example, the first process could be a GUI database tool, and the second process could be something updating a remote store. The user deletes a row in the GUI, the second process detects this from the file, and deletes something in a remote store.

Using vanilla SQLite is important in the first process as it would allow any SQLite library to write to the file.

I think the issue is that unless the first process records the change event, the second process will not know what changed, and so will need to run an expensive diff operation by checking all the rows against the older snapshot.

This is mostly an experiment.

(7) By Warren Young (wyoung) on 2020-05-26 18:35:24 in reply to 4.0

It sounds like you're trying to reinvent distributed DBMSes here. Why not use an existing system such as Bedrock?

At the very least, study its design and understand why it is designed that way before you begin work on your reinvented wheel.

Distributed computing is seriously hard.

(8) By example-user on 2020-05-26 18:58:46 in reply to 7

I'm just trying to take SQLite's idea of "will persist data without a server", and add on top "will respond to write events to the SQLite file". All within a single OS - no distribution.

I think maybe I confused things with the example - the "remote store" may just be another GUI that gets updated with whatever is in the SQLite file - like a live stats dashboard.

I think there are only two ways to do this:

  1. Modify writers to write diff events to the file.

  2. Compute diff events in the reader by some type of comparison with an older snapshot.

I do not want to do 1 because that means I cannot observe SQLite files from other software and respond to their writes. Bedrock is in category 1 above - each node writes its diff events to the file/system.

I'm asking if there is an alternative to 2, as it does not sound feasible for large databases.

(11) By Warren Young (wyoung) on 2020-05-26 19:44:09 in reply to 8

like a live stats dashboard.

You may find that simply re-generating the stats view from the current version of the DB is really fast.

(And if not, you probably needed to optimize that query anyway!)

That's not idle speculation. This very forum software has some statistics-generating pages that are dashboard-like, which work by querying a SQLite DB. I just hit a fairly heavy one of these, which did all of the server-side work in 12 ms.

Therefore, you may not actually need instant notification: the reader and writer are on the same machine, so just have the reader re-read the current data from the written DB on each screen repaint. A typical status dashboard updates, what, once a second at most?

I wouldn't be surprised if the cost of the DB hit is actually less than the cost of the GUI update in this case: all of the recently-updated DB data is still in cache, after all.

(13) By example-user on 2020-05-26 20:08:42 in reply to 11

Yeah, I think that is probably the easiest route.

Reader:
- Query tables, store results in RAM.
- Watch the file for changes.
    - On change, re-run query, diff with RAM results.
    - Take actions on diff result (like update the view).

Writer:
- Write to tables as normal.

(14.1) By Warren Young (wyoung) on 2020-05-26 20:58:39 edited from 14.0 in reply to 13

On change, re-run query, diff with RAM results.

I don't see the need for the diff. Just repaint with the current info any time the DB changes, as long as the time since last update is greater than the shortest allowed repaint time.

That lower limit should probably be no smaller than about 50 ms, the shortest update interval that a human can readily notice. In specialized cases, humans can be faster, but it is also the case that too-rapid updates are hard for humans to track. Thus DMMs (digital multimeters), which rarely update more than 3 times a second.

For some "dashboard" type operations, you have no choice but to recompute the displayed results from the full data set on each repaint anyway. For instance, if you're showing a running average of some parameter, you need to recompute this over the entire history back to the cutoff point when a new data point comes in.

To take a simple case of a regular average:

  sqlite> create table data(n);
  sqlite> insert into data(n) values (1),(2),(3);
  sqlite> select avg(n) from data;
  2.0

This gives the correct result, 2.

Now append 4 to the data array and recompute:

  sqlite> insert into data values(4);
  sqlite> select avg(n) from data;
  2.5

My question then to you is, using a diff-based algorithm, how do you take only the 2.0 from the first result and the new data point (4) and still get the new correct average, 2.5? Unless you know some trick of arithmetic I don't, you can't: you need all four data points to compute the correct answer.

By the way, that 12 ms result I mentioned above? That was on a "chilled" DB: not fully cold, but not fully prepared to answer that particular query from RAM, either. Re-running it twice more caused the results computation to occur first in 8 ms and then 7.2 ms.

(15.2) By example-user on 2020-05-26 21:55:16 edited from 15.1 in reply to 14.1

I meant computing the difference between two queries.

So the first time you run a query that populates the view initially.

Subsequent changes only update the part of the view that changed.

In the select avg(n) from data case:

  • initial: {a: 2}
  • diff: {update: {a: 2.5}}
  • diff ...
  • diff ...

I am thinking of views in terms of how they work on the web - you have a tree-like document. When your data changes you mutate the smallest number of nodes.

So it's like using https://en.wikipedia.org/wiki/Copy-on-write for both the view and the JSON-like structure in your language. If you have a diff event stream, you can update only what has changed and keep the unchanged portion immutable.

I think you're using the view model where you paint every pixel on the canvas, which means re-generating 100% of the frame.

(16) By Warren Young (wyoung) on 2020-05-26 21:49:42 in reply to 15.0

initial: {a: 2}... diff: {update: {a: 2.5}}

Sure, but who computes the "update" node, and from what data?

No matter what, for this particular case, someone has to do a full table scan each time an update is needed.

It's likely less efficient to do it on the producer side anyway. If the DB is being updated 100 times a second, you don't want 100 "update" events per second in the DB, each computed from its own table scan; you want ~33 new rows accumulating per third of a second, with one table scan per repaint on the consuming side.

...the web...mutate the smallest number of nodes.

That's an inapt comparison.

A web server is typically 10-20 ms away, so anything you can do to reduce the number of round trips, you should absolutely do.

Your SQLite DBMS is orders of magnitude closer to your code. With sufficient RAM and an SSD to back it, I'd expect at least 3 orders of magnitude improvement for equivalent work.

Networks are SSSSLLLOOOOWWW.

...keep the unchanged portion immutable.

Laudable in principle, but only worth doing if it's actually a net improvement, all things considered.

Part of "all things" is developer time.

(17) By example-user on 2020-05-26 22:08:47 in reply to 16

but who computes the "update" node, and from what data?

I was thinking of a light layer over the SQLite FFI, so something like:

watch_query("select..", handleEventFunction)

This would be the same process, but possibly different languages (E.g. C inside a Python program).
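
In C, that layer's interface might look something like this - purely an interface sketch, with invented names (nothing like this exists in SQLite itself; an implementation would re-run the query via polling or hooks, elided here):

  #include "sqlite3.h"

  /* Invoked with a diff of the query's result set each time the
   * underlying database changes. */
  typedef void (*watch_cb)(void *ctx, const char *diff_json);

  /* Re-runs `sql` whenever the database changes and hands the callback
   * the computed diff; returns an opaque handle used to cancel. */
  void *watch_query(sqlite3 *db, const char *sql, watch_cb cb, void *ctx);
  void  unwatch_query(void *watch_handle);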

I think there would need to be rate limiting. If there are 100 writes per second, you may want to wait until the writes stop and then re-run the query.

Laudable in principle, but only worth doing if it's actually a net improvement, all things considered.

This is how React and most other front-end JS frameworks work by default. Most Google web apps do this, for example. It is worth doing because the API between the JS VM and the DOM is slow compared to a JS function just updating a JSON structure - so React computes the smallest diff, then uses the slow DOM API to write the changes all in one batch.

A web server is typically 10-20 ms

The minimum round trip to the closest CDN is likely 35 ms. But any solution has to accept this.

Personally I prefer the diff-based API because:

  • If your result set has 1000 rows, and only a few change, you are only sending the changes over the APIs.
    • SQLite -> Host Language -> Network, maybe -> View

(18) By Keith Medcalf (kmedcalf) on 2020-05-26 22:41:46 in reply to 14.1

The algorithm is called "Successive Approximation to the Mean" and is one of the few accurate ways to calculate a floating point average that does not have the significant pathological behaviours associated with the schoolboy sum/count method.

let x be the sample,
let n be the count of samples,
let m be the current approximation to the mean (average)

then, for the first sample
m = x
n = 1

for each new sample to be incorporated in the mean
n = n + 1
m = m + ((x - m) / n)

for each prior sample to be removed from the mean
n = n - 1
m = m - ((x - m) / n)

Step 1: x is 1:
n = 1
m = x = 1

Step 2: x is 2
n = n + 1 = 1 + 1 = 2
m = m + ((x - m) / n) = 1 + (1 / 2) = 1.5

Step 3: x is 3
n = n + 1 = 2 + 1 = 3
m = m + ((x - m) / n) = 1.5 + ((3 - 1.5) / 3) = 1.5 + (1.5 / 3) = 1.5 + 0.5 = 2

Step 4: x is 4
n = n + 1 = 3 + 1 = 4
m = m + ((x - m) / n) = 2 + ((4 - 2) / 4) = 2 + (2 / 4) = 2 + 0.5 = 2.5

You can compute most statistical functions using running computations as well, including the various means (arithmetic, absolute, RMS, logarithmic, etc.), kurtosis, skew, variance, and standard deviation, and probably more. (Though except for the arithmetic mean/average, only the intermediates are "running"; you need to do a final calculation to get the actual result from those intermediates whenever you need it.)
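
In C, the update and removal rules above might be transcribed like this (a sketch; the names are mine):

  /* Running arithmetic mean: m is the current approximation to the
   * mean over n samples.  Start from a zero-initialized struct. */
  typedef struct { double m; long n; } RunningMean;

  /* Incorporate a new sample: n = n + 1; m = m + ((x - m) / n). */
  static void rm_add(RunningMean *r, double x){
    r->n += 1;
    r->m += (x - r->m) / r->n;
  }

  /* Remove a prior sample: n = n - 1; m = m - ((x - m) / n).
   * (Assumes at least one sample remains, so n stays >= 1.) */
  static void rm_remove(RunningMean *r, double x){
    r->n -= 1;
    r->m -= (x - r->m) / r->n;
  }

Feeding in 1, 2, 3 and then 4 with rm_add reproduces the 2.0 and then 2.5 from the avg(n) example above, one sample at a time.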

(19) By Andreas Kupries (andreas-kupries) on 2020-05-26 23:05:09 in reply to 18

Welford's incremental variance

A bit later on the same page there are also a few higher-order statistics.

See also John D. Cook.

(10) By Willem (sn0wbl1nd) on 2020-05-26 19:32:38 in reply to 7

Thanks! That is really cool.

(20) By anonymous on 2024-02-09 05:06:13 in reply to 1

So, while searching for how to do something similar, I ended up stumbling onto this thread before I found a reasonably acceptable answer.

I need notifications of database changes to update a user interface with fresh data. Basically, I am turning an already existing app that uses a SQLite database into a real-time data server, and I don't want to put new code into the existing app, because there's no way it's going into the main app, and I don't want to fork it and have to maintain that.

So, here's what I'm doing:

At my server's startup, I open a read-only connection (I wouldn't want to accidentally write and stomp on the other app's data!), and issue the statement PRAGMA data_version;

This will initially return data_version 1. Now, as long as I keep that connection open, if anyone else that isn't me writes to the database, the next time I do PRAGMA data_version I will get back a data_version that is apparently incremented for each commit that has occurred in another process (the app that originally created this database).

So, to avoid hammering the SQLite database and my listening clients, I believe this is the "most performant" method:

Do an operating-system-level watch on the directory that contains the database file. When I detect a rename operation performed on the "filename.db-journal" file, it is an indicator that something has happened. I then fire off a PRAGMA data_version statement and compare the new data_version to the last one. If it's different, then I perform a full query -- through the original application, not just by accessing the database (I don't want to have to maintain any SQL!!) -- of all the data that my server is interested in.

Currently, whenever that update happens, I just shuffle the entire set of data directly to the client. To make this more performant, I will be implementing a few more steps, but this is just "exploratory" level code right now.

Additional steps:

  1. debounce the database access by a second or so -- I don't need instantaneous responses, so I can afford to wait a second for database operations to stop before sending my query
  2. cache the previously read data, do a deep comparison between the current database information and the old, and only send my server's clients the changes to the state, instead of dumping the entire state on them

Et voilà: a non-realtime REST API backed by a SQLite database store is now a nearly realtime API that will respond within a second or so to all the changes we care about.
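
For what it's worth, the data_version part of that, sketched in C (the filename is invented; the OS directory watch and the debounce are elided since they are platform-specific):

  /* A read-only connection whose PRAGMA data_version changes only when
   * some OTHER connection commits to the database file. */
  #include "sqlite3.h"

  static sqlite3 *g_db;
  static int g_last = -1;

  static int data_version(void){
    sqlite3_stmt *st; int v = -1;
    if( sqlite3_prepare_v2(g_db, "PRAGMA data_version;", -1, &st, NULL)
        ==SQLITE_OK ){
      if( sqlite3_step(st)==SQLITE_ROW ) v = sqlite3_column_int(st, 0);
      sqlite3_finalize(st);
    }
    return v;
  }

  /* Call from the (debounced) file-watch callback: returns nonzero if
   * another process has committed since we last looked. */
  static int changed_by_someone_else(void){
    int v = data_version();
    if( v==g_last ) return 0;
    g_last = v;
    return 1;
  }

  int main(void){
    sqlite3_open_v2("app.db", &g_db, SQLITE_OPEN_READONLY, NULL);
    g_last = data_version();
    /* ... register the OS directory watch; on events, if
     * changed_by_someone_else(), re-run the big query ... */
    return 0;
  }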

(21) By Stephan Beal (stephan) on 2024-02-09 10:04:22 in reply to 20

When I detect a rename operation performed on the "filename.db-journal" file...

Note that that file will be named filename.db-wal if the db is in WAL mode.

(22) By ralf (ralfbertling) on 2024-02-09 13:12:49 in reply to 20

You can also use the capturing method described in https://sqlite.org/undoredo.html. A simple

  SELECT max(rowid) AS versionId FROM undolog;

would help to detect changes. You might want a different method to capture schema changes, but in most cases a full scan of sqlite_schema should be reasonably fast.

You can adapt that to get the granularity of changes you want to log/monitor.

Cheers, ralf