SQLite User Forum

Writing to multiple databases in same time

(1) By azrele on 2022-12-20 12:28:51

Regarding the database being blocked while it is written to, I would like some advice on database organization.

I'm currently using portable MySQL, and I want to move to SQLite because of data corruption in MySQL.

I've read that a database cannot be used while it is being written to. I'm logging the temperatures of 40 sensors to a database every minute, and I also use the database to store and read small amounts of data.

Would there be any benefit to separating the logging database from the "normal" database? (So the "normal" database would not be blocked while the other is logging data.)

Can I actually write to the logging database and, at the same time, write to (or read from) the "normal" database?

Thank you

(2) By anonymous on 2022-12-20 14:32:00 in reply to 1

Would there be any benefit to separating the logging database from the "normal" database? (So the "normal" database would not be blocked while the other is logging data.)

Yes, this can reduce contention between the two. With the SQLite library, your code can connect to more than one database at a time, in the same connection, and issue updates and queries to each database. You could also use a different connection for each database, within the same program. Or use multiple programs, one for each "writer", each with its own database connection.
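The single-connection, two-database approach described above uses SQLite's `ATTACH DATABASE` statement. A minimal sketch using Python's `sqlite3` module (the file names, table names, and sample values are illustrative, not from the thread):

```python
import sqlite3

# Illustrative file names: one "normal" database and one logging database.
con = sqlite3.connect("main.db")
con.execute("ATTACH DATABASE 'logging.db' AS logdb")

con.execute("CREATE TABLE IF NOT EXISTS settings (key TEXT PRIMARY KEY, value TEXT)")
con.execute("CREATE TABLE IF NOT EXISTS logdb.readings (sensor INTEGER, temp REAL, ts TEXT)")

# One connection can issue updates and queries to both databases;
# statements inside the same transaction commit together.
with con:
    con.execute("INSERT OR REPLACE INTO settings VALUES ('last_run', datetime('now'))")
    con.execute("INSERT INTO logdb.readings VALUES (7, 21.5, datetime('now'))")

logged = con.execute("SELECT COUNT(*) FROM logdb.readings").fetchone()[0]
con.close()
```

Note that attaching both files to one connection still serializes the writes through that connection; it is the separate-connection or separate-program arrangements described above that allow the writes to happen concurrently.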

So, in the absence of other constraints, yes, you can write to two databases at literally the same time. Suppose, for example, you had two entirely separate programs: one writing to the log, and one reading and writing the main database. If the two databases are on different storage devices, the disk writes could occur at literally the same time.

Of course, we don't know what other constraints you might have, nor how literal you need to be about "the same time". Taking those things into account, keeping two separate databases may or may not be practical for you in this case.

(3) By Keith Medcalf (kmedcalf) on 2022-12-20 16:13:42 in reply to 1

Note that you can use WAL journalling, which will permit writes to proceed without blocking readers.

I would also point out that your solution is not really a solution at all -- eventually you have to "write" your data to the "normal" database, and there is no difference between writing "now" and "waiting a while", other than the delay.

(4) By cj (sqlitening) on 2022-12-20 19:44:39 in reply to 1

Consolidate the 40 readings in memory and write them once?

(5) By pcreso on 2022-12-20 20:37:20 in reply to 1

Have one writer process that opens a connection, writes, closes connection. If the transactions are fast enough, there will be no perceptible delay during these writes. Buffering writes so that all 40 sensors get written in a single transaction would give you a few milliseconds per minute where other db access may be locked out.

I haven't tried this, but SQLite allows you to attach an external db. If you are reading from a local db with the logging db attached, and the writes go only to the attached db, that might reduce lock contention, while giving the appearance of writing to your read db.

(6.1) By Keith Medcalf (kmedcalf) on 2022-12-20 21:57:10 edited from 6.0 in reply to 5

Have one writer process that opens a connection, writes, closes connection

Do NOT do this. This is very VERY VERY bad advice.

Open the connection once, when the collector starts.

Every time you have a full set of observations to write do the following:

BEGIN IMMEDIATE;  
INSERT ... -- all the observations, one after another, as fast as your little computer can loop  
COMMIT;  

Go back to collecting up another full set of observations.

When your collector is finished, close the connection.

Insertion of 40 records will take less than one tick (16 ms) out of each minute, leaving 3749 ticks for you to be doing something else. (That is a write duty cycle of 1/3750, which is a very very very very low duty cycle.)
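The long-lived-connection pattern above can be sketched in Python's `sqlite3` module; the file name, table schema, and sample readings are illustrative, and `isolation_level=None` puts the connection in autocommit mode so the transaction can be managed explicitly, mirroring the SQL:

```python
import sqlite3
import time

# Open the connection once, when the collector starts.
con = sqlite3.connect("sensors.db", isolation_level=None)
con.execute("CREATE TABLE IF NOT EXISTS readings (ts REAL, sensor INTEGER, temp REAL)")

def write_batch(con, observations):
    # One IMMEDIATE transaction per full set of observations:
    # take the write lock up front, insert every row, commit once.
    con.execute("BEGIN IMMEDIATE")
    con.executemany("INSERT INTO readings VALUES (?, ?, ?)", observations)
    con.execute("COMMIT")

# A hypothetical minute's worth of readings from 40 sensors.
now = time.time()
write_batch(con, [(now, s, 20.0 + s * 0.1) for s in range(40)])

total = con.execute("SELECT COUNT(*) FROM readings").fetchone()[0]
con.close()
```

In a real collector, `write_batch` would be called once per minute inside the collection loop, and the connection closed only when the collector shuts down.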