How to optimize writing from several processes?
I have a "run", which consists of several processes. A run takes up to several hours to complete and produces up to tens of GB of data, which I eventually want in a single new SQLite database.
Each process only writes, and nothing will be reading until all processes terminate. Writing is done in chunks; there could be up to tens or hundreds of thousands of chunks per process in a run. Chunks are produced by a single thread.
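To illustrate what each process does, here is a minimal sketch of a per-process writer that streams its chunks into its own database inside one long transaction. The table name `chunks`, the `write_chunks` function, and the journal-mode pragma are my assumptions for illustration, not part of the actual schema:

```python
import sqlite3

def write_chunks(db_path, chunks):
    """Hypothetical per-process writer: one DB file, one long transaction."""
    # isolation_level=None puts the connection in autocommit mode,
    # so we control the transaction boundaries explicitly.
    con = sqlite3.connect(db_path, isolation_level=None)
    # Assumption: if a run fails, its partial file is discarded anyway,
    # so the rollback journal can be turned off for speed.
    con.execute("PRAGMA journal_mode = OFF")
    con.execute("CREATE TABLE chunks (id INTEGER, data BLOB)")
    con.execute("BEGIN")
    for i, data in enumerate(chunks):
        con.execute("INSERT INTO chunks VALUES (?, ?)", (i, data))
    con.execute("COMMIT")  # single commit at the end of the run
    con.close()
```

Keeping everything in one transaction avoids paying the fsync cost of a commit per chunk, which matters with hundreds of thousands of chunks.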
In this scenario, do you think it would be faster for each process to write to its own database in a single transaction from start to finish and then join those databases into one, versus all processes writing to a single database with synchronization? The first option definitely looks simpler.
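For the first option, the join step can be done with SQLite's `ATTACH DATABASE`, copying each per-process database into the final one with a single `INSERT ... SELECT`. This is only a sketch under the same assumptions as above (a `chunks` table and a hypothetical `merge_runs` helper); note that `ATTACH` and `DETACH` cannot run inside an open transaction, so each copy is committed before detaching:

```python
import sqlite3

def merge_runs(main_path, part_paths):
    """Hypothetical merge step: copy each per-process DB into main_path."""
    con = sqlite3.connect(main_path, isolation_level=None)  # autocommit
    con.execute("CREATE TABLE IF NOT EXISTS chunks (id INTEGER, data BLOB)")
    for part in part_paths:
        # ATTACH must happen outside a transaction; the filename can be bound.
        con.execute("ATTACH DATABASE ? AS part", (part,))
        con.execute("BEGIN")
        con.execute("INSERT INTO chunks SELECT * FROM part.chunks")
        con.execute("COMMIT")  # commit before DETACH, which also needs autocommit
        con.execute("DETACH DATABASE part")
    con.close()
```

Each source database is read sequentially and appended in bulk, so the merge is mostly limited by disk throughput rather than by locking.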
Are there UNIQUE constraints or FOREIGN KEY constraints? Or are you just blindly appending to the end of each database file?