I am recording data quite rapidly into a SQLite database, which results in heavy file fragmentation on disk (typically ~2500 fragments per file). When I later search through this database it takes a long time, mostly because I am on a hard disk drive (i.e., not an SSD). My goal is to keep fragmentation to a minimum, since this should improve my search performance.
I have tried setting the chunk size to its maximum (~2 GB), but once the data exceeds that amount SQLite starts extending the file in small increments again, which reintroduces fragmentation. I would like the database to grow by a fixed chunk size each time it exceeds the current allocation, e.g. 2 GB -> 4 GB -> 6 GB. I actually thought the chunk size did this automatically, but it does not seem to, or am I missing something? The initial size of the database is indeed 2 GB.
Do I have to close the database connection for the logic to discover that it will overshoot and then allocate a new chunk?
Storage capacity is not an issue for me; I can tolerate some overshoot in the database size (the database file being e.g. 1 GB while the actual data is only 700 MB).
Any feedback is welcome :)