Clear shared memory DB and backup problem
Over the last few days I introduced a cross-thread server cache based on a shared in-memory DB (file:db_file?mode=memory&cache=shared).
Everything works fine, but I use the backup() function to populate this shared memory DB from another sqlite3 instance.
And here is the problem: when I refresh the cache, I run backup() again onto the same existing shared memory DB, and I observe some memory growth (probably a leak, or the existing DB not being cleared). I assumed there should be no memory increase because I am backing up the same data again. I would like to achieve stable memory usage when writing the same data to the shared memory DB via backup() or other functions.
Is there any way to completely release the memory under (file:db_file?mode=memory&cache=shared)? What are the options to remove everything from the existing DB, or even to destroy the current shared memory DB entirely? I tried to clear it using some tricks: https://stackoverflow.com/questions/21547616/how-do-i-completely-clear-a-sqlite3-database-without-deleting-the-database-file I also ran VACUUM after the backup, but I still observe memory growth.
(2) By Gunter Hick (gunter_hick) on 2021-07-19 07:01:38 in reply to 1.1 [link] [source]
"I assume that should be no memory increase because I backup again same data." Your assumption is wrong. What you are seeing is called "fragmentation". This is characteristic of every storage scheme for variable-size objects in spaces of fixed capacity, even for the stuff in your garage. The space used will increase more and more slowly until a balance point is reached, where "deleted" space is being re-used at the same rate as it is being created. This is characterized by the increase in space usage declining over iterations, i.e. asymptotic space usage. A true leak would manifest itself as a constant increase in space usage for each iteration, i.e. linear space usage.
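The re-use part of this behaviour is easy to demonstrate: pages freed by DELETE go on the freelist and are handed back out to later inserts, so page_count stops growing instead of doubling. A small sketch (not from the thread):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(b BLOB)")

def fill(n, size=1000):
    # Insert n blobs of `size` bytes each.
    conn.executemany("INSERT INTO t VALUES (zeroblob(?))", [(size,)] * n)
    conn.commit()

fill(1000)
after_fill = conn.execute("PRAGMA page_count").fetchone()[0]

conn.execute("DELETE FROM t")  # pages move to the freelist...
conn.commit()
after_delete = conn.execute("PRAGMA page_count").fetchone()[0]

fill(1000)                     # ...and are re-used on refill
after_refill = conn.execute("PRAGMA page_count").fetchone()[0]
```

Without VACUUM (or auto_vacuum), after_delete equals after_fill, and after_refill stays close to after_fill because freed pages are recycled rather than newly allocated.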
That's right, but at some point it should stabilize (like you mentioned, a balance point). I don't expect memory usage to match exactly.
Used this way (as described above), I see an almost continuous increase over time. There are some visible memory cleanups (probably memory defragmentation), but it still grows.
The question is how to clean up the shared memory DB before the backup. https://www.sqlite.org/backup.html says that the destination data is overwritten, but I observe a continuous increase in the amount of memory used.
I will also check the VACUUM INTO option as an alternative to backup().
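One straightforward way to empty the destination before running backup() is to drop every user object and then VACUUM. A sketch, assuming nothing beyond stdlib sqlite3 (the helper name clear_all is mine, not an SQLite API):

```python
import sqlite3

def clear_all(conn):
    """Drop all user tables, views, indexes and triggers, then VACUUM."""
    objects = conn.execute(
        "SELECT type, name FROM sqlite_master "
        "WHERE name NOT LIKE 'sqlite_%'"
    ).fetchall()
    for typ, name in objects:
        # IF EXISTS: dropping a table already removed its indexes/triggers.
        conn.execute(f'DROP {typ.upper()} IF EXISTS "{name}"')
    conn.commit()
    conn.execute("VACUUM")  # rebuild the DB and release freed pages
```

Note that for an in-memory database VACUUM rebuilds the pages, but whether the freed heap memory is returned to the OS is up to the allocator, so the process footprint may not shrink immediately.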
I saw a config parameter in the C API (presumably SQLITE_DBCONFIG_RESET_DATABASE) that resets the current database.
Is there a way to do something similar in Python?
The Python interface you want is not available in pysqlite2 (the sqlite3 wrapper) because it aims at a "minimal commonality" and only allows access to features which are available in every other db-api wrapper. pysqlite2 does not support features of sqlite3 that are not common with DB2, Postgres, MySQL, and SQL Server.
If you want something for Python that allows access to SQLite3 as SQLite3, then using a wrapper that is not limited to the "minimal common db-api" interface and exposes the entire functionality of SQLite3 may be perspicacious.
You might want to look at APSW: https://rogerbinns.github.io/apsw/