Database Growing by itself to huge sizes
(1) By Goravani (DasGoravani) on 2020-07-25 17:36:55 [link] [source]
Hello everyone,

I use Macs, one running Catalina and the other Mojave. Use is single-user. I have a few places where I use the SQLite db, and in both places it is growing by itself quite fast and quite large. My actual data is 350 MB, and after a fresh import that is how big the db is. But then it just sits there, nothing is being done, and it grows; now it's up to 8 GB in size. At this point I am not live, so it's not a problem to just replace it with the fresh after-import db, but once I go live this will be a problem.

Does anyone have any ideas about what would make a SQLite db simply grow? I have used the analyzer and am now aware of which table is doing it, and it makes sense: it holds my largest record. It's the record that is my constants file; there is supposedly only one record in that table. It holds many pictures and lists too. It is a huge record, I don't know how big, but let's say it's a megabyte or two, but not 8 GB.

The odd thing too is that when I use VACUUM to try to shrink the db, it grows a little instead of shrinking. So I'm at a loss as to what is going on.

Otherwise I am loving SQLite.

Thank you,
Das Goravani
(2) By Richard Hipp (drh) on 2020-07-25 17:59:32 in reply to 1 [link] [source]
[T]he SQLite db ... it is growing by itself ...
No it isn't. Unlike every other RDBMS (that I know of), SQLite does not have its own threads or processes to look after things. SQLite, by itself, is inert. Action is taken by SQLite only when you invoke one of the SQLite APIs, and when that API call returns, all changes stop. Nothing happens to an SQLite database except while an application thread is actively running an SQLite API; SQLite does not have any background threads to work on the database. So it is simply not possible for an SQLite database file to grow "by itself".
If the file is growing, that is because something, somewhere is invoking SQLite APIs, or other system interfaces, to add content to the file.
(4) By Goravani (DasGoravani) on 2020-07-30 00:45:47 in reply to 2 [link] [source]
Thank you for your reply. I found it helpful.
The odd thing is that, when I checked, there were only 3 records in that table. They are small and cannot account for the many gigabytes they were taking up according to sqlite3_analyzer, the command-line tool. That is what has me baffled. I would take responsibility if I had a repeating loop of inserts going, and if I could at least see those records, but they aren't there; there is only inflated size that VACUUM fails to remove, oddly.

That is why I am baffled. Do you have any insights on that? Thank you very much.
(5) By Larry Brasfield (LarryBrasfield) on 2020-07-30 01:21:41 in reply to 4 [link] [source]
There are only a few possible causes of the growth you report, when categorized thusly:

1. Your program is doing it.
2. Some other program is doing it.
If it is your program, you would do well to run it under a debugger and put breakpoints on the sqlite3_prepare_v2(), sqlite3_step(), and sqlite3_reset() entry points to see from where, and how often, they are being called, which must be a mystery to you now since you believe much less such activity is occurring.

If it is some other program, you will need to investigate the tools on BSD (which Mac OS wraps with a shiny UI) that report open file handles. Obviously, if it is not your program, some other program must be accessing the DB file unbeknownst to you.

You can distinguish between cases 1 and 2 by not running your program for a while and observing whether the mysterious growth ceases. If it ceases, your program was responsible. Otherwise, some "other" [a] program is busy and must be hunted down.
[a. I say "other" because we cannot rule out that it is one or more processes executing your program for reasons you do not yet fathom. ]
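Larry's elimination experiment can be automated with a few lines of script. This is a minimal sketch (the database path is a placeholder, not something from the thread) that polls the file's size while your program is stopped; any growth it reports means some other process is writing:

```python
import os
import time

def watch_growth(path, interval_s=1.0, samples=10):
    """Poll a file's size and report any change between samples."""
    last = os.path.getsize(path)
    for _ in range(samples):
        time.sleep(interval_s)
        now = os.path.getsize(path)
        if now != last:
            print(f"size changed: {last} -> {now} bytes")
        last = now
    return last  # final observed size in bytes

# Example (placeholder path):
# watch_growth("/path/to/your.db", interval_s=5)
```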
(6) By Keith Medcalf (kmedcalf) on 2020-07-30 01:40:20 in reply to 4 [link] [source]
The thing that is odd is that there were once when I checked only 3 records in that table
So from that I take it that there are now no longer only 3 records in that table. The SQLite3 Analyzer tells you how many records are in the table and the size of the payloads of those records.
How do you know how many records are in the table?
How do you know that they are "small"?
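Keith's two questions can be answered from SQL itself. As an illustrative sketch (the table and column names are invented, not from the thread), counting the rows and summing the payload bytes of the suspect column shows both how many records exist and how large they really are:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE constants (id INTEGER PRIMARY KEY, payload BLOB)")
con.execute("INSERT INTO constants (payload) VALUES (?)", (b"x" * 1000,))
con.execute("INSERT INTO constants (payload) VALUES (?)", (b"y" * 5,))

# How many records are there, and how many payload bytes do they hold?
count, total = con.execute(
    "SELECT count(*), sum(length(payload)) FROM constants"
).fetchone()
print(count, total)  # prints: 2 1005
```

If the reported byte total dwarfs what you expect your 3 "small" records to occupy, the extra bytes are in the table, whether or not your application can see them.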
(3) By Simon Slavin (slavin) on 2020-07-25 19:19:25 in reply to 1 [link] [source]
Das, the only way you're going to get to the bottom of this is to dump the data from that table and see what part of it is growing.
You might find this function useful:
(7) By TripeHound on 2020-07-30 09:48:01 in reply to 1 [link] [source]
I don't know if it's relevant to your problem, but there was an email chain from 10 Jan 2020, titled "Unexplained table bloat" (which doesn't appear to be in the forum), with a possibly similar sort of problem: the database size was much larger than the apparent size of the content. That discussion drifted into many side topics, but the gist seems to be that what was being stored was essentially (large) strings/blocks of bytes with embedded NUL characters (0x00 bytes) in them. The apparent size only showed the length up to the "terminating NUL byte", but the database was storing the full collection of bytes.
I don't think the code that caused the problem in that case was shown, but the following (incomplete and untested) snippet should demonstrate a possible cause:
```c
char bigBuffer[10000];
strcpy( bigBuffer, "A short string" );
sqlite3_bind_text( stmt, 1, bigBuffer, sizeof bigBuffer, NULL );
```
As (I think) I understand it, if the fourth parameter of sqlite3_bind_text has a positive value, then that many bytes will be stored in the database (10,000 in this case), even if the block of memory given by the third parameter contains a zero byte. From that page:
If any NUL characters occurs at byte offsets less than the value of the fourth parameter then the resulting string value will contain embedded NULs. The result of expressions involving strings with embedded NULs is undefined.
A select * ... after the above would show only the string "A short string", but it would be consuming 10,000 bytes of space within the database. The SQL length(...) function would, I believe, only show 14, but the C API function sqlite3_column_bytes should show 10,000, according to a reply Richard made in the aforementioned email chain:
There's no way at all, to know the length of a text column with embedded NULLs?
You can find the true length of a string in bytes from C-code using the sqlite3_column_bytes() interface. But I cannot, off-hand, think of a way to do that from SQL.
It might be worth reviewing your code to see if anything somewhat-similar might be occurring.
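The bloat described above is easy to reproduce from any binding that passes the full buffer length. Here is a small sketch in Python (standing in for the C snippet above, not code from the thread) that stores a 10,000-byte buffer holding a short NUL-terminated string; length() sees only the part before the NUL, while casting to blob reveals the true size:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (data TEXT)")

# Simulate binding a whole 10,000-byte buffer that contains a short
# NUL-terminated string: everything after the NUL is stored as well.
text = "A short string"
buf = text + "\x00" * (10_000 - len(text))
con.execute("INSERT INTO t VALUES (?)", (buf,))

apparent, actual = con.execute(
    "SELECT length(data), length(cast(data AS BLOB)) FROM t"
).fetchone()
print(apparent, actual)  # prints: 14 10000
```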
(8) By anonymous on 2020-07-30 10:17:36 in reply to 7 [link] [source]
Cast to blob, then length returns the length of the blob:

```
SQLite version 3.25.1 2018-09-18 20:20:44
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite> create table t( data text );
sqlite> insert into t values( '12345' || X'000102030405060708' );
sqlite> select data, typeof( data ), length( data ), length( cast( data as blob ) ) from t;
12345|text|5|14
```
(9) By Goravani (DasGoravani) on 2020-07-30 21:37:51 in reply to 1 [link] [source]
I have discovered by trial and error that the table is corrupted. I can't insert into it, nor update, nor delete. My edits to the table are done as a delete followed by an insert; the deletes are not happening, and the inserts are inserting ghost records that don't respond to selects and have no data in them, but they do have size. There were 500 records according to the analyzer, but only 3 that could be selected and viewed, that had data in them.

I am working within Omnis, a language most have never heard of. It does the SQL for you; it calls SQLite's APIs for me. I am shielded from the real SQL, so I can't be certain where any fault lies.

Just thought I'd log on and try to wrap this thread up. Everything says my table is corrupted. Logic says that 350 columns, with 40 of them being pictures and another 40 being lists, is too big, so today I am breaking out at least the pictures into their own table, then remaking this table fresh and hoping that without the pictures it can function correctly. If that isn't enough, then I'll do the lists next and break them out into their own table, continuing this breakup of the central record until it and the other new tables all behave correctly.

I really appreciate this forum; it is invaluable. SQLite is awesome, and this forum rocks.

Peace out,
Das Goravani
(10) By Ryan Smith (cuz) on 2020-07-30 22:17:00 in reply to 9 [source]
I'm glad you have figured out WHAT went wrong, but I'm worried that you have not discovered WHY it went wrong.

I can't speak for Omnis and what it might do in the background, shielding you from SQL and all, but what I can say is this: if this was normal SQLite running in a normal system, doing its thing in normal modes, then sure, those column counts and content are a bit big and lofty, but even so it should absolutely work and it should absolutely NOT get corrupted. If it does become corrupt by simple virtue of being lofty, that's a bug.

So I don't think the steps you are intending will solve the problem. They may well be sensible improvements for other reasons, but they should not be blamed for what went wrong.
Have you seen this page? Maybe something in there might help you look in the right place.
Best of luck!