what are the recommended storage limits per sqlite? is it possible to have 200TB in SQLite and still have good performance in queries?
(1) By anonymous on 2022-08-14 17:19:58
what are the recommended storage limits per sqlite? is it possible to have 200TB in SQLite and still have good performance in queries?
So...
- I'm creating a desktop-only, offline solution, so I want to know the advisable size limit for a SQLite database in order to keep query performance good.
- Is there any documentation that talks about this?
(2) By Larry Brasfield (larrybr) on 2022-08-14 18:09:15 in reply to 1
what are the recommended storage limits per sqlite?
You could consider the default limits to be recommended for general purposes. When those are unsuitable, others would be recommended and perhaps possible.
Is there any documentation that talks about this?
See Limits in SQLite.
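As the Limits page explains, the upper bound on a database file's size for a given connection is the page size times the maximum page count, both of which are visible (and adjustable) via pragmas. A minimal sketch of reading them, using Python's standard sqlite3 module:

```python
import sqlite3

# Sketch: the size ceiling for a database is bounded by
# page_size * max_page_count (see "Limits In SQLite").
# Actual values depend on the SQLite build and any PRAGMA overrides.
con = sqlite3.connect(":memory:")
page_size = con.execute("PRAGMA page_size").fetchone()[0]
max_page_count = con.execute("PRAGMA max_page_count").fetchone()[0]
print(f"page_size      = {page_size} bytes")
print(f"max_page_count = {max_page_count} pages")
print(f"upper bound    = {page_size * max_page_count / 2**40:.1f} TiB")
con.close()
```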
(3) By Sunny Saini (SunnySaini_com) on 2022-08-17 14:33:53 in reply to 1
what are the recommended storage limits per sqlite? is it possible to have 200TB in SQLite and still have good performance in queries?
I think the default limit is 64 TB and the maximum is 140 TB.
(4) By Keith Medcalf (kmedcalf) on 2022-08-17 15:06:31 in reply to 1
I want to know what is the advisable size limit of the SQLite database to have a good performance in queries.
The size of the database should not be relevant to a properly designed application. If the size of the database causes performance difficulties, then that is because of bad application design, not because of SQLite3.
(5) By Simon Slavin (slavin) on 2022-08-17 16:36:28 in reply to 4
You will also experience cache-busting, when the amount of data SQLite is asked to access increases past the available cache space.
How badly cache-busting affects you varies dramatically with your hardware setup. I have seen a SQLite-based app slow down by a factor of 5 (back in the days of spinning hard disks).
The user wouldn't have noticed the difference had it happened gradually as database size increased, but because it was a big change that was noticed over 24 hours, they reported it as a bug/fault/failure. Took me a couple of hours to diagnose and confirm the cause of the slow-down.
(6) By Keith Medcalf (kmedcalf) on 2022-08-17 16:59:21 in reply to 5
By cache-busting do you mean thrashing?
Thrashing occurs when the size of the cache (RAM) is so deleteriously insufficient that pages which are discarded from the cache are immediately re-read from the underlying device.
The most common cause of cache-thrashing is that the cache is permitted to exist in V:V (virtual-to-virtual) space when it should be confined to V:R (virtual-to-real) space, so the Operating System ends up paging the cache itself (which is really bad design).
The next most common cause is a badly designed application (that is, one which has been designed to handle a mere fingerful of data, but is loaded with a couple of handfuls of data). This is an extremely common defect.
I have no idea whether or not SQLite3's page cache is properly tagged as non-discardable and non-swappable.
(8) By Simon Slavin (slavin) on 2022-08-19 13:11:33 in reply to 6
Cache-busting is what happens exactly at the point where caching no longer gives a massive speed boost. Before the cache is busted, the system is fast. As the cache is busted, there's a sudden, noticeable drop in speed, which continues until something (restarting the program? deleting old data?) clears the problem.
This can be because thrashing starts. Or the whole database no longer fits in the cache. Or a frequently-used index no longer fits in the cache. Or a page index no longer fits in the cache. Or a journal file no longer fits in the cache. Or the system as a whole involves multiple levels of cache (e.g. L1 L2 L3) and some weird interaction between them starts at a certain point.
(9) By Sunny Saini (SunnySaini_com) on 2022-08-20 04:08:47 in reply to 6
For a small to medium sized database, are all indices kept loaded in RAM throughout the database connection?
(10) By anonymous on 2022-08-20 18:47:58 in reply to 9
That depends on the cache size and the access pattern; an index that is never accessed has no reason to be loaded into the cache.
(13) By Sunny Saini (SunnySaini_com) on 2022-08-25 03:33:30 in reply to 10
OK, thanks.
(11) By Keith Medcalf (kmedcalf) on 2022-08-20 19:14:26 in reply to 9
small to medium sized database is in the eye of the beholder.
I consider a couple of gigabytes "small", all the way up to a couple of terabytes for "medium".
Some people may consider "small" to mean 5 rows of data, and "medium" to mean 10 rows.
"small" is something that will fit entirely in RAM.
for anything else, only the working set needs to be in RAM.
(14) By Sunny Saini (SunnySaini_com) on 2022-08-25 03:35:11 in reply to 11
Ha ha ha, you're right.
(12) By Simon Slavin (slavin) on 2022-08-20 20:06:05 in reply to 9
SQLite does not pre-emptively load anything into RAM.
SQLite maintains cache space. Pages, including indexes, will be cached. You can find out, and change, how much cache space it will use, at maximum, using this:
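The pragma referred to here appears to have been lost from the quoted post; presumably it is `PRAGMA cache_size`, which reports and adjusts the per-connection page-cache ceiling. A sketch using Python's sqlite3 module (the -8000 value is just an illustrative choice):

```python
import sqlite3

# PRAGMA cache_size reports/sets the page-cache ceiling for one
# connection.  A positive value is a number of pages; a negative
# value is a size in kibibytes.
con = sqlite3.connect(":memory:")
print(con.execute("PRAGMA cache_size").fetchone()[0])  # build default, typically -2000 (~2 MiB)
con.execute("PRAGMA cache_size = -8000")               # raise the cap to roughly 8 MiB
print(con.execute("PRAGMA cache_size").fetchone()[0])
con.close()
```

Note the setting applies only to the current connection and resets when the connection closes; a compile-time or `PRAGMA default_cache_size`-style persistent default is a separate matter.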
(15) By Sunny Saini (SunnySaini_com) on 2022-08-25 03:43:53 in reply to 12
Thank you for pointing out this pragma. Now my doubts are cleared.
(7) By Cecil (CecilWesterhof) on 2022-08-18 09:20:00 in reply to 5
The user wouldn't have noticed the difference had it happened gradually as database size increased
So true. That is why I started the thread 'Delete every record except the first of the month'.
This database had been filling up for about six years. Not big, less than 700 MB, but because it is VERY actively used it slowly became more of a problem. By moving old data to a history database, the database shrank to less than 100 MB and the performance improved significantly.
The problem is that every day I run an integrity_check and a foreign_key_check, and this could take several minutes. Much too long. Now it takes less than ten seconds.
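A minimal sketch of such a daily check, assuming a hypothetical database file (an in-memory database is used here so the sketch is self-contained):

```python
import sqlite3
import time

# Sketch of a daily sanity check as described above.  In practice you
# would connect to your real database file instead of ":memory:".
con = sqlite3.connect(":memory:")

t0 = time.perf_counter()
rows = con.execute("PRAGMA integrity_check").fetchall()
assert rows == [("ok",)], rows       # anything else lists problems found
violations = con.execute("PRAGMA foreign_key_check").fetchall()
assert violations == [], violations  # one row per violating child row
print(f"checks passed in {time.perf_counter() - t0:.3f} s")
con.close()
```

Both pragmas scan substantial parts of the file, so their running time grows with database size, which is consistent with the speed-up seen after moving old data out.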