SQLite Forum

Drop caches & SQLite cache management

(1) By ayagmur75 on 2020-07-01 07:15:35 [link] [source]

Hello,
I have been encountering a strange situation and would like some insight from you.

In the first case:
I create a database with tables and indexes, then insert a huge amount of data into the db. Then I execute the drop_caches command and afterwards run a query. ->> it is painfully slow.

In the second case:
I continue using the database, shut it down, and open it again. Then I execute the drop_caches command and afterwards run the same query. ->> it is quite fast.


What is the difference? Could you please explain?

Thank you,
Best Regards,

(2) By Keith Medcalf (kmedcalf) on 2020-07-01 08:32:18 in reply to 1 [link] [source]

What is the drop_caches command?

(3) By ayagmur75 on 2020-07-01 08:37:02 in reply to 2 [link] [source]

Hello,

From the documentation:

drop_caches
Writing to this will cause the kernel to drop clean caches, as well as
reclaimable slab objects like dentries and inodes.  Once dropped, their
memory becomes free.


"echo 3 > /proc/sys/vm/drop_caches" is stated as can be used to clear caches of sqlite.
https://stackoverflow.com/questions/4544208/sqlite-clear-cache


Our aim is not to clear SQLite's caches, but a requirement forces us to use the drop_caches command. I am wondering why the SQLite cache seems to be cleared in the first case but not in the second.

Thank you,
Best Regards,

(4) By Keith Medcalf (kmedcalf) on 2020-07-01 09:03:40 in reply to 3 [link] [source]

That clears the Operating System caches; it has nothing to do with SQLite's page cache. SQLite has its own page cache, which is controlled per connection -- but unless you made it "big", it is quite small by default.

So when a "page" needs to be accessed, first the sqlite3 cache is checked. If the page is already in there, then there is no need to request the page from the Operating System. However if the page is not in sqlite3's page cache, then the read must be passed to the Operation System. If the page is in the Operating System page cache, then there is no need to go all the way to the underlying storage to fetch the page.

So in the first case, pages were required that were not in the SQLite page cache, so the requests went to the Operating System. They were not in the OS cache either, so the OS had to fetch the pages from the underlying disk.

In the second case the pages required were already in sqlite3's page cache, so it did not need to ask the OS to retrieve the page.
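
To see what that connection-level cache looks like on a given build, here is a minimal sketch in the sqlite3 shell; the exact values depend on your build and configuration:

    PRAGMA page_size;    -- bytes per page, typically 4096 on recent builds
    PRAGMA cache_size;   -- suggested cache size; negative values mean KiB,
                         -- positive values mean pages (the default is often
                         -- -2000, i.e. about 2 MB)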

(5) By ayagmur75 on 2020-07-01 10:54:53 in reply to 4 [link] [source]

Ok, I can see it.
Thank you very much.

(6) By ayagmur75 on 2020-07-02 07:07:48 in reply to 4 [link] [source]

Hello,

Can I ask another question related to this context?
Is there any flag or any utility to know when sqlite3's page cache is ready to use? That way, when I call drop_caches, there would be no performance penalty.


Thank you,
Best regards

(7) By Kees Nuyt (knu) on 2020-07-02 13:50:34 in reply to 6 [link] [source]

Is there any flag or any utility to know when sqlite3's page cache is ready to use?

The internal SQLite page cache will automatically be used by the SQLite engine as soon as a database connection is opened.

That way, when I call drop_caches, there would be no performance penalty.

The drop_caches call drops the Operating System's file caches; any buffered page from any file that was in the OS page cache will have to be re-read from the physical filesystem as soon as it is required by a program.

The OS does not distinguish between pages from a database and pages from any other cached file.

So you will definitely experience a performance penalty for the system as a whole.

May I ask why you are so eager to drop the caches?

-- 
Regards,
Kees Nuyt

(8) By ayagmur75 on 2020-07-03 09:02:37 in reply to 7 [source]

Thank you very much for the detailed explanation.

We just want to know whether we can use drop_caches with SQLite without suffering any performance penalties.

Best Regards,

(9) By TripeHound on 2020-07-03 09:39:34 in reply to 8 [link] [source]

As (I think) I understand it, almost certainly not, at least without massively increasing the SQLite cache size.

Many operations in SQLite will need to repeatedly access the same page. If that page is present in SQLite's own cache, it saves a trip to disk. If the page isn't in the internal cache, it asks the OS to supply it. If the page is in the OS's cache, then that also saves a trip to disk. If the page is in neither cache, then it has to be read from the file (comparatively slow).

The default SQLite cache size is 2 MB, so if your database is bigger, you are going to have filled SQLite's cache and be relying on the (presumably much larger) OS cache to avoid costly (repeated) disk access.

Therefore, if you run drop_caches and wipe out the OS cache (at least while it is filled with commonly accessed pages that won't fit in SQLite's cache), a slowdown seems unavoidable: all those commonly accessed pages now have to come from disk again, and things will necessarily be slow (at least until the OS cache has been repopulated).

If you were to increase SQLite's cache size so that it could hold all (or most) of the pages your workload commonly uses, then running drop_caches might have minimal effect -- but I've never played with SQLite's cache size, so I don't know whether there are downsides to doing this.
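
For illustration, a hedged sketch of enlarging the per-connection cache with PRAGMA cache_size; the sizes here are arbitrary examples, not recommendations:

    -- Ask for roughly 256 MiB of page cache on this connection
    -- (negative values are interpreted as KiB).
    PRAGMA cache_size = -262144;

    -- Equivalently in pages: with a 4096-byte page size,
    -- 65536 pages is also about 256 MiB.
    PRAGMA cache_size = 65536;

Note that the setting only lasts for the lifetime of the connection, so it has to be issued each time the database is opened (or baked in at compile time).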

(10) By Wout Mertens (wmertens) on 2020-07-03 10:10:44 in reply to 9 [link] [source]

Reading this makes me wonder: since only the OS has all the information about available memory and the other instances of SQLite, would it not be best to leave page caching as much as possible to the OS?

Meaning: only cache those pages in memory that you access so often that the round trip to the kernel to fetch them becomes a bottleneck.

In fact, I'd guess that that is already what is happening :)

(11.1) By Keith Medcalf (kmedcalf) on 2020-07-03 14:52:05 edited from 11.0 in reply to 9 [link] [source]

If you were to increase SQLite's cache size so that it could hold all (or most) of the pages your workload commonly uses, then running drop_caches might have minimal effect -- but I've never played with SQLite's cache size, so I don't know whether there are downsides to doing this.

Increasing the cache size has exactly the effect one would expect. The bigger the cache, the less the I/O (even if that I/O would be satisfied by the system cache, it is still I/O -- just faster I/O), and therefore the faster things go, up to the point at which the overhead of managing the cache exceeds the benefit of having it.

Think of it this way. The SQLite3 page cache is like having a file folder on your desk with a capacity for some number of pages of paper. Your secretary in the outer office has a whole filing cabinet full of papers (this is the OS page cache), and there is a library over the road that has cabinets and cabinets and cabinets of pages (this is the actual file).

When you need to access a page, you first look for it in the pile on your desk. If you find it, you need look no further. This is very fast (let's call it time X). If you don't find it, you ask your secretary to go fetch it for you. She looks through her filing cabinet for the page and, if she finds it, gives it to you. This is pretty fast too, but takes longer than if you had the page on your desk already (let's call this time Y). Since you can only store a specific number of pages in your office, you might need to give one back to the secretary if you have too many.

So now you have TWO caches. Accessing the first takes time X, where X is less than Y -- right up until you have so many pages in the file folder on your desk that finding one takes longer than the time Y to just ask your secretary. The "number of pages in your file folder" where this occurs varies for a whole raft of reasons. For example, the size of your office -- perhaps your office is too small to hold much in the file folder (RAM constraint). Perhaps you are extremely arthritic and looking through even a small number of pages takes you a long time (CPU challenged). Or perhaps you have none of these constraints, but you use some inefficient indexing mechanism to keep track of your page inventory.

At some point it is faster for you not to cache more pages in your file folder, and just ask your secretary to fetch them for you.

Now your secretary has the same problem. If she has the page you requested, she can give it to you as soon as she finds it in her filing cabinet. Being inherently more efficient, your secretary can handle far more pages than you can before it starts to take her a long time to find one. Sometimes she does not have the page, and she has to put on her hat and coat and galoshes (it is raining out, you see) and go over the road to find the page in the library. This takes a lot longer (let's call this time Z).

However, the number of pages in your secretary's filing cabinets can be quite huge before the time she takes to find a page (Y) exceeds the time of having to go to the library (Z), so your secretary probably keeps a huge local cache of pages. Still, her problem is exactly the same as with the file folder you have on your desk.

So the answer with respect to cache sizes is that bigger is better, right up until bigger is not better. Given unconstrained CPU and RAM I have not seen the sqlite page cache run into management limits -- though I have only sized it up to a few gigabytes. Just remember that the fastest way to do I/O is not to do it.

Tuning of caches, particularly multi-level caches, is a "Deep Dark Art" that has been studied in computer systems since they were invented, and in filing systems of all types since the dawn of writing things down. Of course, the problems of managing a cache of stone tablets are somewhat different from managing bytes in a computer, but mostly only as a matter of scale.
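
If you want to experiment with this on your own workload, a rough sketch using the sqlite3 command-line shell; the table name and sizes are placeholders, and for a fair comparison each run should start from the same cold or warm state (for example, after drop_caches):

    .timer on
    PRAGMA cache_size = -2000;        -- the usual small default, about 2 MB
    SELECT count(*) FROM my_table;    -- placeholder query for your workload

    PRAGMA cache_size = -1048576;     -- about 1 GiB, if RAM permits
    SELECT count(*) FROM my_table;    -- run again and compare the timings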

(12) By Kees Nuyt (knu) on 2020-07-03 15:44:50 in reply to 10 [link] [source]

The SQLite page cache has the advantage of "inside information": it can give priority to certain types of pages, like the root page and B-tree index pages of any table and index ever used in the connection.

Indeed, even if a page is in the OS cache, the code path to retrieve it is much longer than when it is in the SQLite cache.

My rules of thumb:

  • For small databases, I size the cache so the database fits in it completely.
  • For bigger databases, at least the sum of:
    • the size of sqlite_master,
    • twice the sum of the number of index pages of all tables and indexes (can be retrieved with sqlite3_analyzer), and
    • a reasonable estimate of the number of leaf pages modified by "typical transactions" (to prevent avoidable cache spills).

I don't claim this is the optimal method, and if the platform allows it, I happily categorize a 30 MByte database as "small" and give it a 30 MByte cache.
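
As a sketch of the "small database" case, sizing the cache so the whole file fits; the numbers below are illustrative and come from the PRAGMAs themselves, not from this thread:

    PRAGMA page_count;           -- e.g. 7680 pages
    PRAGMA page_size;            -- e.g. 4096 bytes, so about 30 MB in total
    PRAGMA cache_size = 7680;    -- enough pages to hold the entire database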

-- 
Regards,
Kees Nuyt

(13) By Keith Medcalf (kmedcalf) on 2020-07-03 16:39:01 in reply to 12 [link] [source]

Since I build my own, I define the default 32-bit cache size as 256 MB and the default 64-bit cache size as 1 GB.