SQLite Forum

SQLite with NFS + cachefilesd

(1) By anonymous on 2021-08-06 19:18:53

I know, I know! It is not recommended to use SQLite over NFS.

I still want to do it though, and I'm trying to improve performance by enabling NFS caching with cachefilesd.

That works well for reads on regular files located on the NFS share. However, it doesn't seem to improve SQLite query performance at all.
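
In case it matters, here's a minimal sketch, assuming a Linux client, of how one can verify that the share is actually mounted with the fsc option that cachefilesd needs (the mount point below is just an example):

    # Minimal sketch: verify an NFS mount carries the "fsc" option, which
    # FS-Cache (and hence cachefilesd) requires. Assumes Linux; the mount
    # point is hypothetical.
    MOUNT_POINT = "/mnt/nfs"

    with open("/proc/mounts") as mounts:
        for line in mounts:
            fields = line.split()
            if len(fields) >= 4 and fields[1] == MOUNT_POINT:
                options = fields[3].split(",")
                if "fsc" in options:
                    print("fsc is set; FS-Cache can engage on this mount")
                else:
                    print("fsc is missing; cachefilesd will not cache this mount")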

Does anyone have any idea why?

I've read things about SQLite being seek-intensive and about O_DIRECT access, but I know pretty much nothing about all that.

Even better, if anyone has a solution to have SQLite work with cachefilesd, I'm interested to hear it!

(2) By Keith Medcalf (kmedcalf) on 2021-08-06 21:08:45 in reply to 1

Nobody here (I don't think) wrote either NFS or this 'cachefilesd' whatever that is, so no one here will be able to help you other than to tell you what you appear to already know: SQLite3 should not be used over remote (network) filesystems because doing so may (1) be slow and (2) not work correctly -- rather, the filesystem should be local to the machine running SQLite3. (Note that this does not mean that the storage needs to be local, only the filesystem. You can use a local ext3 filesystem over an iSCSI connection where the "block storage device" is located the next quasar over, if you like; you just cannot have the whole ext3 filesystem located the next quasar over and access it via a network file system.)
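
(To make that concrete, here is a minimal sketch of the "local filesystem, remote block device" arrangement; the target name, portal address, and device node are hypothetical, and it assumes root privileges plus the open-iscsi tools:)

    # Minimal sketch: attach a remote iSCSI block device, then put a LOCAL
    # filesystem on it. Only the blocks travel the network; the filesystem
    # logic runs on the machine running SQLite3. All names are placeholders.
    import subprocess

    PORTAL = "192.0.2.10:3260"                # hypothetical iSCSI portal
    TARGET = "iqn.2021-08.example:db-volume"  # hypothetical target IQN
    DEVICE = "/dev/sdb"                       # device node the target appears as

    # Log in to the iSCSI target (open-iscsi).
    subprocess.run(["iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login"], check=True)
    # Create a local ext3 filesystem on it (first time only), then mount it.
    subprocess.run(["mkfs.ext3", DEVICE], check=True)
    subprocess.run(["mount", DEVICE, "/mnt/db"], check=True)
    # SQLite3 now sees a local ext3 filesystem at /mnt/db.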

So the question is not so much why it does not work (you have been told, based on retrospective evidence, that it likely will not), but rather why you persist in pursuing it anyway when you have been told that in all likelihood it will not work.

(3) By Keith Medcalf (kmedcalf) on 2021-08-06 21:09:44 in reply to 2

Note also that TOG has the same limitation.

(5) By anonymous on 2021-08-07 11:55:40 in reply to 2

Thank you for your answers. This is the first time I've come to this forum, and I'm impressed by the speed and quality of the answers.

this 'cachefilesd' whatever that is

From what I understand, it's a filesystem caching daemon. When used with NFS, it can improve read performance.

Note that this does not mean that the storage needs to be local, only the filesystem. You can use a local ext3 filesystem over an iSCSI connection where the "block storage device" is located the next quasar over

Thanks for the insight, I'll have a look at this.

(4) By Larry Brasfield (larrybr) on 2021-08-06 22:43:16 in reply to 1

(Underscoring Keith's point here:)

So, despite the clear, experience-based reasons that separating the running SQLite library from its persistent storage by the interposition of network file-emulation layers is specifically recommended against, you want to use a particular variant of one of those layers which is purported to improve performance in some way?

Does cachefilesd improve the odds that all readers and writers of the database "file" see/create a consistent view of it? Or is it a speed-up fixer-upper?

[ cachefilesd does not seem to speed SQLite queries ] Does anyone have any idea why?

Perhaps cache-improver is taking care to be sure that the underlying actual disk file on a remote machine has not changed underneath the cached copies. There is already a lot of caching and syncing effort associated with network file-emulation layers.

[quoted out-of-order]

I know, I know! It is not recommended to use SQLite over NFS.

I suppose that means you have read and understood SQLite Over a Network, Caveats and Considerations and See How To Corrupt Your Database Files. Do you have any suggestions as to how such dissuasion might be made more effective?

(6) By anonymous on 2021-08-07 11:57:31 in reply to 4

Does cachefilesd improve the odds that all readers and writers of the database "file" see/create a consistent view of it? Or is it a speed-up fixer-upper?

I'm no expert with cachefilesd, but my understanding is that it's a speed-up fixer-upper, as well as a way to alleviate network load. Also, I wasn't careful in my sentence: I meant that SQLite read-query performance is not improved, which is what surprises me. It makes sense that write queries are not improved (cachefilesd apparently invalidates the cached file on writes).

Perhaps cache-improver is taking care to be sure that the underlying actual disk file on a remote machine has not changed underneath the cached copies. There is already a lot of caching and syncing effort associated with network file-emulation layers.

I'm not sure I understood this correctly, but I would still expect at least some improvement on read queries. When reading or copying regular files from the NFS share, cachefilesd does improve performance (a lot). I thought perhaps the difference with SQLite is the way the file is read/fetched/searched.
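
To illustrate what I mean about the access pattern, here's a rough sketch (the file path is hypothetical) comparing a sequential whole-file read, which cachefilesd speeds up for me, with SQLite-style small random page reads:

    # Rough sketch: time one sequential pass over a file vs. many small
    # random page-sized reads, which is closer to how SQLite touches its
    # database file. The path is a placeholder.
    import os, random, time

    PATH = "/mnt/nfs/test.db"  # hypothetical file on the NFS share
    PAGE = 4096
    size = os.path.getsize(PATH)

    t0 = time.time()
    with open(PATH, "rb") as f:
        while f.read(1 << 20):  # sequential 1 MiB chunks
            pass
    print("sequential pass:", time.time() - t0, "s")

    t0 = time.time()
    with open(PATH, "rb") as f:
        for _ in range(1000):   # 1000 random 4 KiB pages
            f.seek(random.randrange(0, max(size - PAGE, 1)))
            f.read(PAGE)
    print("1000 random pages:", time.time() - t0, "s")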

I suppose that means you have read and understood SQLite Over a Network, Caveats and Considerations and See How To Corrupt Your Database Files. Do you have any suggestions as to how such dissuasion might be made more effective?

I have read those, and they're very informative. I also read other posts in multiple places discouraging this approach. However, NFSv4 and v4.1 now apparently support file locking, and that makes it very tempting to go for SQLite over NFS with these versions, at least for me. The only suggestion I'd have would be to add some information about SQLite over NFSv4+, in case you've got some feedback/testing results with that.

I'm trying to have a dead simple highly available system where multiple hosts use the shared SQLite DB over NFS. I realize SQLite wasn't designed for that usage, and this may never be feasible in any safe or performant way. But still I want to try it and see for myself. I also realize that using NFS caching on each host independently would likely pose some coherence issues. I was still curious about it.

The right way would be to use a client/server SQL database like MySQL or PostgreSQL, I guess, but that would add a lot of overhead to keep the system highly available, because then I'd need to have multiple SQL servers in sync. I want no single point of failure. My NFS is actually AWS EFS, which has three redundant endpoints. Funnily enough, AWS mentions the ability to use SQLite over EFS, since EFS is compatible with NFSv4+: https://aws.amazon.com/about-aws/whats-new/2017/03/amazon-elastic-file-system-amazon-efs-now-supports-nfsv4-lock-upgrading-and-downgrading/.

In any case, I appreciate the help here. I'll have a look at shared block storage as an alternative and see if that performs better.

(7) By anonymous on 2021-08-07 14:45:00 in reply to 6

If you are OK with only one host accessing the DB at a time, then block storage attached over the network might be a viable solution. E.g., in an AWS context, once the host instance dies, all you need to do is detach the block storage and attach it to another host. This will lead to some downtime, but the window might be small enough that it won't matter, and your load balancer might even make it mostly transparent to the clients.
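
Something along these lines, as a minimal boto3 sketch (the volume ID, instance ID, and device name are placeholders):

    # Minimal failover sketch: move an EBS volume from a dead host to a
    # standby instance. IDs and device name are placeholders.
    import boto3

    ec2 = boto3.client("ec2")
    VOLUME = "vol-0123456789abcdef0"
    STANDBY = "i-0123456789abcdef0"

    ec2.detach_volume(VolumeId=VOLUME, Force=True)  # old host is dead anyway
    ec2.get_waiter("volume_available").wait(VolumeIds=[VOLUME])
    ec2.attach_volume(VolumeId=VOLUME, InstanceId=STANDBY, Device="/dev/sdf")
    # The standby then mounts the filesystem and takes over.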

(8) By anonymous on 2021-08-07 15:15:21 in reply to 7

Thanks for the input! I really need no downtime at all though.

I was thinking about trying to mount the block storage to multiple hosts simultaneously. AWS EBS has a multi-attach feature, with some limitations. A big one is that standard file systems don't support concurrent access, so I'd need to use GFS2 or something like that. That's starting to be a bit less simple than I'd wish, but I may give it a try.

(10) By RandomCoder on 2021-08-07 18:54:39 in reply to 8

EBS's multi-attach doesn't support I/O fencing, so I'd expect an application that isn't aware of what's happening with such a volume to run into issues. (And given my luck, I'd expect those issues to first happen a month after going live, during my first big burst of business.)

It also doesn't help you if there's an issue with the volume.

If you really need no downtime, is there some reason you're not using RDS or some other service that has failover support, multi-AZ support, and is just generally meant to be used in a scenario like this?

(12) By anonymous on 2021-08-20 14:44:29 in reply to 10

I may be able to use multi-attach with a cluster FS, though that complicates the setup a little bit.

Thanks for the RDS suggestion. The main reason for trying to avoid it is honestly laziness, and cost. I need to share other files across all servers, and since I already have SQLite databases, I was trying to kill two birds with one stone and keep things simple. But RDS is probably a better way to go in principle. For now I also want to avoid its cost.

(11.1) By Scott Robison (casaderobison) on 2021-08-07 20:01:52 edited from 11.0 in reply to 6

From what I understand after searching for cachefilesd, it caches reads only, and I believe it is caching entire files. There is a lot of overhead in opening/reading/closing many small files, so a cache is great at helping speed up those sorts of patterns by storing the file in the local cache. If it is checking for files to be modified, I fear that it will wind up copying the entire database file back down every time it changes (which will be frequently, I suspect).

Alternatively, the problem could be how cachefilesd caches randomly accessed vs. sequentially accessed files.

As for NFSv4+, SQLite can't really document that more than it already has, as the problem with remote file systems hasn't necessarily been the specification of the file system; it has been the implementation. If an implementation deviates from the published interfaces, then the documentation doesn't apply.

I think your desire to avoid having to set up and maintain an actual database server is understandable, but by trying to use SQLite over NFS, you are simply trading one headache (having to maintain the DB server) for another (whether or not NFS, in whatever version and implementation, honors all file system semantics so as to be indistinguishable from a local file system).

The benefit to the database server is "it will either work or it won't" ... if the server is accessible, it will work (assuming the software itself is defect free and configured properly). Otherwise your application will know "I cannot communicate with the DB server, something is wrong, sound the alarm."

Compare that to the alternative of NFS maybe working but maybe not. SQLite asked for data. NFS provided it. SQLite attempted to lock. NFS said it did. SQLite feels it is safe to commit pages to the database. But NFS lied, so now the database is corrupt in some way that might or might not manifest itself at some future point in time.
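
If you want a rough check of whether your particular NFS honors byte-range locks at all, here is a minimal two-host sketch (run it on host A, then on host B against the same file; the path is a placeholder, and passing this smoke test is not proof of correctness):

    # Rough diagnostic: take an exclusive POSIX byte-range lock and hold it.
    # If a second host "succeeds" while the first still holds the lock, the
    # remote filesystem is not enforcing POSIX locks. Unix-only.
    import fcntl, time

    PATH = "/mnt/nfs/locktest"  # placeholder path on the shared mount

    with open(PATH, "w") as f:
        try:
            fcntl.lockf(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
            print("got exclusive lock; holding it for 60 seconds")
            time.sleep(60)
        except OSError:
            print("lock refused: locking appears to be enforced")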

(13) By anonymous on 2021-08-20 14:57:02 in reply to 11.1

Thanks for the insight Scott!

Just as a final (for now) update, I went on trying to use a cluster FS on a multi-attach EBS (i.e., shared block storage).

I tried both GFS2 and OCFS2. I couldn't make it work at all with GFS2. OCFS2 somewhat worked (and was so much simpler to set up), but I still had some cluster issues with it, with servers starting to hang from time to time.

I'm giving up on it for now; I'll get back to it later.

Migrating my SQLite databases to an SQL server on AWS RDS may solve the SQL issue. However, I also need to share non-SQL(ite) files between servers. I could use RDS for the SQL part and EFS for the other files, but even for those it seems performance would be a lot better with shared block storage than with EFS. If I succeed at setting up stable shared block storage + a cluster FS at some point, I'll test putting my SQLite databases there for a while and see how that works out.

Thanks again for all the help and suggestions, folks. I learned stuff, and at least I've got more options now!

(9) By anonymous on 2021-08-07 17:20:37 in reply to 4

I suppose that means you have read and understood SQLite Over a Network, Caveats and Considerations

In addition to the three choices listed there, I can suggest a few more possibilities too (although I would expect that these will be more complicated to implement than the three listed in the linked document):

  1. Use a virtual table to access remote data. However, some features will not work with virtual tables.

  2. Define a VFS which handles the locking properly on the network file system (a cheap variant using a built-in alternative VFS is sketched after this list).

  3. Define a proper database-locking protocol on the server side, implement it, and then implement a VFS which uses that protocol to access the data.
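
As a cheap starting point for option 2, note that the stock Unix builds of SQLite already ship alternative locking VFSes. Here is a minimal sketch (assuming a Unix build of SQLite and Python's sqlite3 module; the database path is hypothetical) of selecting the built-in dotfile-locking VFS through a URI:

    # Minimal sketch: open a database with SQLite's built-in "unix-dotfile"
    # VFS, which uses lock files instead of POSIX advisory locks. The path
    # is a placeholder; requires a Unix build of SQLite.
    import sqlite3

    con = sqlite3.connect("file:/mnt/nfs/app.db?vfs=unix-dotfile", uri=True)
    con.execute("CREATE TABLE IF NOT EXISTS t(x)")
    con.execute("INSERT INTO t VALUES (1)")
    con.commit()
    con.close()

Whether dotfile locking is actually safe on a given share still depends on the filesystem honoring its guarantees, so treat this as an experiment, not a fix.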

(14) By anonymous on 2021-08-20 14:58:48 in reply to 9

Thanks for the ideas! Those may be a bit too complex for my case though.