> latency in fetching those pages before they're part of the cache

I don't see how you can share a buffer cache across iSCSI or similar if you want ACID durability and consistency. How would cache invalidation work when another node writes to the DB? The whole idea of a buffer cache is that multiple processes (the kernel and a single user program, if nothing else) can share information about the contents of disk and memory pages, because they're all running on the same hardware.

Atop that, you have general I/O latency. If your data is now 10 ms away, every read will cost you the equivalent of a pair of rotating-rust head seeks. In a single-reader world, that's 100 TPS best case. Add the invalidation problem above plus multiple readers, and the TPS only drops from there.

No. Stop. Put the DBMS on the remote system and give it a remote access API, as in [my prior response](6e79ff0e1c). Then you get the benefit of the local buffer cache for reads, plus local file system semantics.

(Writes on rotating rust are worse, by the way: SQLite needs 2 rotations per journaled write, so a 7200 RPM disk gives 60 TPS best case. The incentive is to put SQLite DBs on SSDs so the I/O latency drops by a few orders of magnitude, raising write TPS. But this is orthogonal to this subthread's topic, since fast SSD writes 10 ms away are still 10 ms away!)

Better still, use the distributed DB tech I've linked to, which allows tuning so that reads are purely local and only writes must hit the network. This sacrifices consistency, but in a safer manner than trying to do read caching over iSCSI, because it allows for eventual consistency, which you're unlikely to get with remote FS cache invalidation.
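The two best-case figures above fall out of simple arithmetic. A quick sketch (the function names are mine; the numbers are the ones in this comment):

```python
# Back-of-envelope TPS ceilings. All inputs are illustrative
# assumptions from the discussion above, not measurements.

def read_tps(round_trip_ms: float) -> float:
    """Best-case single-reader TPS when every read pays one full round trip."""
    return 1000.0 / round_trip_ms

def journaled_write_tps(rpm: float, rotations_per_write: int = 2) -> float:
    """Best-case write TPS when each journaled write costs whole disk rotations."""
    rotations_per_sec = rpm / 60.0
    return rotations_per_sec / rotations_per_write

print(read_tps(10))               # data 10 ms away -> 100.0 TPS best case
print(journaled_write_tps(7200))  # 7200 RPM, 2 rotations/write -> 60.0 TPS
```

Note that the write ceiling is a property of the disk, not the network, which is why moving the DB file to an SSD helps writes but does nothing about a 10 ms network hop.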