Can I detect if the disk supports WAL mode?
(1) By example-user on 2021-09-28 17:17:36 [link] [source]
I have a process that defaults to reading/writing a SQLite DB in WAL mode (for the concurrent readers and improved write performance).
An issue that I have (prev forum posting) is that the process can run in a Docker container or a VM.
I think VM/Docker volumes do not work with WAL mode for the same reason WAL mode does not work on network disks (my guess: the two OSes do not know about each other's locks).
Am I able to detect if the disk directory provided will work flawlessly with WAL mode?
At the moment WAL mode seems to corrupt the database if given a VM/docker volume.
I want to proactively detect if this is the case and exit my process with an error.
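One cheap first-pass check at startup is to ask SQLite to switch a scratch database in the target directory into WAL mode and see whether the mode actually takes effect. A sketch using Python's stdlib sqlite3 module (the `wal_probe` name is my own); note that a clean "wal" answer does not prove the shared-memory wal-index will behave across a VM boundary, so treat a failure as definitive and a success as merely hopeful:

```python
import os
import sqlite3
import tempfile

def wal_probe(db_path):
    """Ask SQLite to put db_path into WAL mode and report whether the
    journal mode actually in effect afterwards is 'wal'."""
    conn = sqlite3.connect(db_path)
    try:
        # PRAGMA journal_mode returns the mode now in effect as a row.
        (mode,) = conn.execute("PRAGMA journal_mode=WAL").fetchone()
        return mode.lower() == "wal"
    finally:
        conn.close()

# Usage: probe a throwaway database in the directory you plan to use.
with tempfile.TemporaryDirectory() as d:
    ok = wal_probe(os.path.join(d, "probe.db"))
    print("WAL accepted:", ok)
```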
the two OSes do not know about each other's locks
They probably can't do shared memory across the boundary, on purpose, else what's the point of containers? Cross-process memory access is the antithesis of containerization.
In a one-line C call, no.
There are methods for detecting that one is running inside a Docker container, which may suffice, indirectly.
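One such heuristic is to look for Docker's container markers. A sketch (the `/.dockerenv` file and cgroup names are Docker implementation details, not a guaranteed interface, so treat the result as a hint only):

```python
import os

def likely_in_docker():
    """Heuristic Docker detection: Docker creates /.dockerenv inside
    containers, and the control-group paths of PID 1 often mention
    'docker' or 'containerd'. Neither signal is guaranteed."""
    if os.path.exists("/.dockerenv"):
        return True
    try:
        with open("/proc/1/cgroup") as f:
            contents = f.read()
        return "docker" in contents or "containerd" in contents
    except OSError:
        # No /proc (e.g. macOS, Windows): assume not containerized.
        return False

in_docker = likely_in_docker()
print("Probably inside Docker:", in_docker)
```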
Short of that, the best I've got is to run all of the WAL tests. Find the one(s) that flag the problems, dig into the tests, and find out which failure modes are being triggered. That may allow you to distill the test to something small and cheap enough to run on application init.
It'll be brittle, though, detecting only your current failure case.
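As a starting point for such a distilled check, an init-time smoke test might look like the sketch below (names are my own). It only proves that basic WAL writes and reads work through this filesystem within one kernel and one process, which is exactly the brittleness described above; the cross-VM corruption case may still pass it:

```python
import os
import sqlite3
import tempfile

def wal_smoke_test(db_path):
    """Open two connections, switch to WAL, commit a row on one, and
    confirm the other can read it while the writer stays open."""
    writer = sqlite3.connect(db_path)
    try:
        writer.execute("PRAGMA journal_mode=WAL")
        writer.execute("CREATE TABLE IF NOT EXISTS probe(x)")
        writer.execute("INSERT INTO probe VALUES (1)")
        writer.commit()
        reader = sqlite3.connect(db_path)
        try:
            (count,) = reader.execute("SELECT count(*) FROM probe").fetchone()
            return count >= 1
        finally:
            reader.close()
    finally:
        writer.close()

with tempfile.TemporaryDirectory() as d:
    healthy = wal_smoke_test(os.path.join(d, "smoke.db"))
    print("WAL smoke test passed:", healthy)
```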
(3) By example-user on 2021-09-28 18:52:24 in reply to 2 [link] [source]
Thanks, I will take a look at the tests.
Cross-process memory access is the antithesis of containerization.
True, but containers work somewhat like firewalls: you explicitly whitelist files or directories so the guest can read from and write to the host.
The wal-index is implemented using an ordinary file that is mmapped for robustness
So I guess mmapped files only work on the same OS.
A Linux host with a Linux guest seems to work (same Linux kernel instance), but a non-Linux host always results in a corrupt DB (the Linux kernel runs inside a VM).
I guess mmapped files only work on the same OS.
The whole point of mmap(2) is to map a file into the unified virtual memory space of the machine. QEMU provides a private virtual memory space for its VM, so there is no way for mmap to work short of distributed memory schemes, which are super-slow and fiddly besides, which is why no current OS bothers to provide such things. It was a thing back in the days of "cluster computing", but we don't architect systems that way any more, because we learned it sucks.
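To illustrate the single-kernel case the wal-index relies on: two independent mappings of the same file, under the same kernel, see each other's writes immediately, because the kernel backs both with the same physical pages. A minimal sketch using Python's stdlib mmap:

```python
import mmap
import os
import tempfile

# Create a small scratch file to map.
fd, path = tempfile.mkstemp()
try:
    os.write(fd, b"\x00" * 16)
    with open(path, "r+b") as fa, open(path, "r+b") as fb:
        ma = mmap.mmap(fa.fileno(), 16)  # mapping A (shared by default)
        mb = mmap.mmap(fb.fileno(), 16)  # mapping B of the same file
        ma[0:5] = b"hello"               # write through mapping A...
        seen = bytes(mb[0:5])            # ...is immediately visible via B
        ma.close()
        mb.close()
    print(seen)  # b'hello'
finally:
    os.close(fd)
    os.unlink(path)
```

Across a VM boundary there is no shared kernel to provide those common pages, which is why this property breaks down.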
I'd be surprised if you could mmap between two Docker containers even on the same host, with the same OS, shared local file store, matching CPU type, etc. If it is possible, it should only be so through careful configuration, but I couldn't find any docs online about doing so.
If you have two containers today, can I assume you intend to have many containers later? If so, then one of the distributed SQLite variants may be more what you need. It'll solve this problem in a much better fashion. This is the modern answer to cluster computing.
What is a VM/docker?
They are different things.
A VM (Virtual Machine) allows one to run "native code" on emulated or virtualized hardware. When working properly there is NO WAY for the software running on "virtualized hardware" to know that it is not running on "real hardware", and if it is possible to detect the difference, then the VM is not a real VM, it is merely a toy.
Docker, on the other hand, is an APPLICATION PROGRAM. It has nothing whatsoever to do with any "Virtual Machine". As a separate application in its own right, it has its own set of constraints -- one of which is that IT DOES NOT PROVIDE A SUFFICIENTLY WORKING FILESYSTEM FOR CONCURRENT ACCESS.
What you have asked does not really make any sense whatsoever.
There is nothing that can be done to make the "docker" application work properly, other than to re-write the docker application so that it works properly. It is a hopeless abortion for use by little kiddies to solve a problem that does not exist.
If you insist on using "Docker" then you must also live with the limitations of it.
As far as a Virtual Machine goes, if you run an Operating System that works properly on BARE NAKED HARDWARE, then it will also work EXACTLY the same on a PROPERLY WORKING Virtual Machine.
No matter what you do, "Docker" is known to be broken and NOTHING whatsoever will fix it. Running it on a VM, running it on bare metal, even running it on God's Own Computer will not help.
Docker is like a dory (a wee little row boat) that has a hole in the bottom. It matters not whether you put your dory on the Ocean or a Lake, or even in the toilet. Water will still percolate up through the hole and drown you.
(6) By example-user on 2021-09-28 21:04:25 in reply to 5 [source]
A VM/docker are different things, but when you use the Docker CLI you may also be using a VM underneath too.
I agree with you that Docker is just the latest fad, and has lots of issues.
I am using it to easily distribute a CLI:
docker run x will work on Mac, Linux and Windows, each having its own setup of VMs etc. that I do not need to configure. I also want to use it as a basic sandbox.
As I understand it, when the host is Linux and the guest is Linux, the process runs as if it were just a normal Linux-host process, with extra sandbox-like protections applied to it. In the basic cases I tested, WAL mode seems to work fine here (I guess because the mmap memory is managed by the single host Linux kernel).
When the host is Mac OS or Windows, a VM is used like this (which breaks WAL):