- Achtung: Restrictions in Incognito and Guest Browsing Modes
- Key-Value VFS (kvvfs): localStorage and sessionStorage
- Origin-Private FileSystem (OPFS)
This API provides database persistence via localStorage/sessionStorage and, in compatible browsers, the Origin-Private FileSystem.
⚠️Achtung: Restrictions in Incognito and Guest Browsing Modes
Most browsers offer "incognito" and/or "guest" browsing modes which intentionally change or disable certain capabilities of the browser. When running in such a mode, storage capabilities might be adversely affected, e.g. with lower quotas or a complete lack of persistence. The exact limits imposed vary per browser, but it is not entirely unexpected that the persistence features described on this page will, when run in such a "stealth" mode, either be more limited than the documentation suggests, or may even be completely unavailable.
"How do we detect these cases in advance?" is a fair question, but browser makers intentionally make it difficult to detect such modes in order to prevent, e.g., sites from restricting access to incognito-mode users. Any current manner of detecting this in any given browser may quickly become obsolete as the browser makers catch on and change things to make such modes more opaque to visited sites, so we cannot offer any advice on how to circumvent them.
Key-Value VFS (kvvfs): localStorage and sessionStorage
kvvfs is an sqlite3_vfs implementation conceived and created to store a whole sqlite3 database in the localStorage or sessionStorage objects. Those objects are only available in the main UI thread, not Worker threads, so this feature is only available in the main thread. kvvfs stores each page of the database in a separate entry of the storage object, encoding each database page into an ASCII form so that it's JS-friendly.
This VFS supports only a single database per storage object. That is, there can be, at most, one localStorage database and one sessionStorage database.
To use it, pass the VFS name "kvvfs" to any database-opening routine which accepts a VFS name. The file name of the db must be either local or session, or their aliases :localStorage: and :sessionStorage:. Any other names will cause opening of the db to fail. When using URI-style names, use one of:
file:local?vfs=kvvfs
file:session?vfs=kvvfs
When loaded in the main UI thread, the following utility methods are added to the sqlite3.capi namespace:
- sqlite3_js_kvvfs_size(which='') returns an estimate of how many bytes of storage are used by kvvfs.
- sqlite3_js_kvvfs_clear(which='') clears all kvvfs-owned state and returns the number of records it deleted (one record per database page).
In both cases, the argument may be one of ("local", "session", ""). In the first two cases, only localStorage resp. sessionStorage is acted upon, and in the latter case both are acted upon.
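A minimal sketch of using these utilities, assuming the library has been initialized in the main thread (the 4MB threshold here is illustrative, not a library constant):

```javascript
// Sketch: inspecting and reclaiming kvvfs storage. 'local' targets the
// localStorage database; use 'session' or '' for the other cases.
const bytes = sqlite3.capi.sqlite3_js_kvvfs_size('local');
console.log(bytes, 'bytes of localStorage are used by kvvfs');
if( bytes > 4 * 1024 * 1024 ){
  // Nearing the typical ~5MB quota: wipe the db to start fresh.
  const removed = sqlite3.capi.sqlite3_js_kvvfs_clear('local');
  console.log('Removed', removed, 'records (one per database page)');
}
```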
Storage limits are small: typically 5MB, noting that JS uses a two-byte character encoding so the effective storage space is less than that. The encoding of the database into a format JS can make use of is slow and consumes a good deal of space, so these storage options are not recommended for any "serious work." Rather, they were added primarily so that clients who do not have OPFS support can have at least some form of persistence.
When the storage is full, database operations which modify the db will fail. Because of the inherent inefficiency of storing a database in persistent JS objects, which requires encoding them in text form, databases in kvvfs are larger than their on-disk counterparts and considerably slower (computationally speaking, though the perceived performance may be fast enough for many clients).
JsStorageDb: kvvfs the Easy Way
Using the kvvfs is much simpler with the OO1 API. See the JsStorageDb class for details.
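For example, a hypothetical round trip using the oo1 API in the main thread:

```javascript
// Open (or create) the sessionStorage-backed database. Passing
// 'local' instead would target localStorage.
const db = new sqlite3.oo1.JsStorageDb('session');
try{
  db.exec("CREATE TABLE IF NOT EXISTS t(a)");
  db.exec({sql: "INSERT INTO t VALUES(?)", bind: [Date.now()]});
  console.log('Row count =', db.selectValue("SELECT count(*) FROM t"));
}finally{
  db.close();
}
```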
Importing Databases into kvvfs
The most straightforward way to import an existing database into kvvfs is to use VACUUM INTO from a separate database. For example:
let db = new sqlite3.oo1.DB();
db.exec("create table t(a); insert into t values(1),(2),(3)");
db.exec("VACUUM INTO 'file:local?vfs=kvvfs'");
// Will fail if there's already a localStorage kvvfs:
// sqlite3.js:14022 sqlite3_step() rc= 1 SQLITE_ERROR SQL = VACUUM INTO 'file:local?vfs=kvvfs'
// But we can fix that by clearing the storage:
sqlite3.capi.sqlite3_js_kvvfs_clear('local');
// Then:
db.exec("VACUUM INTO 'file:local?vfs=kvvfs'");
db.close();
let ldb = new sqlite3.oo1.JsStorageDb('local');
ldb.selectValues('select a from t order by a'); // ==> [1,2,3]
The Origin-Private FileSystem (OPFS)
The Origin-Private FileSystem, OPFS, is an API providing browser-side persistent storage which, not coincidentally, sqlite3 can use for storing databases1.
OPFS is only available in Worker-thread contexts, not the main UI thread.
As of July 2023 the following browsers are known to have the necessary APIs:
- Chromium-derived browsers released since approximately mid-2022. As of v108 (November 2022) some OPFS APIs changed from asynchronous to synchronous, which affects how client code (i.e. this library) has to deal with them.
- Firefox v111 (March 2023) and later
- Safari 16.4 (March 2023) and later
This library offers multiple solutions for storing databases in OPFS, each with distinct trade-offs.
OPFS via sqlite3_vfs
This support is only available when sqlite3.js is loaded from a Worker thread, whether it's loaded in its own dedicated worker or in a worker together with client code. This OPFS support implements an sqlite3_vfs wrapper entirely in JavaScript.
This feature is activated automatically if the browser appears to have the necessary APIs to support it. It can be tested for in JS code using one of:
if(sqlite3.capi.sqlite3_vfs_find("opfs")){ ... OPFS VFS is available ... }
// Alternately:
if(sqlite3.oo1.OpfsDb){ ... OPFS VFS is available ... }
If it is available, the VFS named "opfs" can be used with any sqlite3 APIs which accept a VFS name, such as sqlite3_vfs_find(), sqlite3_open_v2(), and the sqlite3.oo1.DB constructor, noting that OpfsDb is a convenience subclass of oo1.DB which automatically uses this VFS. For URI-style names, use file:my.db?vfs=opfs.
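For example, the following are equivalent ways of opening the same OPFS-hosted database, assuming the VFS loaded (the file name is a placeholder):

```javascript
// All three forms refer to the same OPFS-hosted file:
if( sqlite3.oo1.OpfsDb ){
  const db = new sqlite3.oo1.OpfsDb('my.db');              // convenience subclass
  // const db2 = new sqlite3.oo1.DB('my.db', 'c', 'opfs'); // explicit VFS name
  // const db3 = new sqlite3.oo1.DB('file:my.db?vfs=opfs');// URI-style name
  db.close();
}
```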
⚠️Achtung: Safari versions < 17
Safari versions less than version 17 are incompatible with the current OPFS VFS implementation because of a bug in the browser's storage handling from sub-workers for which there is no workaround. Both the SharedAccessHandle pool VFS and the WASMFS support offer alternatives which should work with Safari versions 16.4 or higher.
⚠️Achtung: COOP and COEP HTTP Headers
In order to offer some level of transparent concurrent-db-access support, JavaScript's SharedArrayBuffer type is required for the OPFS VFS, and that class is only available if the web server includes the so-called COOP and COEP response headers when delivering scripts:
Cross-Origin-Embedder-Policy: require-corp
Cross-Origin-Opener-Policy: same-origin
Without these headers, the SharedArrayBuffer will not be available, so the OPFS VFS will not load. That class is required in order to coordinate communication between the synchronous and asynchronous parts of the sqlite3_vfs OPFS proxy.
The COEP header may also have a value of credentialless, but whether or not that will work in the context of any given application depends on how it uses other remote assets.
How to emit those headers depends on the underlying web server.
Apache Web Server
For an Apache web server, these headers are set using something like the following:
<Location "/">
Header always append Cross-Origin-Embedder-Policy "require-corp"
Header always append Cross-Origin-Opener-Policy "same-origin"
AddOutputFilterByType DEFLATE application/wasm
</Location>
Althttpd Web Server
For the althttpd web server, start it with the --enable-sab flag ("sab" being short for SharedArrayBuffer).
Cloudflare Pages
See https://developers.cloudflare.com/pages/configuration/headers.
Other Web Servers
If you know how to set the COOP/COEP headers in other web servers, please let us know on the SQLite forum and we will update these docs to include that information.
Directory Parts in DB Names
Unlike most sqlite3_vfs implementations, this one will automatically create any leading directory parts of a database file name if the database is opened with the "create" flag. This divergence from common conventions was made in the interest of...
- Making life easier for web developers.
- Avoiding having to expose OPFS-specific APIs for creating directories to client code. Ideally, client DB-related code should be agnostic of the storage being used.
For example:
const db = new sqlite3.oo1.OpfsDb('/path/to/my.db','c');
will, if needed, create the /path/to directories. Paths without a leading slash are functionally equivalent, starting at the OPFS root.
Importing Databases into OPFS
See the OpfsDb docs.
Concurrency and File Locking
⚠️Forewarning: concurrent access to OPFS-hosted files is a pain point for this VFS. Client applications will not be able to achieve desktop-app-grade concurrency in this environment, but a moderate degree of concurrency across browser tabs and/or Workers is possible.
Background: OPFS offers a handful of synchronous APIs which are required by this API. A file can be opened in asynchronous mode without any sort of locking, but acquiring access to the synchronous APIs requires what OPFS calls a "sync access handle," which exclusively locks the file. So long as an OPFS file is locked, it may not be opened by any other service running in that same HTTP origin. e.g. code running in one browser tab cannot access that file so long as it is locked by another tab running from that same origin.
In essence, that means that no two database handles can have the same OPFS-hosted database open at one time. If the same page is opened in two tabs, the second tab will hit a locking error the moment it tries to open the same OPFS-hosted database!
To help alleviate contention between instances of sqlite3 running in multiple tabs or worker threads, sqlite3 only acquires a write-mode handle when the database API requires a lock. If it cannot acquire a lock, it will wait a brief period and try again, repeating this several times before giving up. Failure to obtain a lock will bubble up through to client-level code in the form of a generic I/O error. The OPFS API does not have a way to unambiguously distinguish locking-related errors from other I/O errors, so an I/O error is what the client will see.
The reliable limit of connections on a given OPFS-hosted database is unknown and depends largely on the environment and how the database is used. Basic tests suggest that 3 connections can, if they limit their work to small chunks, reliably cooperate. The chance of locking failure increases quickly for each connection beyond that; however, that value is highly environment-dependent. For example, Chrome versions 116 and higher have been seen to run 5 connections reliably on a relatively fast machine (3GHz+).
Here are some hints to help improve OPFS concurrency, in particular in the face of a client opening an application via multiple tabs.
- Do not open a database until it's known to be required by the application.
- Never use two database handles to the same db file within the same thread, as that can hypothetically lead to certain deadlock situations.
- Always perform work in "small" chunks, where "small" is measured in milliseconds of I/O, as opposed to data sizes. The less time the database spends on I/O, the lower the chance of contention.
- Do not hold transactions open for any significant length of time. An open transaction necessarily locks the OPFS file.
- See the "opfs-unlock-asap" flag, described in the next section.
Work to improve concurrency support on OPFS-hosted databases is an ongoing process. As OPFS's locking support evolves, and more fine-grained control of locking becomes widely available, the sqlite3 VFS will take advantage of it to help improve concurrency.
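One way to apply the "small chunks" advice is to wrap each unit of database work in a retry helper. The following is a hypothetical sketch, not a library API; it assumes the resultCode property which the oo1 API's exceptions carry:

```javascript
// Hypothetical helper: re-attempt a small unit of db work when OPFS
// locking contention surfaces as a generic I/O error.
async function withLockRetry(work, attempts = 5, delayMs = 100){
  for( let i = 1; ; ++i ){
    try{
      return work();
    }catch(e){
      // Locking failures are reported as SQLITE_IOERR (see above).
      if( i === attempts || e.resultCode !== sqlite3.capi.SQLITE_IOERR ){
        throw e;
      }
      await new Promise(resolve => setTimeout(resolve, delayMs));
    }
  }
}
// Usage: await withLockRetry(() => db.exec("INSERT INTO t VALUES(1)"));
```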
Misc. OPFS VFS Features
Unlock-ASAP Mode
Sometimes sqlite3 will call into a VFS without explicitly acquiring a lock on the storage in advance (on journal files, for example). When it does so, an operation which requires a sync access handle necessarily acquires the lock itself2 and holds on to it until the VFS is idle for some unspecified brief period (less than half a second), at which point all implicitly-acquired locks are relinquished.
Such locks are internally called implicit locks or auto-locks. They are locks which are not required by the sqlite3 VFS but are required by OPFS. Normally, an operation which acquires the lock does not automatically relinquish the lock at the end of the operation because doing so imposes a tremendous performance hit (up to 400% run time increase in I/O-heavy benchmarks). However, concurrency can be improved considerably by telling the VFS to immediately relinquish such locks at the earliest opportunity. This is colloquially known as "unlock-asap" mode, and it's disabled by default because of the performance penalty but can be enabled by clients on a per-db-connection basis using URI-style database names:
file:db.file?vfs=opfs&opfs-unlock-asap=1
That string can be supplied to any APIs which permit URI-style filenames. For example:
new sqlite3.oo1.OpfsDb('file:db.file?opfs-unlock-asap=1');
That flag should only be used if an application specifically has concurrency-related issues. If all else fails, opfs-unlock-asap=1 might help, but whether or not it genuinely will depends largely on how the database is being used. e.g. long-running transactions will lock it regardless of whether the opfs-unlock-asap option is used.
OPFS Delete Before Open
As of 3.46, the "opfs" VFS supports a URI flag of delete-before-open=1 to tell the VFS to unconditionally delete the db file before attempting to open it. This can be used, for example, to ensure a clean state or to recover from a corrupted database without having to reach into OPFS-specific JS APIs to eliminate it.
Failure to delete the file is ignored but may lead to downstream errors. Deletion can fail if, e.g., another tab has the handle open.
It goes without saying that deleting a file out from under another instance results in Undefined Behavior.
Example:
const db = new sqlite3.oo1.OpfsDb("file:foo.db?delete-before-open=1");
OPFS SyncAccessHandle Pool VFS
(Added in SQLite v3.43.)
The "opfs-sahpool" ("sah" = SyncAccessHandle) VFS is an OPFS-based sqlite3_vfs implementation which takes a much different strategy than the "opfs" VFS. The differences can be summarized as...
Advantages:
- Should work on all major browsers released since March 2023.
- Does not require COOP/COEP HTTP headers (and associated restrictions).
- Highest OPFS performance of the options described in this documentation.
Disadvantages:
- Does not support multiple simultaneous connections.
- No filesystem transparency, i.e. names clients assign their databases are different than the names this VFS stores them under and the VFS manages a sort of virtual filesystem.
Note that the "opfs" VFS and this VFS may be used in the same application, but they will refer to different OPFS-level files even if they use the same client-level file names, because this VFS does not map client-provided names directly to OPFS files, instead maintaining those names in its own metadata.
Peculiarities of this VFS:
Paths given to it must be absolute. Relative paths will not be properly recognized. This is arguably a bug but correcting it requires some hoop-jumping in routines which have no business doing such tricks.
It is possible to install multiple instances under different names, each sandboxed from one another inside their own private directory. This feature exists primarily as a way for disparate applications within a given HTTP origin to use this VFS without introducing locking issues between them.
This VFS is based on the work of Roy Hashimoto, with his blessing, specifically:
- github:/rhashimoto/wa-sqlite/discussions/67
- github:/rhashimoto/wa-sqlite/blob/master/src/examples/AccessHandlePoolVFS.js
Installation
Because this VFS does not support concurrency, initializing it twice, e.g. via two tabs to the same origin, will fail for the second and subsequent instances. So that an origin can use this VFS on only select pages, without those pages being locked out by other pages which may be open, the VFS must be explicitly enabled via application-level code. At its simplest, that looks like:
await sqlite3.installOpfsSAHPoolVfs();
or:
sqlite3.installOpfsSAHPoolVfs().then((poolUtil)=>{
// poolUtil contains utilities for managing the pool, described below.
// VFS "opfs-sahpool" is now available, and poolUtil.OpfsSAHPoolDb
// is a subclass of sqlite3.oo1.DB to simplify usage with
// the oo1 API.
}).catch(...);
Installation will fail if:
- The VFS is already active in another browsing context in the same HTTP origin with the same directory name (see the directory option below).
- The proper OPFS APIs are not detected. Note that they are only available in Worker threads, not the main UI thread.
installOpfsSAHPoolVfs() accepts a configuration object with any of the following options:
- clearOnInit: (default=false) If truthy, contents and filename mapping are removed from each SAH as it is acquired during initialization of the VFS, leaving the VFS's storage in a pristine state. Use this only for databases which need not survive a page reload.
- initialCapacity: (default=6) Specifies the default capacity of the VFS, i.e. the number of files it may contain. This should not be set unduly high because the VFS has to open (and keep open) a file for each entry in the pool. This setting only has an effect when the pool is initially empty; it has no effect if a pool already exists. Note that this number needs to be at least twice the number of expected database files (to account for journal files) and may need to be even higher than three times the number of databases plus one, depending on the value of the TEMP_STORE pragma and how the databases are used. The library cannot guesstimate an ideal value - it must be provided by the client.
- directory: (default="."+options.name) Specifies the OPFS directory name in which to store metadata for the VFS. Only one instance of this VFS can use the same directory concurrently. Using a different directory name for each application enables different instances of this VFS to co-exist in the same HTTP origin, but their data are invisible to each other. Changing this name will effectively orphan any databases stored under previous names. This option may contain multiple path elements, e.g. "/foo/bar/baz", and they are created automatically. In practice there should be no driving need to change this. ACHTUNG: all files in this directory are assumed to be managed by the VFS. Do not place other files in this directory, as they may be deleted or otherwise modified by the VFS.
- name: (default="opfs-sahpool") Sets the name to register this VFS under. Normally this should not be changed, but it is possible to register this VFS under multiple names so long as each has its own separate directory to work from. The storage for each is invisible to all others. The name must be a string compatible with sqlite3_vfs_register() and friends and suitable for use in URI-style database file names. ACHTUNG: if a custom name is provided, a custom directory must also be provided if any other instance is registered with the default directory. No two instances may use the same directory. If no directory is explicitly provided then a directory name is synthesized from the name option.
- forceReinitIfPreviouslyFailed: (default=false, available as of 3.47) An opt-in workaround for a particular browser quirk which can cause initialization of this VFS to fail on its first attempt but to succeed if a second attempt is tried a short time later (see discussion in this ticket). Forewarning: this flag should truly never be used, as an environment which requires this workaround is inherently suspect for purposes of this VFS, but it is provided for developers who wish to throw caution to the wind and hope for the best. What it does: when this VFS initializes, the result is cached (whether success or failure) so that future calls to installOpfsSAHPoolVfs() can return consistent results, as described in the next section. This flag will override a cached failure result and instead attempt to initialize the VFS a second time. In environments affected by the motivating ticket, that second attempt may well work. The library does not automatically retry in such cases for reasons explained in that ticket's discussion thread.
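Putting several of those options together (the values here are illustrative, not recommendations):

```javascript
// Install the pool VFS with explicit options. initialCapacity is
// sized here for roughly four databases plus journal/temp files.
const poolUtil = await sqlite3.installOpfsSAHPoolVfs({
  name: 'opfs-sahpool',  // the default registration name
  initialCapacity: 12,
  clearOnInit: false     // keep data across page reloads
});
const db = new poolUtil.OpfsSAHPoolDb('/app.db');
```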
The resolved value of the Promise returned by installOpfsSAHPoolVfs(), abstractly referred to as PoolUtil below (though the object has no inherent name and a reference, if needed, must be held and named by the client), is described in the next section.
The asynchronous (necessarily so) installation routine will, on success, register the VFS with the name specified in the options object. The VFS's presence can be detected using sqlite3_vfs_find(options.name). PoolUtil.OpfsSAHPoolDb is a sqlite3.oo1.DB subclass which uses this VFS:
const db = new PoolUtil.OpfsSAHPoolDb('/filename');
// ^^^ note that all paths for this VFS must be absolute!
Pool Management
installOpfsSAHPoolVfs() returns a Promise which resolves, on success, to a utility object which can be used to perform basic administration of the file pool (colloquially known as PoolUtil). Calling installOpfsSAHPoolVfs() more than once will resolve to the same value on second and subsequent invocations so long as the same name option is used in each invocation. Calling it with a different name will return different Promises resolving to different objects with different VFS registrations.
Its API includes, in alphabetical order...
- [async] number addCapacity(n): Adds n entries to the current pool. This change is persistent across sessions, so it should not be called automatically at each app startup (but see reserveMinimumCapacity()). Its returned Promise resolves to the new capacity. Because this operation is necessarily asynchronous, the C-level VFS API cannot call this on its own as needed.
- OpfsSAHPoolDb: A subclass of sqlite3.oo1.DB configured to use this VFS.
- byteArray exportFile(name): Synchronously reads the contents of the given file into a Uint8Array and returns it. This will throw if the given name is not currently in active use or on I/O error. Note that the given name is not visible directly in OPFS (or, if it is, it's not from this VFS). The reason is that this VFS manages name-to-file mappings in a roundabout way in order to maintain its list of SAHs.
- number getCapacity(): Returns the number of files currently contained in the SAH pool. The default capacity is only large enough for one or two databases and their associated temp files.
- number getFileCount(): Returns the number of files from the pool currently allocated to VFS slots. This is not the same as the number of files being "opened".
- array getFileNames(): Returns an array of the names of the files currently allocated to VFS slots. This list is the same length as getFileCount().
- int importDb(name, byteArray): Imports the contents of an SQLite database, provided as a byte array or ArrayBuffer, under the given name, overwriting any existing content. Results are undefined if it is used on an opened db. Throws if the pool has no available file slots, on I/O error, or if the input does not appear to be a database; in the latter case, only a cursory examination is made. Note that this routine is only for importing database files, not arbitrary files, the reason being that this VFS will automatically clean up any non-database files, so importing them is pointless. On a write error, the handle is removed from the pool and made available for re-use. On success, the number of bytes written is returned. If the imported database is in WAL mode then it is forced out of WAL mode for historical reasons which no longer (as of 3.47) strictly apply but which must be retained for backwards compatibility (see WAL mode for more details about WAL).
- [async] int importDb(name, function): (Added in version 3.44.) If passed a function for its second argument, its behaviour changes to async and it imports its data in chunks fed to it by the given callback function. It calls the callback (which may be async) repeatedly, expecting either a Uint8Array or ArrayBuffer (to denote new input) or undefined (to denote EOF). For so long as the callback continues to return non-undefined, it will append incoming data to the given VFS-hosted database file. When called this way, the resolved value of the returned Promise is the number of bytes written to the target file. If the imported database is in WAL mode then it is forced out of WAL mode, as described for the byte-array variant above.
- [async] number reduceCapacity(n): Removes up to n entries from the pool, with the caveat that it can only remove currently-unused entries. It returns a Promise which resolves to the number of entries actually removed.
- [async] boolean removeVfs(): Unregisters the VFS and removes its directory from OPFS (which means all client content is destroyed). After calling this, the VFS may no longer be used and there is currently no way to re-add it aside from reloading the current JavaScript context. Results are undefined if a database is currently in use with this VFS. The returned Promise resolves to true if it performed the removal and false if the VFS was not installed. If the VFS has a multi-level directory, e.g. "/foo/bar/baz", only the bottom-most directory is removed because this VFS cannot know for certain whether the higher-level directories contain data which should be removed.
- [async] number reserveMinimumCapacity(min): If the current capacity is less than min, the capacity is increased to min, else this returns with no side effects. The resulting Promise resolves to the new capacity.
- boolean unlink(filename): If a virtual file exists with the given name, disassociates it from the pool and returns true, else returns false without side effects. Results are undefined if the file is currently in active use. Recall that names need to use absolute paths (starting with a slash).
- string vfsName: The SQLite VFS name under which this pool's VFS is registered.
- [async] void wipeFiles(): Clears all client-defined state of all SAHs and makes all of them available for re-use by the pool. Results are undefined if any such handles are currently in use by an sqlite3 db instance.
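As an illustrative sketch, a backup/restore round trip using several of these members, assuming poolUtil came from installOpfsSAHPoolVfs() and the target db is not currently open:

```javascript
// Ensure at least one free slot, snapshot a db, then restore it.
if( poolUtil.getCapacity() - poolUtil.getFileCount() < 1 ){
  await poolUtil.addCapacity(2);
}
const snapshot = poolUtil.exportFile('/app.db'); // Uint8Array copy
// ... some time later, roll the db back to the snapshot:
const written = poolUtil.importDb('/app.db', snapshot);
console.log('Restored', written, 'bytes as /app.db');
```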
Concurrency
The opfs-sahpool VFS cannot offer any concurrency support at the library level because it pre-allocates all potential SAHs, which immediately locks those files. However, Roy Hashimoto has written articles exploring client-level solutions to that problem:
- https://github.com/rhashimoto/wa-sqlite/discussions/81
- https://github.com/rhashimoto/wa-sqlite/discussions/84
There is yet some work to be done in this VFS to assist in implementing client-side concurrency, e.g. the ability to stop and restart the VFS.
WAL Mode with OPFS
As of version 3.47, it is possible to activate WAL mode for OPFS-hosted databases with the following caveats:
- Because the WASM build does not have shared memory APIs, activating WAL requires that a client specifically activate exclusive-locking mode for a db handle immediately after opening it, before doing anything else with it, as documented in the WAL docs and summarized here:
  pragma locking_mode=exclusive
- WAL mode does not provide any concurrency benefits in this environment. On the contrary, the requirement for exclusive locking eliminates all concurrency support from the "opfs" VFS.
- The "opfs-sahpool" VFS may, depending on the host environment, gain a slight performance boost when using WAL. Testing has not revealed an equivalent benefit for the "opfs" VFS.
OPFS over WASMFS
((Re-)Added in 3.43.)
An alternative to the OPFS VFS or SharedAccessHandle Pool VFS is Emscripten's WASMFS, which supports OPFS in a much different manner than either of those VFSes do. It exposes OPFS as a "mount point" (directory) on the virtual filesystem which Emscripten exposes to client code, and all files stored under that directory are housed in OPFS.
Achtung: the WASMFS is not enabled in the canonical builds of sqlite3.wasm because it requires a separate, less portable WASM build and provides little benefit over the other OPFS-using options. Building it requires a local checkout of the sqlite3 source tree and a "recent version" of the Emscripten SDK on a Linux system3:
$ ./configure --enable-all
$ cd ext/wasm
$ make wasmfs
The resulting deliverables are jswasm/sqlite3-wasmfs.* and (optionally) jswasm/sqlite3-opfs-async-proxy.js, though the latter is only required if clients should also have access to the OPFS VFS. Aside from the WASMFS support, it is used identically to the non-WASMFS deliverables.
Pros:
- Simple usage: WASMFS hides the concrete storage mechanisms behind Emscripten's filesystem API and enables multiple storage back-ends to be "mounted" in a single virtual filesystem. Calling sqlite3.capi.sqlite3_wasmfs_opfs_dir() initializes (if necessary) the WASMFS+OPFS combination, and the path it returns is the top-most path of the OPFS "mount point." All files stored under that directory are stored in the current origin's OPFS storage. If that function returns an empty string then the WASMFS+OPFS combination is not available on the client.
- It should work on Safari versions 16.x, unlike the OPFS VFS.
- Performant. This filesystem generally outperforms the OPFS VFS, at a functional cost described below...
Cons:
- WASMFS is a third-party project and is, as of 2024-07, labeled as a "work in progress" and subject to change at any time. We cannot guarantee long-term API/usage stability of WASMFS.
- No concurrency support for databases. Each handle to a WASMFS-hosted OPFS file holds an exclusive lock for as long as the file is opened. If a client web page is opened in two tabs at once, the second tab will fail to open the database. It's conceivable that high-level locking could be introduced at the application level by, e.g., opening and closing databases as needed and using WebLocks to coordinate concurrency.
- The COOP/COEP headers are required for the whole library, as opposed to just for OPFS support. That is, this version cannot be deployed at all unless those headers are emitted.
- It is not as portable as the canonical build. e.g. it does not work on ARM64 platforms (some mobile devices) the last time it was tested on such.
- This build is only available as an ES6 module, not as "vanilla" JS, and only works when loaded from a Worker. WASMFS+OPFS does not work from the main thread.
Despite the drawbacks, the WASMFS build may be a viable option for certain types of client applications.
Brief example:
const dirName = sqlite3.capi.sqlite3_wasmfs_opfs_dir();
if( dirName ) {
/* WASMFS OPFS is active ... All files stored under
the path named by dirName are housed in OPFS. */
}
else {
/* WASMFS OPFS is not available */
}
Though the mount point name is intended to stay stable, client code should avoid hard-coding it anywhere and always use sqlite3_wasmfs_opfs_dir() to fetch it. It will not change in the lifetime of a single session, so it may be saved for reuse, but it should not be hard-coded. On builds which don't have WASMFS+OPFS support, that function always returns an empty string.
Maintaining OPFS-hosted Files
For SQLite's purposes, the OPFS API is an internal implementation detail which is not exposed directly to client code. This means, for example, that the SQLite API cannot be used to traverse the list of files stored in OPFS, nor to delete a database file4. Though it may initially seem feasible to provide a virtual table which provides a list of OPFS-hosted files, and the ability to delete them, that cannot work because the relevant OPFS APIs are all asynchronous, making them impossible to use together with the C-level SQLite APIs.
As of this writing, the following possibilities are known for managing such files:
- Using the OPFS API from the browser's developer console. This is not for the faint-of-heart but will do in a pinch.
- The OPFS Explorer extension for Chromium-based browsers provides an interactive tree of OPFS-hosted files for a given HTTP origin.
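For the developer-console route, the standard OPFS API suffices. For example (these are standard browser APIs, not part of this library; the names are placeholders):

```javascript
// List all top-level OPFS entries for the current origin:
const root = await navigator.storage.getDirectory();
for await (const [name, handle] of root){
  console.log(handle.kind, name);
}
// Delete a db file, or a whole VFS directory recursively:
// await root.removeEntry('my.db');
// await root.removeEntry('.opfs-sahpool', {recursive: true});
```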
OPFS Storage Limits
OPFS storage limits are generous but differ per environment. See the MDN docs on the topic for full details. This article by Patrick Brosset also covers the topic in considerable detail.
Note that, as with other storage backends, the SQLite API has no notion of what the limits are. If the limits are exceeded, SQLite will respond with generic I/O errors.
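The standard StorageManager API can be used to check usage against the quota before SQLite starts reporting such errors:

```javascript
// Both values are estimates and browser-specific (standard web API,
// not part of this library):
const {usage, quota} = await navigator.storage.estimate();
console.log(`Using ~${usage} of ~${quota} bytes for this origin`);
```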
Sidebar: Cross-thread Communication via OPFS
sqlite3 over OPFS opens up a possibility in JS which does not otherwise readily exist: communication between arbitrary threads.
Native JavaScript options for communicating across threads are limited to, e.g., postMessage(), SharedArrayBuffer, and (to a very limited extent) Atomics. localStorage, sessionStorage, and the long-defunct WebSQL are main-thread only. Presumably WebSQL was not permitted in Workers for the very reason that it would open up a communication channel, as well as locking contention, between arbitrary threads.
If a client loads the sqlite3 module from multiple threads, they can, via the initial OPFS VFS, communicate freely via a database. Mostly. Such usage would introduce file-locking contention between the threads. So long as each thread uses only very brief transactions, the automatic lock-retry mechanism will transparently account for the locking, but as soon as one thread holds a transaction open for any significant amount of time, or too many threads are contending for access, locking-related exceptions would result and would be translated as I/O errors to the C API.
Whether the capability of communicating across threads via a database is a feature or a bug is left for the client to decide.
Sidebar: Mysterious Disappearance of Databases
Users sometimes report that their OPFS databases randomly disappear. There is no code in this project which will delete a database without an explicit request from a client, but databases nonetheless sometimes disappear for environment-specific reasons outside of this library's control, including, but not limited to:
- Virus scanners
- "Computer Cleaner" software
- Browser-level storage permissions
- A browser-internal decision to clean up on its own
See this forum post for some discussion about these.
- ^ The whole JS/WASM effort of the sqlite project initially stemmed from interest in getting it working with OPFS.
- ^ The alternative being to fail the operation.
- ^ As of this writing (2023-07-13), EMSDK 3.1.42 is known to work and it is unknown whether any older versions work. WASMFS support evolved considerably in the year prior to that release, sometimes in incompatible ways.
- ^ Noting that the C APIs also do not expose such platform-specific APIs.