Persistent Storage Options

This page covers the storage options directly supported by the sqlite3 JS API.

Web apps have traditionally not had many options for storing persistent state on a client. First came cookies, then localStorage and sessionStorage, plus IndexedDB and the aborted WebSQL effort. WebSQL was dropped before it was standardized, and IndexedDB has a poor reputation for its awkward interface and cross-browser incompatibilities.

As of 2021, browsers have started implementing filesystem-style storage, namely Google's Origin-Private FileSystem.

Key-Value VFS (kvvfs): localStorage and sessionStorage

kvvfs is an sqlite3_vfs implementation created to store a whole sqlite3 database in the localStorage or sessionStorage object. Those objects are only available in the main UI thread, not in Worker threads, so this feature is limited to the main thread. kvvfs stores each page of the database as a separate entry in the storage object, encoding each page into an ASCII form so that it is JS-friendly.

This VFS supports only a single database per storage object. That is, there can be, at most, one localStorage database and one sessionStorage database.

To use it, pass the VFS name "kvvfs" to any database-opening routine which accepts a VFS name. The file name of the db must be either local or session, or their aliases :localStorage: and :sessionStorage:. Any other name will cause opening of the db to fail. When using URI-style names, use file:local?vfs=kvvfs or file:session?vfs=kvvfs.
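For example, the following sketch (helper names are illustrative; it assumes an already-initialized sqlite3 module in the main UI thread) opens the localStorage- and sessionStorage-backed databases via the OO1 API:

```javascript
// Sketch: open the single kvvfs-backed database for each storage
// object. Any API which accepts a VFS name or URI-style filename
// works the same way.
const openLocalDb = (sqlite3) =>
  new sqlite3.oo1.DB('file:local?vfs=kvvfs', 'c');   // backed by localStorage
const openSessionDb = (sqlite3) =>
  new sqlite3.oo1.DB('file:session?vfs=kvvfs', 'c'); // backed by sessionStorage
```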

When loaded in the main UI thread, the utility methods sqlite3_js_kvvfs_clear() and sqlite3_js_kvvfs_size() are added to the sqlite3.capi namespace.

In both cases, the argument may be one of "local", "session", or "". In the first two cases, only the named storage object is acted upon; in the latter case, both are.
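As a sketch of how those utility methods might be used (assuming the sqlite3_js_kvvfs_size() and sqlite3_js_kvvfs_clear() names, each of which takes the storage designator described above):

```javascript
// Sketch: report the storage consumed by kvvfs, then wipe it.
// The empty string acts on both localStorage and sessionStorage.
function kvvfsUsage(sqlite3, which = ''){
  // Returns an approximate byte count of kvvfs-held storage.
  return sqlite3.capi.sqlite3_js_kvvfs_size(which);
}
function kvvfsClear(sqlite3, which = ''){
  // Returns the number of removed storage entries.
  return sqlite3.capi.sqlite3_js_kvvfs_clear(which);
}
```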

Storage limits are small, but localStorage tends to be the larger of the two. The encoding of the database into a format JS can make use of is slow and consumes a good deal of space, so these storage options are not recommended for any "serious work." Rather, they were added primarily so that clients who do not have OPFS support can have at least some form of persistence.

When the storage is full, database operations which modify the db will fail. Because of the inherent inefficiency of storing a database in persistent JS objects, which requires encoding them in text form, databases in kvvfs are larger than their on-disk counterparts and considerably slower (computationally speaking, though the perceived performance may be fast enough for many clients).

JsStorageDb: kvvfs the Easy Way

Using the kvvfs is much simpler with the OO1 API. See the JsStorageDb class for details.
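As a minimal sketch of that class (the helper name is illustrative; it assumes an initialized sqlite3 module in the main thread):

```javascript
// Sketch: JsStorageDb('local') or JsStorageDb('session') opens the
// corresponding kvvfs-backed database without needing to spell out
// the VFS name or file-name aliases by hand.
function openStorageDb(sqlite3, which = 'local'){
  return new sqlite3.oo1.JsStorageDb(which);
}
```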

The Origin-Private FileSystem (OPFS)

The Origin-Private FileSystem, OPFS, is an API providing browser-side persistent storage which, not coincidentally, sqlite3 can use for storing databases [1].

As of late 2022, only bleeding-edge versions of Chromium-derived browsers have the necessary APIs. Support from other browsers is expected to follow as soon as their developers see all of 2023's web apps targeting Chrome specifically for the persistent sqlite3 support 😉.

This support comes in two flavors: an sqlite3_vfs implemented entirely in JavaScript on top of OPFS, and Emscripten's WASMFS with its OPFS backend. Both are covered in the sections below.

Both approaches have their benefits and drawbacks and have essentially identical performance. They are not mutually exclusive, but it is illegal to open any given database via both approaches at the same time because OPFS exclusively locks files when they are opened.

Achtung: COOP and COEP HTTP Headers

JavaScript's SharedArrayBuffer type is required for both variants of OPFS support, and that class is only available if the web server includes the so-called COOP and COEP response headers when delivering scripts:

Cross-Origin-Embedder-Policy: require-corp
Cross-Origin-Opener-Policy: same-origin

Without these headers, the SharedArrayBuffer will not be available, so OPFS support will not load. That class is required in order to coordinate communication between the synchronous and asynchronous parts of the sqlite3_vfs OPFS proxy and its counterpart in the WASMFS OPFS internals.

OPFS via sqlite3_vfs

This support is only available when sqlite3.js is loaded from a Worker thread, whether it's loaded in its own dedicated worker or in a worker together with client code. This OPFS wrapper implements an sqlite3_vfs wrapper entirely in JavaScript.

This feature is activated automatically if the browser appears to have the necessary APIs to support it. It can be tested for in JS code using one of:

if(sqlite3.capi.sqlite3_vfs_find("opfs")){ ... OPFS VFS is available ... }
// Alternately:
if(sqlite3.oo1.OpfsDb){ ... OPFS VFS is available ... }

If it is available, the VFS named "opfs" can be used with any sqlite3 APIs which accept a VFS name, such as sqlite3_vfs_find(), sqlite3_db_open_v2(), and the sqlite3.oo1.DB constructor, noting that OpfsDb is a convenience subclass of oo1.DB which automatically uses this VFS. For URI-style names, use file:my.db?vfs=opfs.
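For illustration, these hypothetical helpers show three equivalent ways of opening the same OPFS-hosted database (callable only where the "opfs" VFS is installed, i.e. from a Worker):

```javascript
// Sketch: three ways of opening an OPFS-backed database via the OO1
// API. All three refer to the same underlying OPFS file.
const viaSubclass = (sqlite3) => new sqlite3.oo1.OpfsDb('my.db', 'c');
const viaVfsArg   = (sqlite3) => new sqlite3.oo1.DB('my.db', 'c', 'opfs');
const viaUri      = (sqlite3) => new sqlite3.oo1.DB('file:my.db?vfs=opfs', 'c');
```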

Directory Parts in DB Names

Unlike most sqlite3_vfs implementations, this one will automatically create any leading directory parts of a database file name if the database is opened with the "create" flag. This divergence from common conventions was made in the interest of...

For example:

const db = new sqlite3.oo1.OpfsDb('/path/to/my.db','c');

will, if needed, create the /path/to directories.

Concurrency and File Locking

Forewarning: concurrent access to OPFS-hosted files is a pain point for this VFS. Client applications will not be able to achieve desktop-app-grade concurrency in this environment, but a moderate degree of concurrency across browser tabs and/or Workers is possible.

Background: OPFS offers a handful of synchronous APIs which are required by this API. A file can be opened in asynchronous mode without any sort of locking, but acquiring access to the synchronous APIs requires what OPFS calls a "sync access handle," which exclusively locks the file. So long as an OPFS file is locked, it may not be opened by any other service running in that same HTTP origin. e.g. code running in one browser tab cannot access that file so long as it is locked by another tab running from that same origin.

In essence, that means that no two database handles can have the same OPFS-hosted database open at one time. If the same page is opened in two tabs, the second tab will hit a locking error as soon as it tries to open the same OPFS-hosted database.

To help alleviate contention between instances of sqlite3 running in multiple tabs or worker threads, sqlite3 only acquires a write-mode handle when the database API requires a lock. If it cannot acquire a lock, it will wait a brief period and try again, repeating this several times before giving up. Failure to obtain a lock will bubble up through to client-level code in the form of a generic I/O error. The OPFS API does not have a way to unambiguously distinguish locking-related errors from other I/O errors, so an I/O error is what the client will see.
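A client-side mitigation can be sketched as follows. The helper is hypothetical and sits above the VFS's own retry loop, acting only as a second line of defense against brief contention:

```javascript
// Sketch: because lock failures surface as generic I/O errors, a
// client-side retry loop can paper over brief contention between
// tabs or Workers.
const sleep = (ms) => new Promise((r) => setTimeout(r, ms));

async function execWithRetry(db, sql, attempts = 5, delayMs = 100){
  for(let i = 0; ; ++i){
    try {
      return db.exec(sql);
    } catch(e) {
      if(i >= attempts - 1) throw e; // out of retries: propagate
      await sleep(delayMs);          // brief back-off, then try again
    }
  }
}
```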

Sidebar: the (former) inability to unambiguously distinguish locking errors has been lifted in late-2022 Chromium versions; see the related discussion for details. It is as yet unclear whether the ability to unambiguously distinguish such cases can be used to improve concurrency support or whether it will simply allow us to fail faster when acquisition of a sync access handle fails for any reason other than the new NoModificationAllowedError.

The reliable limit of connections to a given OPFS-hosted database is unknown and depends largely on how the database is used. Basic tests suggest that three connections can, if they limit their work to small chunks, reliably cooperate. The chance of locking failure increases drastically for each connection beyond that.

Some hints for improving OPFS concurrency, in particular in the face of a client opening an application via multiple tabs: keep transactions as brief as possible, close database handles which are no longer needed, and consider the unlock-asap mode described below.

Work to improve concurrency support on OPFS-hosted databases is an ongoing process. As OPFS's locking support evolves, and more fine-grained control of locking becomes widely available, the sqlite3 VFS will take advantage of it to help improve concurrency.

Unlock-ASAP Mode

Sometimes sqlite3 will call into a VFS without explicitly acquiring a lock on the storage in advance (on journal files, for example). When it does so, an operation which requires a sync access handle necessarily acquires the lock itself [2] and holds on to it until the VFS has been idle for some unspecified brief period (less than half a second), at which point all implicitly-acquired locks are relinquished.

Such locks are internally called implicit locks or auto-locks. They are locks which are not required by the sqlite3 VFS but are required by OPFS. Normally, an operation which acquires such a lock does not automatically relinquish it at the end of the operation because doing so imposes a tremendous performance hit (up to a 400% run-time increase in I/O-heavy benchmarks). However, concurrency can be improved considerably by telling the VFS to relinquish such locks at the earliest opportunity. This is colloquially known as "unlock-asap" mode. It is disabled by default because of the performance penalty but can be enabled by clients on a per-db-connection basis using URI-style database names:

file:db.file?opfs-unlock-asap=1
That string can be supplied to any APIs which permit URI-style filenames. Similarly:

new sqlite3.oo1.OpfsDb('file:db.file?opfs-unlock-asap=1');

That flag should only be used if an application specifically has concurrency-related issues. If all else fails, opfs-unlock-asap=1 might help.

OPFS via WASMFS


Achtung: WASMFS is not enabled in the canonical builds of sqlite3.wasm because it requires a separate, less portable WASM build and provides little, if any, benefit over the OPFS VFS. Building it requires a local checkout of the sqlite3 source tree and the Emscripten SDK on a Linux system.

Emscripten provides a WASM-native implementation of OPFS support. Unlike the sqlite3_vfs OPFS wrapper (see the previous section), this one works in the main UI thread and does not require that databases saved in OPFS use an OPFS-aware sqlite3_vfs. Instead, it "mounts" a specific directory in the virtual filesystem, under which all files are hosted in OPFS storage. Any C code which uses the file I/O APIs defined by C89 or POSIX is transparently proxied by the Emscripten-provided virtual filesystem and so may also make use of the OPFS storage.


OPFS support via WASMFS has to be explicitly activated from JS. That is achieved via a call to:

const dirName = sqlite3.capi.sqlite3_wasmfs_opfs_dir()
if( dirName ) { ... WASMFS OPFS is active ... }
else { ... WASMFS OPFS is not available ... }

The first time that is called it will, if needed, try to activate the WASMFS OPFS backend and "mount" it on a specific virtual directory. If it succeeds, it will return that directory name, else it will return an empty string (a falsy value). Calls after the first one simply return the directory name returned by the first call, without trying to (re)initialize the OPFS support.
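A sketch of that pattern (the helper name is hypothetical):

```javascript
// Sketch: open a persistent database under the WASMFS OPFS mount
// point, fetching the directory name at runtime rather than
// hard-coding it.
function openWasmfsDb(sqlite3, fileName = 'my.db'){
  const dir = sqlite3.capi.sqlite3_wasmfs_opfs_dir();
  if(!dir) throw new Error('WASMFS OPFS is not available in this build.');
  return new sqlite3.oo1.DB(dir + '/' + fileName, 'c');
}
```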

Though the mount point name is intended to stay stable, client code should avoid hard-coding it and should always use this function to fetch it [3]. It will not change during the lifetime of a single session, so it may be cached for reuse, but it should not be hard-coded.

On builds which don't have WASMFS support, that function always returns an empty string.

As in conventional Unix filesystems, the mount point has a name and adds a path component to the filenames stored under it, but all files stored under that path are rooted in the OPFS root. That is, from the point of view of OPFS, the mount point's name is not part of the names of any files stored there via that mount point.

File Locking

OPFS's file locking support is, as of this writing (late 2022), a moving target. WASMFS currently locks OPFS-hosted files when write operations start on a file and releases the lock when the last handle to the file is closed. (Note that WASMFS has its own higher-level abstraction of file handles so that it can support multiple filesystems.)

As OPFS's locking support evolves, and more fine-grained control of locking is made available, WASMFS will adapt to use it. Those changes will happen outside of this project and will become "automatic" for WASMFS clients.

Sidebar: Cross-thread Communication via OPFS

sqlite3 over OPFS opens up a possibility in JS which does not otherwise readily exist: communication between arbitrary threads.

There are no mechanisms in JS to share state between two threads except postMessage(), SharedArrayBuffer, and (to a very limited extent) Atomics. localStorage, sessionStorage, and the long-defunct (but still extant) WebSQL are main-thread only. Presumably WebSQL was not permitted in Workers for the very reason that it would open up a communication channel between arbitrary threads.

However, if a client loads the sqlite3 module from multiple threads, they can communicate freely via an OPFS-hosted database. Mostly. Such usage would introduce file-locking contention between the threads. So long as each thread uses only very brief transactions, the automatic lock-retry mechanism will transparently account for the locking, but as soon as one thread holds a transaction open for any significant amount of time, or too many threads are contending for access, locking-related exceptions would result and would be translated as I/O errors to the C API.

Whether the capability of communicating across threads via a database is a feature or a bug is left for the client to decide.

  [1] The whole JS/WASM effort of the sqlite project initially stemmed from interest in getting it working with OPFS.
  [2] The alternative being to fail the operation.
  [3] The mount point is usually named /opfs, but that can be changed when bootstrapping the library.