Persistent Storage Options

This API provides database persistence via localStorage/sessionStorage and, in compatible browsers, the Origin-Private FileSystem.

⚠️Achtung: Incognito and Guest Browsing Modes

Most browsers offer "incognito" and/or "guest" browsing modes which intentionally change or disable certain capabilities of the browser. When running in such a mode, storage may be adversely affected, e.g. with lower quotas or a complete lack of persistence. The exact limits vary per browser, so the persistence features described on this page may, in such "stealth" modes, be more limited than this documentation suggests or even completely unavailable.

"How do we detect these cases in advance?" is a fair question, but browser makers intentionally make it difficult to detect such modes in order to prevent, e.g., sites from restricting access to incognito-mode users. Any current manner of detecting this in any given browser may quickly become obsolete as the browser makers catch on and change things to make such modes more opaque to visited sites, so we cannot offer any advice on how to circumvent them.

Key-Value VFS (kvvfs): localStorage and sessionStorage

kvvfs is an sqlite3_vfs implementation conceived and created to store a whole sqlite3 database in the localStorage or sessionStorage objects. Because those objects are only available in the main UI thread, not Worker threads, this feature is main-thread-only. kvvfs stores each page of the database in a separate entry of the storage object, encoding each page into an ASCII form so that it's JS-friendly.

This VFS supports only a single database per storage object. That is, there can be, at most, one localStorage database and one sessionStorage database.

To use it, pass the VFS name "kvvfs" to any database-opening routine which accepts a VFS name. The file name of the db must be either local or session, or their aliases :localStorage: and :sessionStorage:. Any other name will cause opening of the db to fail. When using URI-style names, use one of file:local?vfs=kvvfs or file:session?vfs=kvvfs.
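As a sketch, that name handling can be wrapped in a small client-side helper. The kvvfsUri function below is hypothetical, not part of the sqlite3 API; only the name/alias rules and the vfs=kvvfs URI form come from the documentation above:

```javascript
// Hypothetical helper: validates a kvvfs database name and builds the
// corresponding URI-style name. Only "local" and "session" (or their
// aliases ":localStorage:" and ":sessionStorage:") are legal.
function kvvfsUri(which) {
  const aliases = {':localStorage:': 'local', ':sessionStorage:': 'session'};
  const name = aliases[which] || which;
  if (name !== 'local' && name !== 'session') {
    throw new Error('kvvfs db name must be "local" or "session"');
  }
  return 'file:' + name + '?vfs=kvvfs';
}
```

Such a name can then be passed to any API which accepts URI-style filenames, e.g. the sqlite3.oo1.DB constructor.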

When loaded in the main UI thread, the following utility methods are added to the sqlite3.capi namespace:

In both cases, the argument may be one of "local", "session", or "". The first two act only on localStorage or sessionStorage, respectively, and the empty string acts on both.

Storage limits are small: typically 5MB, noting that JS uses a two-byte character encoding so the effective storage space is less than that. The encoding of the database into a format JS can make use of is slow and consumes a good deal of space, so these storage options are not recommended for any "serious work." Rather, they were added primarily so that clients who do not have OPFS support can have at least some form of persistence.
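For illustration, the two-bytes-per-character cost can be estimated with a helper like the one below. It is a sketch, not part of the library, and it assumes that kvvfs entries share a "kvvfs-" key prefix, which is an implementation detail; prefer the library's own utility methods where available. A Map stands in for the browser's storage object here:

```javascript
// Rough usage estimator (hypothetical): JS strings are UTF-16, so each
// stored character of key and value costs two bytes. `store` may be
// localStorage in a browser; any Map or plain object works as well.
// The "kvvfs-" key prefix is an assumed implementation detail.
function kvvfsBytesUsed(store, prefix = 'kvvfs-') {
  let bytes = 0;
  const entries = store instanceof Map ? store.entries()
                                       : Object.entries(store);
  for (const [k, v] of entries) {
    if (k.startsWith(prefix)) bytes += 2 * (k.length + String(v).length);
  }
  return bytes;
}
```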

When the storage is full, database operations which modify the db will fail. Because of the inherent inefficiency of storing a database in persistent JS objects, which requires encoding them in text form, databases in kvvfs are larger than their on-disk counterparts and considerably slower (computationally speaking, though the perceived performance may be fast enough for many clients).

JsStorageDb: kvvfs the Easy Way

Using the kvvfs is much simpler with the OO1 API. See the JsStorageDb class for details.

Importing Databases into kvvfs

The most straightforward way to import an existing database into the kvvfs is to use VACUUM INTO from a separate database. For example:

let db = new sqlite3.oo1.DB();
db.exec("create table t(a); insert into t values(1),(2),(3)");
// VACUUM INTO fails with SQLITE_ERROR if the target database already
// exists, e.g.:
//   sqlite3_step() rc= 1 SQLITE_ERROR SQL = VACUUM INTO 'file:local?vfs=kvvfs'
// so clear any existing localStorage database first:
sqlite3.capi.sqlite3_js_kvvfs_clear('local');
db.exec("VACUUM INTO 'file:local?vfs=kvvfs'");
db.close();
let ldb = new sqlite3.oo1.JsStorageDb('local');
ldb.selectValues('select a from t order by a'); // ==> [1,2,3]

The Origin-Private FileSystem (OPFS)

The Origin-Private FileSystem, OPFS, is an API providing browser-side persistent storage which, not coincidentally, sqlite3 can use for storing databases [1].

OPFS is only available in Worker-thread contexts, not the main UI thread.

As of July 2023 the following browsers are known to have the necessary APIs:

This library offers multiple solutions for storing databases in OPFS, each with distinct trade-offs.

OPFS via sqlite3_vfs

This support is only available when sqlite3.js is loaded from a Worker thread, whether it's loaded in its own dedicated Worker or in a Worker shared with client code. This OPFS support implements an sqlite3_vfs entirely in JavaScript.

This feature is activated automatically if the browser appears to have the necessary APIs to support it. It can be tested for in JS code using one of:

if(sqlite3.capi.sqlite3_vfs_find("opfs")){ ... OPFS VFS is available ... }
// Alternately:
if(sqlite3.oo1.OpfsDb){ ... OPFS VFS is available ... }

If it is available, the VFS named "opfs" can be used with any sqlite3 APIs which accept a VFS name, such as sqlite3_vfs_find(), sqlite3_open_v2(), and the sqlite3.oo1.DB constructor, noting that OpfsDb is a convenience subclass of oo1.DB which automatically uses this VFS. For URI-style names, use file:my.db?vfs=opfs.
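That detection can be folded into a small fallback sketch. The pickVfs helper is hypothetical; only the sqlite3_vfs_find() call is the library's API, and recall that "opfs" is Worker-only while "kvvfs" is main-thread-only, so at most one of them will normally be registered in a given thread:

```javascript
// Hypothetical helper: prefer the "opfs" VFS when it was registered
// (Worker threads), else fall back to "kvvfs" (main thread), else
// report that no persistent VFS is available.
function pickVfs(sqlite3) {
  if (sqlite3.capi.sqlite3_vfs_find('opfs')) return 'opfs';
  if (sqlite3.capi.sqlite3_vfs_find('kvvfs')) return 'kvvfs';
  return undefined; // no persistence available in this context
}
```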

⚠️Achtung: Safari versions < 17

Safari versions prior to 17 are incompatible with the current OPFS VFS implementation because of a bug in the browser's storage handling from sub-workers for which there is no workaround. Both the SyncAccessHandle pool VFS and the WASMFS support offer alternatives which should work with Safari versions 16.4 or higher.

⚠️Achtung: COOP and COEP HTTP Headers

In order to offer some level of transparent concurrent-db-access support, the OPFS VFS requires JavaScript's SharedArrayBuffer type, and that class is only available if the web server includes the so-called COOP and COEP response headers when delivering scripts:

Cross-Origin-Embedder-Policy: require-corp
Cross-Origin-Opener-Policy: same-origin

Without these headers, the SharedArrayBuffer will not be available, so the OPFS VFS will not load. That class is required in order to coordinate communication between the synchronous and asynchronous parts of the sqlite3_vfs OPFS proxy.

The COEP header may also have a value of credentialless, but whether or not that will work in the context of any given application depends on how it uses other remote assets.
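Whether a given pair of header values satisfies those requirements can be expressed as a small check. The helper below is hypothetical (e.g. for a deployment self-test); in the browser itself, the standard globalThis.crossOriginIsolated flag reports whether the headers actually took effect:

```javascript
// Hypothetical check: do these response headers enable SharedArrayBuffer?
// COOP must be same-origin; COEP may be require-corp or credentialless.
function enablesSharedArrayBuffer(headers) {
  const get = (k) =>
    String(headers[k] ?? headers[k.toLowerCase()] ?? '').trim();
  const coop = get('Cross-Origin-Opener-Policy');
  const coep = get('Cross-Origin-Embedder-Policy');
  return coop === 'same-origin' &&
         (coep === 'require-corp' || coep === 'credentialless');
}
```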

How to emit those headers depends on the underlying web server.

Apache Web Server

For an Apache web server, these headers are set using something like the following:

<Location "/">
  Header always append Cross-Origin-Embedder-Policy "require-corp"
  Header always append Cross-Origin-Opener-Policy "same-origin"
  AddOutputFilterByType DEFLATE application/wasm
</Location>

Althttpd Web Server

For the althttpd web server, start it with the --enable-sab flag ("sab" being short for SharedArrayBuffer).

Cloudflare Pages

See https://developers.cloudflare.com/pages/configuration/headers.

Other Web Servers

If you know how to set the COOP/COEP headers in other web servers, please let us know on the SQLite forum and we will update these docs to include that information.

Directory Parts in DB Names

Unlike most sqlite3_vfs implementations, this one will automatically create any leading directory parts of a database file name if the database is opened with the "create" flag. This divergence from common conventions was made in the interest of...

For example:

const db = new sqlite3.oo1.OpfsDb('/path/to/my.db','c');

will, if needed, create the /path/to directories. Paths without a leading slash are functionally equivalent, starting at the OPFS root.

Importing Databases into OPFS

See the OpfsDb docs.

Concurrency and File Locking

Forewarning: concurrent access to OPFS-hosted files is a pain point for this VFS. Client applications will not be able to achieve desktop-app-grade concurrency in this environment, but a moderate degree of concurrency across browser tabs and/or Workers is possible.

Background: OPFS offers a handful of synchronous APIs which are required by this API. A file can be opened in asynchronous mode without any sort of locking, but acquiring access to the synchronous APIs requires what OPFS calls a "sync access handle," which exclusively locks the file. So long as an OPFS file is locked, it may not be opened by any other service running in the same HTTP origin; e.g., code running in one browser tab cannot access that file so long as it is locked by another tab running from that same origin.

In essence, that means that no two database handles can have the same OPFS-hosted database open at one time. If the same page is opened in two tabs, the second tab will hit a locking error the moment it tries to open the same OPFS-hosted database!

To help alleviate contention between instances of sqlite3 running in multiple tabs or worker threads, sqlite3 only acquires a write-mode handle when the database API requires a lock. If it cannot acquire a lock, it will wait a brief period and try again, repeating this several times before giving up. Failure to obtain a lock will bubble up to client-level code in the form of a generic I/O error. The OPFS API does not have a way to unambiguously distinguish locking-related errors from other I/O errors, so an I/O error is what the client will see.
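Clients which want an additional safety margin can layer their own retry on top of that internal one, e.g. re-running a whole statement when the generic I/O error surfaces. The following is a hypothetical sketch, not part of the library, and its error-matching heuristic is an assumption since the error is deliberately generic:

```javascript
// Hypothetical client-level retry wrapper. The "opfs" VFS already
// retries internally; this simply retries the whole operation with a
// growing delay when an error that looks like an I/O failure escapes.
async function withLockRetry(op, {retries = 5, delayMs = 100} = {}) {
  let lastErr;
  for (let i = 0; i <= retries; ++i) {
    try {
      return await op();
    } catch (e) {
      lastErr = e;
      // Locking contention surfaces as a generic I/O error; matching
      // on the message text is an assumption, not a stable API.
      if (!/SQLITE_IOERR|I\/O/i.test(String(e && e.message))) throw e;
      if (i < retries) {
        await new Promise((r) => setTimeout(r, delayMs * (i + 1)));
      }
    }
  }
  throw lastErr;
}
```

Such a wrapper is only useful around short operations; a long-running transaction inside op would simply hold the lock and make contention worse.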

The reliable limit of connections on a given OPFS-hosted database is unknown and depends largely on the environment and on how the database is used. Basic tests suggest that 3 connections can, if they limit their work to small chunks, reliably cooperate. The chance of locking failure increases quickly for each connection beyond that, but the value is highly environment-dependent. For example, Chrome versions 116 and higher have been seen to run 5 connections reliably on a relatively fast (3GHz+) machine.

Here are some hints to help improve OPFS concurrency, in particular in the face of a client opening an application via multiple tabs.

Work to improve concurrency support on OPFS-hosted databases is an ongoing process. As OPFS's locking support evolves, and more fine-grained control of locking becomes widely available, the sqlite3 VFS will take advantage of it to help improve concurrency.

Unlock-ASAP Mode

Sometimes sqlite3 will call into a VFS without explicitly acquiring a lock on the storage in advance (on journal files, for example). When it does so, an operation which requires a sync access handle necessarily acquires the lock itself [2] and holds on to it until the VFS is idle for some unspecified brief period (less than half a second), at which point all implicitly-acquired locks are relinquished.

Such locks are internally called implicit locks or auto-locks. They are locks which are not required by the sqlite3 VFS but are required by OPFS. Normally, an operation which acquires such a lock does not automatically relinquish it at the end of the operation because doing so imposes a tremendous performance hit (up to a 400% run-time increase in I/O-heavy benchmarks). However, concurrency can be improved considerably by telling the VFS to relinquish such locks at the earliest opportunity. This is colloquially known as "unlock-asap" mode. It is disabled by default because of the performance penalty, but clients can enable it on a per-db-connection basis using URI-style database names:

file:db.file?vfs=opfs&opfs-unlock-asap=1

That string can be supplied to any APIs which permit URI-style filenames. For example:

new sqlite3.oo1.OpfsDb('file:db.file?opfs-unlock-asap=1');

That flag should only be used if an application specifically has concurrency-related issues. If all else fails, opfs-unlock-asap=1 might help, but whether it genuinely will depends largely on how the database is used; e.g., long-running transactions hold the lock regardless of whether the opfs-unlock-asap option is used.

OPFS SyncAccessHandle Pool VFS

(Added in SQLite v3.43.)

The "opfs-sahpool" ("sah" = SyncAccessHandle) VFS is an OPFS-based sqlite3_vfs implementation which takes a very different approach from the "opfs" VFS. The differences can be summarized as...

Advantages:

Disadvantages:

Note that the "opfs" VFS and this VFS may be used in the same application, but they will refer to different OPFS-level files even when they use the same client-level file names: this VFS does not map client-provided names directly to OPFS files, instead maintaining those names in its own metadata.

Peculiarities of this VFS:

This VFS is based on the work of Roy Hashimoto, with his blessing, specifically:

Installation

Because this VFS does not support concurrency, initializing it twice, e.g. via two tabs on the same origin, will fail for the second and subsequent instances. So that an origin can use this VFS on only select pages, without those pages being locked out by other pages which may be open, the VFS must be explicitly enabled via application-level code. At its simplest, that looks like:

await sqlite3.installOpfsSAHPoolVfs();

or:

sqlite3.installOpfsSAHPoolVfs().then((poolUtil)=>{
  // poolUtil contains utilities for managing the pool, described below.
  // VFS "opfs-sahpool" is now available, and poolUtil.OpfsSAHPoolDb
  // is a subclass of sqlite3.oo1.DB to simplify usage with
  // the oo1 API.
}).catch(...);

Installation will fail if:

installOpfsSAHPoolVfs() accepts a configuration object with any of the following options:

The resolved value of the Promise returned by installOpfsSAHPoolVfs() is described in the next section. It is abstractly referred to as PoolUtil below, though the object has no inherent name; if a reference is needed, the client must hold and name one.

The asynchronous (necessarily so) installation routine will, on success, register the VFS with the name specified in the options object. The VFS's presence can be detected using sqlite3_vfs_find(options.name). PoolUtil.OpfsSAHPoolDb is a subclass of sqlite3.oo1.DB which uses this VFS:

const db = new PoolUtil.OpfsSAHPoolDb('/filename');

Pool Management

installOpfsSAHPoolVfs() returns a Promise which resolves, on success, to a utility object which can be used to perform basic administration of the file pool (colloquially known as PoolUtil). Calling installOpfsSAHPoolVfs() more than once will resolve to the same value on second and subsequent invocations so long as the same name option is used in each invocation. Calling it with a different name will return different Promises resolving to different objects with different VFS registrations.

Its API includes, in alphabetical order...

Concurrency

The opfs-sahpool VFS cannot offer any concurrency support at the library level because it pre-allocates all potential SAHs, which immediately locks those files. However, Roy Hashimoto has written articles exploring client-level solutions to that problem:

There is yet some work to be done in this VFS to assist in implementing client-side concurrency, e.g. the ability to stop and restart the VFS.

OPFS over WASMFS

((Re-)Added in 3.43.)

An alternative to the OPFS VFS or the SyncAccessHandle Pool VFS is Emscripten's WASMFS, which supports OPFS in a very different manner from either of those VFSes. It exposes OPFS as a "mount point" (directory) on the virtual filesystem which Emscripten exposes to client code, and all files stored under that directory are housed in OPFS.

Achtung: the WASMFS is not enabled in the canonical builds of sqlite3.wasm because it requires a separate, less portable WASM build and provides little benefit over the other OPFS-using options. Building it requires a local checkout of the sqlite3 source tree and a "recent version" of the Emscripten SDK on a Linux system [3]:

$ ./configure --enable-all
$ cd ext/wasm
$ make wasmfs

The resulting deliverables are jswasm/sqlite3-wasmfs.* and (optionally) jswasm/sqlite3-opfs-async-proxy.js, though the latter is only required if clients should also have access to the OPFS VFS. Aside from the WASMFS support, it is used identically to the non-WASMFS deliverables.

Pros:

Cons:

Despite the drawbacks, the WASMFS build may be a viable option for certain types of client applications.

Brief example:

const dirName = sqlite3.capi.sqlite3_wasmfs_opfs_dir();
if( dirName ) {
  /* WASMFS OPFS is active ... All files stored under
     the path named by dirName are housed in OPFS. */
}
else {
  /* WASMFS OPFS is not available */
}

Though the mount point name is intended to stay stable, client code should never hard-code it; always use sqlite3_wasmfs_opfs_dir() to fetch it. The name will not change within the lifetime of a single session, so it may be cached for reuse. On builds which don't have WASMFS+OPFS support, that function always returns an empty string.
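A hypothetical convenience wrapper for that pattern, treating the empty string as an error rather than silently writing a db file to transient storage (the helper name and behavior are assumptions; only sqlite3_wasmfs_opfs_dir() is the library's API):

```javascript
// Hypothetical helper: resolve a db file name under the WASMFS OPFS
// mount point, throwing when WASMFS+OPFS support is unavailable.
function wasmfsDbPath(sqlite3, fileName) {
  const dir = sqlite3.capi.sqlite3_wasmfs_opfs_dir();
  if (!dir) throw new Error('WASMFS OPFS support is not available');
  // Strip any leading slashes so the name lands under the mount point.
  return dir + '/' + fileName.replace(/^\/+/, '');
}
```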

Maintaining OPFS-hosted Files

For SQLite's purposes, the OPFS API is an internal implementation detail which is not exposed directly to client code. This means, for example, that the SQLite API cannot be used to traverse the list of files stored in OPFS, nor to delete a database file [4]. Though it may initially seem feasible to provide a virtual table which provides a list of OPFS-hosted files, and the ability to delete them, that cannot work because the relevant OPFS APIs are all asynchronous, making them impossible to use together with the C-level SQLite APIs.

As of this writing, the following possibilities are known for managing such files:

OPFS Storage Limits

OPFS storage limits are generous but differ per environment. See the MDN docs on the topic for full details. This article by Patrick Brosset also covers the topic in considerable detail.

Note that, as with other storage backends, the SQLite API has no notion of what the limits are. If the limits are exceeded, SQLite will respond with generic I/O errors.

Sidebar: Cross-thread Communication via OPFS

sqlite3 over OPFS opens up a possibility in JS which does not otherwise readily exist: communication between arbitrary threads.

There are no mechanisms in JS to share state between two threads except postMessage(), SharedArrayBuffer, and (to a very limited extent) Atomics. localStorage, sessionStorage, and the long-defunct (but still extant) WebSQL are main-thread only. Presumably WebSQL was not permitted in Workers for the very reason that it would open up a communication channel between arbitrary threads.

However, if a client loads the sqlite3 module from multiple threads, they can communicate freely via an OPFS-hosted database. Mostly. Such usage would introduce file-locking contention between the threads. So long as each thread uses only very brief transactions, the automatic lock-retry mechanism will transparently account for the locking, but as soon as one thread holds a transaction open for any significant amount of time, or too many threads are contending for access, locking-related exceptions would result and would be translated as I/O errors to the C API.

Whether the capability of communicating across threads via a database is a feature or a bug is left for the client to decide.

Sidebar: Mysterious Disappearance of Databases

Users sometimes report that their OPFS databases randomly disappear. This happens for environment-specific reasons outside of this library's control, including, but not limited to:

See this forum post for some discussion about these.


  1. The whole JS/WASM effort of the sqlite project initially stemmed from interest in getting it working with OPFS.
  2. The alternative being to fail the operation.
  3. As of this writing (2023-07-13), EMSDK 3.1.42 is known to work and it is unknown whether any older versions work. WASMFS support evolved considerably in the year prior to that release, sometimes in incompatible ways.
  4. Noting that the C APIs also do not expose such platform-specific APIs.