SQLite Forum

"database or disk is full" with in-memory database. Suggested next steps?

(1) By Philip O'Toole (otoolep) on 2021-08-19 20:29:14

I'm the creator of rqlite, the distributed database that uses SQLite as its database engine. The following issue was filed against the software: "database or disk is full" on insert or create index.

The only suggestions I could find on the web relate to on-disk databases. But rqlite uses an in-memory SQLite database (vfs=memdb), so it's not clear how to address this issue.

Any help appreciated.

Thanks,

Philip

(2.1) By Keith Medcalf (kmedcalf) on 2021-08-19 22:31:49 edited from 2.0 in reply to 1

The memdb.c code returns that result code (SQLITE_FULL) when the database is full (no more memory is available or the maximum page count is exceeded).

The solution is to buy more RAM and/or to make sure that, once you have more RAM, it can be accessed by the process.

If the process is using 32-bit addresses then the maximum memdb size is 1 GB.
If the process is using 48-bit addresses then the maximum memdb size is 32 TB.
If the process is using 64-bit addresses then the maximum memdb size is 1,048,576 TB.

Presently, computer technology uses 48-bit addressing or less.

Limits on RAM usage per process can also be set in the Operating System.

In this case it sounds like the memdb is limited to 1 GB either because (1) it ran out of virtual memory, (2) it is only using 32-bit address space, or (3) the Operating System has been configured to not permit the process access to more memory.

Note also that the memdb maximum size is configurable.

(7) By dissidia (dissidia986) on 2021-08-20 03:25:55 in reply to 2.1

So how do I check the OS limit? Right now I'm using the server configuration below:

$ lsb_release -a
LSB Version:	:core-4.1-amd64:core-4.1-noarch
Distributor ID:	CentOS
Description:	CentOS Linux release 7.6.1810 (Core)
Release:	7.6.1810
Codename:	Core
$ lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                8
On-line CPU(s) list:   0-7
Thread(s) per core:    2
Core(s) per socket:    4
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 85
Model name:            Intel(R) Xeon(R) Platinum 8163 CPU @ 2.50GHz
Stepping:              4
CPU MHz:               2494.140
BogoMIPS:              4988.28
Hypervisor vendor:     KVM
Virtualization type:   full
L1d cache:             32K
L1i cache:             32K
L2 cache:              1024K
L3 cache:              33792K
NUMA node0 CPU(s):     0-7
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl eagerfpu pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 spec_ctrl intel_stibp
$ lsmem
RANGE                                  SIZE  STATE REMOVABLE   BLOCK
0x0000000000000000-0x0000000057ffffff  1.4G online        no    0-10
0x0000000058000000-0x000000005fffffff  128M online       yes      11
0x0000000060000000-0x000000007fffffff  512M online        no   12-15
0x0000000080000000-0x0000000087ffffff  128M online       yes      16
0x0000000088000000-0x0000000097ffffff  256M online        no   17-18
0x0000000098000000-0x000000009fffffff  128M online       yes      19
0x00000000a0000000-0x00000000bfffffff  512M online        no   20-23
0x0000000100000000-0x0000000187ffffff  2.1G online        no   32-48
0x0000000188000000-0x000000018fffffff  128M online       yes      49
0x0000000190000000-0x0000000197ffffff  128M online        no      50
0x0000000198000000-0x00000001a7ffffff  256M online       yes   51-52
0x00000001a8000000-0x00000002dfffffff  4.9G online        no   53-91
0x00000002e0000000-0x00000002e7ffffff  128M online       yes      92
0x00000002e8000000-0x000000055fffffff  9.9G online        no  93-171
0x0000000560000000-0x0000000567ffffff  128M online       yes     172
0x0000000568000000-0x000000064fffffff  3.6G online        no 173-201
0x0000000650000000-0x0000000657ffffff  128M online       yes     202
0x0000000658000000-0x000000078fffffff  4.9G online        no 203-241
0x0000000790000000-0x0000000797ffffff  128M online       yes     242
0x0000000798000000-0x00000007b7ffffff  512M online        no 243-246
0x00000007b8000000-0x00000007bfffffff  128M online       yes     247
0x00000007c0000000-0x000000083fffffff    2G online        no 248-263

(3) By RandomCoder on 2021-08-19 21:57:19 in reply to 1

I think this stems from rqlite's use of sqlite3_deserialize(). When using this, the resulting memory database is limited by SQLITE_CONFIG_MEMDB_MAXSIZE, which is documented as defaulting to 1073741824 bytes (1 GiB).

You could try increasing that value to see if it solves the issue for this user.

For instance, running a 64-bit build of SQLite on a machine with 256 GB of RAM, it's fairly easy to hit this limit:

$ sqlite3 test.db
SQLite version 3.36.0 2021-06-18 18:36:39
Enter ".help" for usage hints.
sqlite> create table a(x);
sqlite> create table b(x);
sqlite> insert into a values ('abcdefghijklmnopqrstuvwxyz');
sqlite> .exit

$ sqlite3 --deserialize test.db
SQLite version 3.36.0 2021-06-18 18:36:39
Enter ".help" for usage hints.
sqlite> .schema
CREATE TABLE a(x);
CREATE TABLE b(x);
sqlite> insert into aux1.b select x from aux1.a;insert into aux1.a select x from aux1.b;
Error: no such table: aux1.b
sqlite> insert into b select x from a;insert into a select x from b;
[ .. repeated command 16 times .. ]
sqlite> insert into b select x from a;insert into a select x from b;
Error: database or disk is full
sqlite>

SQLite errored out at around 1 GB of RAM usage. If I use a memory database directly, SQLite will use much more memory as I repeat the doubling of these two tables.

(4) By Keith Medcalf (kmedcalf) on 2021-08-19 22:14:29 in reply to 3

All memdb databases have a size limit specified by SQLITE_CONFIG_MEMDB_MAXSIZE which, unless overridden with some different value when building SQLite3, has a default value of 1 GB.

Therefore, if you are certain that you will have more than 1 GB of memory available, simply recompile with the size limit set as you wish.

Note also that you can vary this limit at runtime using sqlite3_config before SQLite3 is initialized. See https://sqlite.org/c3ref/config.html
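
For illustration, here is a minimal sketch of that runtime call (not anyone's actual code from this thread; the 4 GiB value is just an example):

#include <stdio.h>
#include "sqlite3.h"

int main(void){
  /* Must run before sqlite3_initialize() or any other use of the library. */
  int rc = sqlite3_config(SQLITE_CONFIG_MEMDB_MAXSIZE,
                          (sqlite3_int64)4*1024*1024*1024);   /* 4 GiB */
  if( rc!=SQLITE_OK ){
    fprintf(stderr, "sqlite3_config(SQLITE_CONFIG_MEMDB_MAXSIZE) failed: %d\n", rc);
    return 1;
  }
  if( sqlite3_initialize()!=SQLITE_OK ) return 1;
  /* Databases opened with vfs=memdb (or created via sqlite3_deserialize())
  ** may now grow to 4 GiB before SQLITE_FULL is returned. */
  return 0;
}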

(5) By Keith Medcalf (kmedcalf) on 2021-08-19 22:28:58 in reply to 4

These sessions use the default (1 GB) and a modified (4 GB) SQLITE_CONFIG_MEMDB_MAXSIZE, respectively:

SQLite version 3.37.0 2021-08-18 18:05:52
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite> .open "file:///x?vfs=memdb"
sqlite> create table x(x);
sqlite> insert into x select value from wholenumber where value between 1 and 100000000;
Error: database or disk is full
sqlite> ^Z


SQLite version 3.37.0 2021-08-19 19:52:39
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite> .open "file:///x?vfs=memdb"
sqlite> create table x(x);
sqlite> insert into x select value from wholenumber where value between 1 and 100000000;
sqlite>

Note that table x will take about 2 GB of space ...

(6) By Philip O'Toole (otoolep) on 2021-08-20 00:19:43 in reply to 5

Thanks all -- very helpful.

In rqlite, the SQLite database is not the source of truth (the Raft log is). So, is there any issue with simply pushing the limit up at start-up to, say, something large like 32 GB? Perhaps more? Does SQLite allocate the memory at startup? If not, maybe this would allow folks to use all the memory on the machine (and use other techniques to limit it), while smaller databases would still work fine. Worst case the OS kills the rqlite process (OOM), but that doesn't matter -- the authoritative data is on disk in the Raft log.

A goal of rqlite is simple operation. I can always allow users to increase the limit at the rqlite command line, but it would be great if they didn't have to.

I guess I could just test it myself. Any traps?

(8) By Richard Hipp (drh) on 2021-08-20 07:19:22 in reply to 6

Though SQLite supports very large databases, including in-memory databases, the maximum amount of memory that SQLite will itself allocate in a single go is 2 GiB. If you want a deserialized database that is larger than that, you will need to allocate the memory yourself (using some mechanism other than sqlite3_malloc64(), which will fail for such large allocations) and pass the memory directly to sqlite3_deserialize(). You won't be able to use the SQLITE_DESERIALIZE_FREEONCLOSE or SQLITE_DESERIALIZE_RESIZEABLE flags.
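
For concreteness, a rough sketch of that approach (not rqlite's actual code; deserialize_large, pBuf and szImage are placeholder names):

#include "sqlite3.h"

/* db is an already-open connection. pBuf was allocated by the application
** (e.g. with plain malloc(), not sqlite3_malloc64()) and holds szImage
** bytes of serialized database content. */
int deserialize_large(sqlite3 *db, unsigned char *pBuf, sqlite3_int64 szImage){
  /* No SQLITE_DESERIALIZE_FREEONCLOSE: the application keeps ownership of
  ** pBuf and must free it itself, after the connection is closed.
  ** No SQLITE_DESERIALIZE_RESIZEABLE: the database cannot grow beyond
  ** szImage bytes. */
  return sqlite3_deserialize(db, "main", pBuf, szImage, szImage, 0);
}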

(9) By Philip O'Toole (otoolep) on 2021-08-20 13:16:57 in reply to 8

Thank you -- do similar concerns also apply to sqlite3_serialize()? The docs imply that sqlite3_malloc64() will be called to allocate the memory for the returned serialized copy. So if the given in-memory database is greater than 2 GiB, will serialization fail?

rqlite does not set SQLITE_SERIALIZE_NOCOPY during its call to sqlite3_serialize(), and I do see calls to sqlite3_malloc64() in sqlite3_serialize(), though I have not studied the code in depth.
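
For reference, the difference between the two modes looks roughly like this (a sketch based on the documented API, not rqlite's code; the function names are made up):

#include "sqlite3.h"

/* Copying mode: SQLite allocates a buffer with sqlite3_malloc64() and
** copies the database into it; the caller must sqlite3_free() the result. */
unsigned char *serialize_copy(sqlite3 *db, sqlite3_int64 *pSize){
  return sqlite3_serialize(db, "main", pSize, 0);
}

/* No-copy mode: for an in-memory (memdb) database this returns a pointer
** to the memory SQLite is already using -- no allocation is made and the
** result must NOT be freed. Returns NULL if no contiguous in-memory
** representation of the database exists. */
unsigned char *serialize_nocopy(sqlite3 *db, sqlite3_int64 *pSize){
  return sqlite3_serialize(db, "main", pSize, SQLITE_SERIALIZE_NOCOPY);
}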

(10) By J.M. Aranda (JMAranda) on 2021-08-26 11:51:03 in reply to 1

Spreading the tables across multiple databases is a good practice.

(11) By Philip O'Toole (otoolep) on 2021-09-08 13:07:19 in reply to 1

So I now tell users of rqlite to work around this limitation by a) creating a RAM-backed file system, and b) using a normal on-disk SQLite database, but placing the SQLite file on that RAM-backed file system.

My docs: https://github.com/rqlite/rqlite/blob/master/DOC/PERFORMANCE.md#in-memory-database-limits

Hopefully this makes sense. Testing by some of my users tells me it works fine.