SQLite Forum

In-memory database from sqlite file

(1) By Simone Baratta (reisub) on 2020-09-22 12:14:28 [link]

Hello everybody,

Is it possible to read an sqlite file into a memory buffer, and then use that memory buffer as the database to connect to?

Context: I have a program using the SQLite3 library which must, from time to time, read a table from an SQLite database. However, that database is usually locked in exclusive mode by another process (which I don't have control over), so reading the DB file directly is impossible. A solution I have found is to copy the file to a temporary file and then open that from SQLite; however, the overhead for this is very high (I've measured 25ms per access, versus 0.5ms if I just access the database directly without making a copy first). So a good solution would be to read the database into memory (without making an on-disk copy) and then connect to that in-memory copy from SQLite. However, looking at the docs I could only see how to create a new, empty in-memory database, which is not my use case.

Thanks in advance!

 --
Simone

(2) By curmudgeon on 2020-09-22 13:45:34 in reply to 1 [link]

Not sure if this would help:

https://www.sqlite.org/backup.html

(3) By David Raymond (dvdraymond) on 2020-09-22 13:49:19 in reply to 1 [link]

I believe you'd be looking to use the [deserialize](https://www.sqlite.org/c3ref/deserialize.html) interface.

Bear in mind though that the other process could be in the middle of writing to the file at any moment, and thus your memory copy of the file contents could be completely inconsistent and broken. There's pretty much nothing you can do to prevent that when you're skirting around the mechanisms put in place to ensure consistency.

(4) By Ryan Smith (cuz) on 2020-09-22 16:44:07 in reply to 1 [link]

I have recently done exactly this and can tell you that Serialize and Deserialize work fantastically well to achieve this.

The things you need to consider are:

1. To get the DB file INTO Serialized form, you need to open it with an SQLite connection and tell the connection to [Serialize it to bytes](https://sqlite.org/c3ref/serialize.html) - which means you can do it both from a physical disk-file or from an in-memory DB to start with.
 - I believe you *can* just read the file-bytes too (I haven't tested it), but then you have no control over the file state and whether it is open, has roll-back journals, etc. A connection is the way to know for sure you get a good serialized byte-stream, and from your description it sounds like that is what your use-case needs.
 - You can of course put these bytes into a file again or a pipe or data transport, etc.

2. To [Deserialize the bytes](https://sqlite.org/c3ref/deserialize.html) into memory, you first have to create and open an SQLite connection for a normal In-Memory DB, then give that connection as the Deserialization target DB, along with some flags and the bytes. This bit is pretty straightforward.

3. You have to compile SQLite with [SQLITE_ENABLE_DESERIALIZE - The Serialize/Deserialize switch](https://sqlite.org/compile.html#enable_deserialize). It's not on by default.


And that's it. We use this for multiple very fast uploads to a central station, which push the received bytes into in-memory DBs and extract data as needed. It's very easy, very fast and works very well.
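
For reference, a minimal C sketch of the steps above, assuming a build in which the serialize/deserialize interface is available (SQLITE_ENABLE_DESERIALIZE on older versions); error handling is abbreviated and `load_into_memory()` is a hypothetical helper name:

```
#include <sqlite3.h>

/* Sketch: copy an on-disk database into a private in-memory connection.
** load_into_memory() is a hypothetical helper, not part of the SQLite API. */
int load_into_memory(const char *path, sqlite3 **ppMem){
  sqlite3 *src = 0;
  sqlite3_int64 sz = 0;
  unsigned char *buf;
  int rc;

  /* 1. Open the disk file and serialize the "main" schema to a byte buffer. */
  rc = sqlite3_open_v2(path, &src, SQLITE_OPEN_READONLY, 0);
  if( rc!=SQLITE_OK ){ sqlite3_close(src); return rc; }
  buf = sqlite3_serialize(src, "main", &sz, 0);
  sqlite3_close(src);
  if( buf==0 ) return SQLITE_NOMEM;

  /* 2. Open a plain in-memory DB and deserialize the bytes into it.
  ** FREEONCLOSE hands ownership of buf to SQLite; RESIZEABLE lets the
  ** in-memory copy grow if it is written to later. */
  rc = sqlite3_open(":memory:", ppMem);
  if( rc!=SQLITE_OK ){ sqlite3_free(buf); return rc; }
  return sqlite3_deserialize(*ppMem, "main", buf, sz, sz,
           SQLITE_DESERIALIZE_FREEONCLOSE|SQLITE_DESERIALIZE_RESIZEABLE);
}
```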

(5) By Simone Baratta (reisub) on 2020-09-23 13:08:52 in reply to 3

Thanks so much for all your answers!

> I believe you'd be looking to use the [deserialize](https://www.sqlite.org/c3ref/deserialize.html) interface.

This looks exactly like what we need! The [Go bindings that we use](https://github.com/mattn/go-sqlite3) don't support this functionality yet, but it would be a worthy addition to discuss with the author.

> Bear in mind though that the other process could be in the middle of writing to the file at any moment, and thus your memory copy of the file contents could be completely inconsistent and broken. There's pretty much nothing you can do to prevent that when you're skirting around the mechanisms put in place to ensure consistency.

Isn't consistency one of the main advantages of a SQL database? How can a database get corrupted by copying a DB file to memory (or to a temporary file, like in our current solution) while in the middle of a transaction? What I mean is, this shouldn't be any different from a power outage that happens while the DB is being written to, and to my understanding SQLite guarantees consistency in this scenario.

Note: I am not a DB expert, so my understanding of the similarities between these two scenarios could be just completely flawed. :)

(6) By Simone Baratta (reisub) on 2020-09-23 13:11:56 in reply to 4 [link]

Thanks Ryan. 

> To get the DB file INTO Serialized form, you need to open it with an SQLite connection and tell the connection to Serialize it to bytes - which means you can do it both from a physical disk-file or from an in-memory DB to start with.

I don't think this is possible when the database has an exclusive lock? I can't even run simple commands like `.schema` when the lock is on. Which is why I fell back to the "let's make a copy and pray" approach.

> You have to compile SQLite with SQLITE_ENABLE_DESERIALIZE - The Serialize/Deserialize switch. It's not on by default.

Uhm, this is really useful to know. Thanks!

(7) By David Raymond (dvdraymond) on 2020-09-23 13:31:12 in reply to 5 [link]

Consistency is assured by use of either the rollback journal or the WAL file, depending on the journaling mode. The two files together, the main db plus the rollback journal/WAL file (if it exists), are what make up a complete, recoverable database. You can be in the middle of writing to the main file and have the main file completely messed up, but because we made a backup copy to the rollback journal, we can recover if everything dies while we're mucking around in the main file.

Have you ever done the "let me make a copy of this before I edit it, in case I mess something up" routine? The rollback journal is basically that, but at the page level rather than the whole-file level. So at any given point in time, the contents of the main file could be completely broken because we're making changes, but it's ok because we made a copy over there of everything we touched, in case we need it to get the original back.

Since you're completely ignoring the locking mechanism and just taking the file contents by brute force whether the process using it is done cleaning up after itself or not, then you can't be sure what you're grabbing is complete.

(8) By Scott Robison (casaderobison) on 2020-09-23 21:02:56 in reply to 6 [link]

You are probably correct that it would not be possible to accomplish when the database is locked for exclusive use. The reason things like .schema don't work in that context is because the database is locked, but the different SQLite instances know how to cooperate to indicate the locked status. Other processes (such as copying the file) don't know how to cooperate, and thus might get an inconsistent state of the database while it is being updated by another process.

If there is some data in the database that you need to access even while it is locked, it might make sense to have the process that locks the database for exclusive access be the one that is responsible for making a copy of the relevant data for third parties to use. Or find a way to not hold long running exclusive locks. What journal mode are you using? Perhaps it would be adequate to have one long-running write lock while everyone else holds read locks on the data, in which case maybe WAL mode would work (if all the processes are on the same computer).
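
For what it's worth, a minimal sketch of switching a database to WAL mode; the setting persists in the file, but it only helps here if the third-party writer can be run against a WAL-mode database. `enable_wal()` is a hypothetical helper name:

```
#include <sqlite3.h>

/* Sketch: put a database into WAL mode once; the mode persists in the file. */
int enable_wal(const char *path){
  sqlite3 *db = 0;
  int rc = sqlite3_open(path, &db);
  if( rc==SQLITE_OK ){
    rc = sqlite3_exec(db, "PRAGMA journal_mode=WAL;", 0, 0, 0);
  }
  sqlite3_close(db);
  return rc;
}
```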

(9) By Simone Baratta (reisub) on 2020-09-25 11:36:13 in reply to 7 [link]

Thanks for the insight David.

> Since you're completely ignoring the locking mechanism and just taking the file contents by brute force whether the process using it is done cleaning up after itself or not, then you can't be sure what you're grabbing is complete.

This is an interesting point. Does this mean that by doing this the risk is limited to losing some records (or not being able to read anything at all)? Or is this completely undefined behaviour (so we could read garbage data that is not present in the DB)?

In our specific case, the DB tables are append-only, and we are only interested in detecting new entries after they have been added, with no particular latency requirements. So if we risked missing some records that would be ok - we would just pick them up later on; but if the risk is that we read data that is wrong or not supposed to be there, then we're in trouble.

(10) By Simone Baratta (reisub) on 2020-09-25 11:38:08 in reply to 8 [link]

> If there is some data in the database that you need to access even while it is locked, it might make sense to have the process that locks the database for exclusive access be the one that is responsible for making a copy of the relevant data for third parties to use. Or find a way to not hold long running exclusive locks.

Yeah, we thought about that, but unfortunately the process holding the DB lock is a third-party process that we have no control over :( Thanks for the input anyway!

(11) By Kees Nuyt (knu) on 2020-09-25 13:41:53 in reply to 9 [link]

Even if you only append rows, you are in trouble when you happen to copy the database in the middle of a transaction, whether it is implicit or explicit.

Any insert, update or delete may affect more than one database page.

Rows are kept in a [BTree structure](https://www.sqlite.org/fileformat2.html#b_tree_pages), stored in pages.

Any modification will change a leaf page, some modifications will also change interior pages.

By copying in the middle of a transaction, you break the [atomicity](https://www.sqlite.org/atomiccommit.html) of the commit process. You may copy pages with states before and after modification. 

So, the resulting copy may be corrupt.

--
Regards,
Kees Nuyt

(12) By David Raymond (dvdraymond) on 2020-09-25 13:51:09 in reply to 9 [link]

> This is an interesting point. Does this mean that by doing this the risk is limited to losing some records (or not being able to read anything at all)? Or is this completely undefined behaviour (so we could read garbage data that is not present in the DB)?

I would assume so yes. Let's see if we can construct a problem.

We've got a 1 table database. table1 is made of a single leaf page on page 2. sqlite_master is on page 1 with a record saying table1 starts on page 2.

You're now inserting into table1, and it's grown enough that it needs a new page to hold data. Since it's now more than 1 page long, we're gonna need a new interior page as well as another leaf page. Let's put the interior page on page 3, and the new leaf page on page 4.

So we're at least gonna have to update page 1 with the new root page of table1, which is now page 3. We're gonna have to update page 3 with the pointers to pages 2 and 4. And we're gonna have to update page 4 with the newly inserted record. If we're inserting out of rowid order we might need to update page 2 with the new info, and move some records to page 4.

Depending on the order these things happen and when you jump in there, you can see all sorts of potential problems. The SQLite library doing the inserts doesn't care, it has the file locked, so from its point of view no one else should be reading.

If we update sqlite_master first before appending to the file then we're pointing to a page that doesn't exist, and you won't see any records. If we move records from page 2 to page 4 before we update sqlite_master, then you could miss those moved records because you've got the wrong root page. Pick any order you want and you can find a spot to jump in and see an incomplete or broken view.

Depending on the circumstances the time windows for getting a bad view might be tiny or they might be huge, but they're definitely there.

(13) By Gunter Hick (gunter_hick) on 2020-09-25 14:16:41 in reply to 9 [link]

When copying a file, most file systems will read the source file in order of ascending offsets (i.e. page number order). When committing a transaction, SQLite may write to several pages scattered throughout the file.

Let's say the fs has copied pages 1 to 10, and SQLite commits a transaction covering pages 1, 5, 10 and 15; when the fs is finished, you will have an inconsistent state, with pages 1, 5 and 10 from before and page 15 after a transaction. This is very likely to be a problem, especially if page 15 is an index page that now refers to an old state of a table page. Or maybe page 10 was split into pages 10 and 15...

(14) By fmzakari on 2023-11-18 23:58:19 in reply to 5 [link]

Why are in-memory databases allowed to grow, but if you deserialize a database into memory it must be explicitly granted memory for growth?

(15) By Stephan Beal (stephan) on 2023-11-19 02:24:18 in reply to 14 [link]

> Why are in-memory databases allowed to grow, but if you deserialize a database into memory it must be explicitly granted memory for growth?

(A) They can grow; check the API again. (B) It requires specific opt-in because they can only grow if the client can _guarantee_ that the memory came from one of the library's own allocators. Mixing allocators leads to undefined behavior.

Please pardon brevity and lack of a link to the docs - working from a mobile device.
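
A minimal sketch of that opt-in: the buffer has to come from sqlite3_malloc64() so SQLite may later realloc or free it. `deserialize_growable()` is a hypothetical helper name, and `data`/`n` are assumed to hold a valid serialized database image:

```
#include <sqlite3.h>
#include <string.h>

/* Sketch: deserialize a byte buffer so the resulting in-memory DB can grow. */
int deserialize_growable(sqlite3 *db, const unsigned char *data, sqlite3_int64 n){
  /* Memory handed over with FREEONCLOSE/RESIZEABLE must come from
  ** SQLite's own allocator, hence sqlite3_malloc64() rather than malloc(). */
  unsigned char *buf = sqlite3_malloc64((sqlite3_uint64)n);
  if( buf==0 ) return SQLITE_NOMEM;
  memcpy(buf, data, (size_t)n);
  return sqlite3_deserialize(db, "main", buf, n, n,
           SQLITE_DESERIALIZE_FREEONCLOSE|SQLITE_DESERIALIZE_RESIZEABLE);
}
```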

(16) By David Jones (vman59) on 2023-11-20 14:39:45 in reply to 6 [link]

Since 3.36 the serialize functions are enabled by default; you compile with SQLITE_OMIT_DESERIALIZE to disable them.

(17) By skywalk on 2023-11-21 17:48:28 in reply to 10 [link]

Interesting topic.
How long are the locks?
Can you open an in-memory database and ATTACH your file-based db?
Then run your queries?
Maybe this is too slow for you?
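
A rough sketch of that idea, with hypothetical helper and table names (`snapshot_table()`, `table1`); note that ATTACH still needs a read lock, so it will fail while the other process holds its exclusive lock:

```
#include <sqlite3.h>
#include <stdio.h>

/* Sketch: open an in-memory DB, ATTACH the on-disk file, copy one table,
** then detach and query the in-memory copy. */
int snapshot_table(const char *path){
  sqlite3 *db = 0;
  char *zSql, *zErr = 0;
  int rc = sqlite3_open(":memory:", &db);
  if( rc!=SQLITE_OK ) return rc;

  zSql = sqlite3_mprintf(
    "ATTACH %Q AS src;"
    "CREATE TABLE snapshot AS SELECT * FROM src.table1;"
    "DETACH src;", path);
  rc = sqlite3_exec(db, zSql, 0, 0, &zErr);
  if( zErr ){ fprintf(stderr, "%s\n", zErr); sqlite3_free(zErr); }
  sqlite3_free(zSql);
  /* ... run queries against "snapshot" here ... */
  sqlite3_close(db);
  return rc;
}
```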