SQLite Forum


50 most recent forum posts by user slavin

13:26 Reply: Multi-connection access to database on USB drive (artifact: 1d6752587a user: slavin)

The way Flash actually works is very slow. It takes a long time – many milliseconds – to write to one part of storage, and a SQLite update requires writing to many parts of storage. As a consequence the drive accepts many 'write' commands, responds to each one instantly as 'I have completed this', then does the writing in its own time, in whatever order it finds convenient ("out-of-order writing").

If a Flash drive waited for each write command to be complete before it responded 'that write is complete', you would find the drive unusably slow. The fact that it takes a long time for writes to be actually complete is the reason why you have to tell the OS "eject drive" and then wait for a while before it tells you it's safe to eject the drive. The OS is waiting for the drive to tell it that all writes are really complete.

There is no way SQLite can be ACID under conditions like this, since the storage system is not only lying to it about which writes are complete, but doesn't even do the writes in the right order.

You may think you've found a pattern of writing, and some SQLite settings, which result in uncorrupted databases, but you've actually just found something that works under one scenario. Use different hardware. Or have another program writing at the same time (even to a different drive). Or have the caches start full instead of empty, or have the caches start empty instead of full. Use a faster or slower Flash drive. Do updates faster or slower. Then pull the Flash drive while it's being written to. Any of these can result in a corrupt database.

Sorry, but consumer-grade Flash drives, hardware failure (e.g. pulling the drive before the OS says it's ready) and ACID don't mix. Pick any two.

10:48 Edit reply: Finding the SQLite version for a feature (artifact: af5070cdd1 user: slavin)

That actually works, and identifies

2005-03-21 (3.2.0)

Nice. I wish it worked for all features and significant improvements.

Given that the feature was added in 2005, and the iPhone was first released in 2007, I don't think you're going to have to have your users check it.

10:48 Edit reply: Finding the SQLite version for a feature (artifact: 4c894c1b59 user: slavin)

That actually works, and identifies

2005-03-21 (3.2.0)

Nice. I wish it worked for all features and significant improvements.

Given that the feature was added in 2005, and the iPhone was first released in 2007, I don't think you're going to have to have your students write code to check it. But mentioning that it was a concern and how you checked it out would be educational.

10:45 Reply: Finding the SQLite version for a feature (artifact: 71ee7053b7 user: slavin)

That actually works, and identifies

2005-03-21 (3.2.0)

Nice. I wish it worked for all features and significant improvements.

10:43 Reply: Stale data returned on subsequent select query (artifact: 912eaecd8e user: slavin)

I agree with Stephan Beal's reply. This can't happen with bare SQLite or with a SQLite shim which just gives access to the SQLite API. It must be something to do with the language-based library you're using.

Can I ask where you found the 'sleep of 1 secs' idea ? Can you point us to some documentation that mentions it ? I'm curious because there is no such idea in SQLite. SQLite has no concept of waiting a fixed time for a change to become visible. Knowing where it is implemented may point us to the cause of your problem.

03:00 Reply: One connection per function or one shared connection? (artifact: 3a1a61d780 user: slavin)

Separate connections would be slower, as you thought.

One difference between one connection and multiple connections is that one connection has one transaction. If your API can do only individual SQL change commands, with no manual BEGIN or COMMIT, there is no difference: SQLite will wrap each one in its own transaction. But if you use BEGIN or END then you should know that they will affect all commands sent via the same connection: all commands until the END will be bunched up as part of the same transaction.

Whether you do or do not want this to happen might settle whether you should use separate connections or not.
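Seen from a language binding, the shared-transaction behaviour looks like this. A minimal Python sketch using the stdlib sqlite3 module; the table and values are made up:

```python
import sqlite3

# One connection, explicit transaction control (isolation_level=None = autocommit
# until we issue our own BEGIN).
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE t (x INT)")

conn.execute("BEGIN")
conn.execute("INSERT INTO t VALUES (1)")

# A second cursor on the SAME connection is inside the SAME transaction:
cur = conn.cursor()
cur.execute("INSERT INTO t VALUES (2)")

conn.execute("ROLLBACK")   # undoes BOTH inserts, whichever cursor made them

print(conn.execute("SELECT COUNT(*) FROM t").fetchone()[0])   # 0
```

Two separate connections would each have had their own transaction, and the rollback would have affected only one of them.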

10:50 Reply: column constraint ascending primary key with text type (artifact: 13919cd1c4 user: slavin)

It can be done with two TRIGGERs. The method depends on another TABLE (let's call it 'Config') having a fixed row to keep track of 'last date recorded'.

A TRIGGER AFTER INSERT takes a copy of the new value and writes it to a fixed row of another TABLE. A TRIGGER BEFORE INSERT looks up the current value in the other TABLE and makes the INSERT fail if the new value is too low.

However, this depends on the TABLE you're inserting new rows into never having rows UPDATEd or DELETEd. It also means that your program would have to have an initialisation routine to create the Config table and the appropriate row in it.
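A minimal sketch of the two-trigger scheme, run here through Python's stdlib sqlite3 module. The table of dated events, the column names, and the error message are all made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE events (d TEXT);
CREATE TABLE Config (last TEXT);          -- one fixed row tracking 'last date recorded'
INSERT INTO Config VALUES ('');

-- BEFORE INSERT: reject any value not above the recorded maximum
CREATE TRIGGER events_check BEFORE INSERT ON events BEGIN
    SELECT RAISE(ABORT, 'date not increasing')
    WHERE NEW.d <= (SELECT last FROM Config);
END;

-- AFTER INSERT: record the value just accepted
CREATE TRIGGER events_track AFTER INSERT ON events BEGIN
    UPDATE Config SET last = NEW.d;
END;
""")

conn.execute("INSERT INTO events VALUES ('2021-01-02')")
conn.execute("INSERT INTO events VALUES ('2021-01-05')")
try:
    conn.execute("INSERT INTO events VALUES ('2021-01-03')")   # too low: rejected
except sqlite3.IntegrityError as e:
    print(e)   # date not increasing
```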

12:18 Post: Subtle SQLite compliment (artifact: 5d24d35655 user: slavin)

Extract from https://paperless.blog/faux-idempotency:

The upshot is that a process being “idempotent” comes with a huge caveat: unless you’re working on extremely high quality code (I’m talking SQLite or Mars rover, not 100% test coverage and linted) there are probably many ways in and circumstances under which your code isn’t idempotent

12:24 Post: Minor fault in CLI documentation (artifact: 745f7e63b3 user: slavin)

In this section


there's a table which lists the options for the .expert command. The listing for the second option --sample allows a linebreak between the '--' and the 'sample'. Experienced programmers won't have a problem but less experienced ones won't understand what it means.

I also note that, when reading the section as a naïve user, it wasn't obvious to me if I should be using --sample 100 or just leaving out that option entirely. Without understanding how SQLite works, I couldn't tell which was a good option to start with. Perhaps part of that section might be rewritten to correct that problem, assuming a not-huge database and an average desktop computer.

12:14 Reply: Why is this query so slow, but only in SQLite? (artifact: af2a1ad7c1 user: slavin)

You have not created any indexes to help speed up the query. And SQLite won't create permanent indexes itself because it doesn't know how much storage space you have to spare. Read this section:


Create the indexes it suggests, and run your query again.

By the way, if you haven't already used the Command Line Shell for SQLite (which is what that page is about) I recommend it. It's a vital tool for exploring your database, and learning SQLite without having to write your own program every time.

22:34 Reply: Distinguishing CREATE TABLE failure (already exists) from othe failures (artifact: ac4d4c9ec1 user: slavin)

But this does not agree with another post of yours in this thread where you say you want to know what issued the CREATE TABLE command.

You can find the exact CREATE TABLE command used (unless someone later used one of the ALTER TABLE commands to modify the schema of that table). Would that do what you want ? If not, what are you actually trying to do ?
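The exact DDL text can be read back from sqlite_master. A Python sketch; the table name is made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (id INTEGER PRIMARY KEY, total REAL)")

# sqlite_master (a.k.a. sqlite_schema) stores the original CREATE TABLE text verbatim:
sql = conn.execute(
    "SELECT sql FROM sqlite_master WHERE type = 'table' AND name = 'invoices'"
).fetchone()[0]
print(sql)   # CREATE TABLE invoices (id INTEGER PRIMARY KEY, total REAL)
```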

19:14 Post: Why coding for all versions of C is hard (artifact: 261ec9ce2d user: slavin)

Those of you interested only in practical use and problems with SQLite please excuse me, and skip this post.

An interesting paper on ISO C was made available today:


How ISO C became unusable for operating systems development

Click 'PDF' link top right for the actual paper with no registration and no paywall.

Though the paper wasn't written with this in mind, it describes in detail how differences in C compiler strategies can lead to serious problems in resulting code. It relates to numerous posts on this forum complaining "My compiler generates warnings/errors when compiling SQLite." or "Your code fails static analysis because it misuses pointers.".

It's especially fun reading Dennis Ritchie criticising drafts of proposed C 'improvements'.

I was amused to notice that C documentation sets the tone for SQLite documentation. Not only are certain things undefined, but certain things are specifically declared as undefined, and implementations of C are required not to define them.

13:06 Reply: Is it possible to fix a more than 1G corrupted SQLite file with only the first 16K content corrupted? (artifact: beb674dc49 user: slavin)

This was my idea too. Working only on copies of the files, to prevent making the originals worse …

  1. Find an older, still-readable copy of the database which has since become corrupted.
  2. Copy the first 16K of it.
  3. Overwrite the first 16K of the corrupted database with that copy.
  4. Open the resulting file in the SQLite command-line shell. Does it open ?
  5. If so, run an integrity_check on it. Does it pass ?
  6. Use the .recover command on it. Do you get anything useful back ?

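Steps 2 and 3 above amount to patching the first 16K of one file with bytes from another. A rough Python sketch, demonstrated on throwaway stand-in files rather than real databases (the 16K figure comes from the thread title; this only helps if both copies share the same page size and layout):

```python
import os
import tempfile

PAGE_BYTES = 16 * 1024

def transplant_header(good_copy, bad_copy, n=PAGE_BYTES):
    """Overwrite the first n bytes of bad_copy with the first n bytes of good_copy."""
    with open(good_copy, "rb") as src:
        head = src.read(n)
    with open(bad_copy, "r+b") as dst:   # r+b: patch in place, keep the rest of the file
        dst.write(head)

# Demo with made-up stand-in files:
workdir = tempfile.mkdtemp()
good = os.path.join(workdir, "older_copy.db")
bad = os.path.join(workdir, "corrupt_copy.db")
with open(good, "wb") as f:
    f.write(b"G" * PAGE_BYTES + b"good tail")
with open(bad, "wb") as f:
    f.write(b"B" * PAGE_BYTES + b"bad tail")

transplant_header(good, bad)
```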
17:10 Reply: feature request: vacuum into without indexes (artifact: 44ba6662b9 user: slavin)

I got it from the section on INSERT INTO … SELECT on this page:


To avoid a parsing ambiguity, the SELECT statement should always contain a WHERE clause, even if that clause is simply "WHERE true", if the upsert-clause is present.

Unfortunately, I seem to have missed the bit about UPSERT. I thought it was true for all INSERT INTO … SELECT commands. Thanks for the heads-up.

15:01 Reply: feature request: vacuum into without indexes (artifact: 253a9cfca2 user: slavin)

You are correct that INSERT INTO ... SELECT * FROM ...; is optimized in certain ways.

The order in which rows are inserted is not documented. That means that even if I told you what the current version of SQLite did, it might change in future versions. However, if I understand the documentation properly, if you have an ORDER BY clause in your SELECT, this will be respected. Note that this might slow the process down, though.

The SELECT part of the statement should always have a WHERE clause, even if it's just WHERE true.

13:47 Reply: Regarding memory mapped i/o (artifact: 6bed871652 user: slavin)

You should read this page


Given your question, note especially

Performance does not always increase with memory-mapped I/O. In fact, it is possible to construct test cases where performance is reduced by the use of memory-mapped I/O.

If you have specific questions, please ask, giving us enough information about your setup to be able to make good guesses.

13:19 Reply: Distinguishing CREATE TABLE failure (already exists) from othe failures (artifact: 37bc8833b7 user: slavin)

You could use this PRAGMA:

PRAGMA table_list(table-name)

and check the number of rows returned. If the table exists you'll get one row back, if it doesn't you'll get zero rows back. If there's a problem with the connection you'll get a different result rather than SQLITE_OK. The fact that there are three results should let you distinguish between the different possibilities.
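A sketch of that check from Python. Note that PRAGMA table_list needs SQLite 3.37 or later, so this version falls back to querying sqlite_master on older library builds:

```python
import sqlite3

def table_exists(conn, name):
    """True if a table or view of this name exists on the connection."""
    if sqlite3.sqlite_version_info >= (3, 37, 0):
        # Table-valued form of PRAGMA table_list, filtered to one name
        rows = conn.execute(
            "SELECT * FROM pragma_table_list WHERE name = ?", (name,)
        ).fetchall()
    else:
        rows = conn.execute(
            "SELECT name FROM sqlite_master "
            "WHERE type IN ('table', 'view') AND name = ?", (name,)
        ).fetchall()
    return len(rows) >= 1

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INT)")
print(table_exists(conn, "t"), table_exists(conn, "missing"))   # True False
```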


22:51 Reply: Query about backup API (artifact: 60bb0dc336 user: slavin)

Sorry, I couldn't possibly guess which is faster, or how that might change depending on whether your two databases are on different media. Too much depends on your hardware setup.

Also, it's difficult to assess the performance of the backup API because every time another process makes a change to the database it has to restart.

13:52 Reply: USE statement, or a way to set default schema (artifact: fa41692f3c user: slavin)

I'm not up on how SQLite works internally, but it seems likely that having to have every SQL statement look up which schema it should use for the default would slow down SQLite.

You should be able to create an empty database and open that as your 'main' database. Then you open 'real' databases using the ATTACH command, which allows you to set whatever schema names (within reason) you want for each one.

That way you'll never use the main database, which means the default schema has no meaning.
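A sketch of the arrangement in Python; the schema name 'people' and the table are made up:

```python
import os
import sqlite3
import tempfile

# ':memory:' serves as the throwaway 'main' database that is never written to.
conn = sqlite3.connect(":memory:")

# Open the 'real' database under a schema name of our choosing.
path = os.path.join(tempfile.mkdtemp(), "people.db")
conn.execute("ATTACH DATABASE ? AS people", (path,))

conn.execute("CREATE TABLE people.names (n TEXT)")
conn.execute("INSERT INTO people.names VALUES ('Alice')")
conn.commit()

print(conn.execute("SELECT n FROM people.names").fetchone()[0])   # Alice
```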

09:55 Reply: Query about backup API (artifact: 7a2b846893 user: slavin)

The backup API backs up the entire database. Not rows of data.

If you just want to add the rows of one table to the existing rows of another table you could attach the destination database to your in-memory connection, and then use

INSERT INTO destination.myTable SELECT * FROM myTable;

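Putting the ATTACH and the INSERT together, a Python sketch; the paths, schema name and table layout are made up:

```python
import os
import sqlite3
import tempfile

dest_path = os.path.join(tempfile.mkdtemp(), "dest.db")

# Pre-populate the on-disk destination table.
dest = sqlite3.connect(dest_path)
dest.execute("CREATE TABLE myTable (v INT)")
dest.execute("INSERT INTO myTable VALUES (1)")
dest.commit()
dest.close()

# In-memory source with the same table layout.
mem = sqlite3.connect(":memory:")
mem.execute("CREATE TABLE myTable (v INT)")
mem.executemany("INSERT INTO myTable VALUES (?)", [(2,), (3,)])
mem.commit()   # ATTACH cannot run inside an open transaction

mem.execute("ATTACH DATABASE ? AS destination", (dest_path,))
mem.execute("INSERT INTO destination.myTable SELECT * FROM myTable")
mem.commit()

print(mem.execute("SELECT COUNT(*) FROM destination.myTable").fetchone()[0])   # 3
```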
08:15 Reply: Release db file (artifact: 109b2854b2 user: slavin)

Alex's post reminded me of another thing that can mess up file release: antivirus and anti-malware products. These can notice that a file has changed, then immediately open it to inspect it for bad patterns. So instead of your database-changing program which has the file open it might be something else.

Next time it happens use Process Explorer (or whatever it's called in your version of your OS) to see which program has the file open.

09:26 Reply: Release db file (artifact: 67bc22eaef user: slavin)

You do not have to do anything to 'quit' SQLite to close all the files it has open. If you have closed all your database connections, it should have closed all files and done attendant journal-file cleanup.

However, the sqlite3_close() call sometimes doesn't really close the connections. One thing that stops them is statements which have not completed. To make sure this doesn't happen be sure to use sqlite3_reset() or sqlite3_finalize() on all statements you prepared with sqlite3_prepare(). If you never use sqlite3_prepare() and used sqlite3_exec() instead, you don't have to worry about this.

There are similar problems with unclosed BLOB handlers, backup handlers, and probably others I've forgotten.

Depending on your journal mode, you may be able to figure out whether the connections were closed properly by checking to see if a journal file still exists. This can be convenient because it can be done by a shell script which cannot make SQLite calls. I've sometimes chosen journal modes specifically to allow this.
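For example, in WAL mode the -wal file normally disappears on a clean close of the last connection, so its presence afterwards is a hint that something was left open. A Python sketch of that check (filenames made up):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "app.db")
conn = sqlite3.connect(path)
conn.execute("PRAGMA journal_mode=WAL")
conn.execute("CREATE TABLE t (a INT)")
conn.execute("INSERT INTO t VALUES (1)")
conn.commit()

wal = path + "-wal"
wal_existed_while_open = os.path.exists(wal)
print(wal_existed_while_open)     # True: writes have gone through the WAL file

conn.close()
print(os.path.exists(wal))        # False: a clean close removed it
```

A plain shell script can do the same existence check without making any SQLite calls, which is the convenience mentioned above.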

(Assume '_v2' sprinkled liberally in the above API calls. The above is simplified for brevity.)

18:19 Reply: SELECT with LIMIT OFFSET clause return unexpected result (with sample db and query) (artifact: 1ee4f83555 user: slavin)

I fail to reproduce the above in SQLite 3.36.0:

simon@183 Desktop % sqlite3 test.db
SQLite version 3.36.0 2021-06-18 18:58:49
Enter ".help" for usage hints.
sqlite> INSERT INTO "series" ("id","acqtime") VALUES (1,'2021-01-02T22:33:44');
sqlite> INSERT INTO "series" ("id","acqtime") VALUES (2,'2021-01-03T01:12:23');
sqlite> INSERT INTO "series" ("id","acqtime") VALUES (3,'2021-01-03T01:02:41');
sqlite> INSERT INTO "series" ("id","acqtime") VALUES (4,'2021-01-03T01:47:55');
sqlite> .mode table
sqlite> SELECT * FROM series;
| id |       acqtime       |
| 1  | 2021-01-02T22:33:44 |
| 2  | 2021-01-03T01:12:23 |
| 3  | 2021-01-03T01:02:41 |
| 4  | 2021-01-03T01:47:55 |
sqlite> SELECT DISTINCT date(series.acqtime)
   ...>          FROM series
   ...>         ORDER BY date(series.acqtime) DESC;
| date(series.acqtime) |
| 2021-01-03           |
| 2021-01-02           |
sqlite> SELECT DISTINCT date(series.acqtime)
   ...>          FROM series
   ...>         ORDER BY date(series.acqtime) DESC
   ...> LIMIT 1 OFFSET 400;

I get the expected zero-row result to the above SELECT whereas the OP says he gets four rows. Did I misunderstand the query ?

10:51 Reply: intterupted vacuum (artifact: 88795fc431 user: slavin)

My guess is that it's not resuming the VACUUM, it's trying to check/rescue data from your corrupt database using both the database file and any journal files. Since a lot of new data was written, and it all has to be read, the rescue takes a lot of time.

Make sure you have at least twice as much free space on the volume as the database file takes up. Use any SQLite program to open the corrupted database and let the rescue procedure take as long as it takes. Then allow the program to quit normally. You would not lose any data if this is allowed to proceed.

Do not use any more VACUUM commands until opening the database takes the normal amount of time, and you have a lot of free space on that volume.

01:52 Reply: Optimize sqllite database (artifact: 25e0e5c4e7 user: slavin)

… or perhaps doesn't like the fact that many columns of the last row are blank instead of holding the values which obviously should be there.

22:49 Reply: Optimize sqllite database (artifact: e8b2dd0aae user: slavin)

I downloaded your database. I can't run the software included with it, but I opened the included database in the SQLite command line shell.

The database passes the integrity-check. The longest table has 1757390 rows. It displays without problems in the SQLite command line shell. The last row in the table has id_jim = '1757390' and the other columns are blank, suggesting something went wrong with the input procedure.

There's nothing wrong with the data. If the program you're using is doing something strange, the problem is with the program. I suspect it wasn't designed to handle a table with 1757390 rows in it.

23:30 Reply: recommendation for reclaiming the db when disk is full (artifact: b5e53e38cb user: slavin)

Backups are only about a rescue. You shouldn't need to rescue. If your device requires a rescue it is a failing device and you are a bad designer. The device should not fail in the first place, which is the (2) you described.

This is not specific to SQLite, it's a general problem with any storage device on any computer. If you let the device get full you need human interaction to proceed, because a computer-controlled process of releasing space requires temporarily using more space. So it will fail, because there is no more space to use. And then your only way to solve the problem is to have a human do it.

Because of this you don't let a storage device get completely full. You write your program, or write another special monitoring program, so that it spots that the device is nearly full and does something about it.

Otherwise your device will hang or crash, and rebooting it will not cure the problem, because as soon as the device is booted it writes to a log file, and one day that writing will fail because storage for the log file is full.

Programming is simple. It can be done by idiots, and often is. Designing something which works long-term requires intelligence and experience. Experience is what you're getting now. This is what makes the difference between earning $20,000 a year and earning $80,000 a year.

07:38 Reply: recommendation for reclaiming the db when disk is full (artifact: 9128096514 user: slavin)

I answer the problem you said you had in two parts:

Copy your database and make sure your copy is not corrupt.

You can't do anything useful on a computer with full memory. If you are using an 'in memory sqlite3 database' then SQLite is only one contributor to it getting a 'disk full error'. You have a number of things taking up memory. Make one of them take up less memory until you have some free memory to rescue yourself from your situation. Once you have done that you can proceed.

Can your embedded device use anything but memory for storage ? For instance can it address a Flash drive plugged into it ? If so, make a copy of the database and any associated files SQLite has made (any file in the same folder which has a similar name) on the Flash drive. Take the Flash drive, plug it into a proper computer, then use that computer to check and correct the database. Once it is uncorrupted, do something to replace the 'in memory' database with this new uncorrupt copy.

If your device has no way to address external storage you're going to have to write C code to do something equivalent. Unless your device can run the SQLite command line shell tool, which seems unlikely on an embedded device.

Prevent this happening in future.

Just as you wrote. Figure out how much memory you'd need to get yourself out of a 'full memory' situation, double it, and write some code that monitors the amount of free memory and does something when free memory drops below that amount. This is not specifically to do with SQLite, it's something you do with any computer which continually uses up more memory. Since the monitoring process would not involve SQLite (though the part which frees up memory might), I have no advice for you on how to do it.

If your embedded device is logging data into the SQLite database, or doing anything else which will continually make the database bigger, you should think through how to have this happen in an orderly and convenient way. Alternatively write your database to storage, presumably Flash storage, rather than memory.

13:49 Reply: Linear algebra extention to core SQLite (artifact: 60aab08536 user: slavin)

Represent your matrices in JSON format, and write a bunch of functions to manipulate them. It won't be fast but it'll work and it doesn't require any changes to SQLite.
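A toy version of that idea, using Python's create_function as the shim. The function name mat_add and the element-wise addition are made up for illustration; a real library would need many such functions:

```python
import json
import sqlite3

def mat_add(a, b):
    """Element-wise sum of two matrices stored as JSON text."""
    m, n = json.loads(a), json.loads(b)
    return json.dumps([[x + y for x, y in zip(r, s)] for r, s in zip(m, n)])

conn = sqlite3.connect(":memory:")
conn.create_function("mat_add", 2, mat_add)   # register as a SQL function

# Matrices live in the database as ordinary TEXT columns:
result = conn.execute(
    "SELECT mat_add('[[1,2],[3,4]]', '[[10,20],[30,40]]')"
).fetchone()[0]
print(result)   # the JSON text of the summed matrix
```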

But yes, to do it properly you should introduce a MATRIX type or something like that. Nice project for a maths postgraduate.

03:11 Reply: autovacuum in sqlite VS autovacuum in PG (artifact: 31d0bfbb73 user: slavin)

(I'm ignoring BLOBs and other weird things here to give a simple answer.)

Updating any part of a row can require that the row be rewritten to file. Depending on whether the row takes up more or less space, and on what lengths of free space are available, the row may or may not be re-written in place. Because SQLite keeps pending updates in a journal file, not in the main database file, the amount of time that both the old and new versions of a row are in the main database file is very short. Immediately after that, the space taken up by the old version of the data is marked as free, and may be reused by future updates or freed up by VACUUMing.

Updates can release space just as well as deletes. Rows can take up less table space, freeing up filespace. But DELETE FROM frees up space in indexes. An UPDATE doesn't do that.

Given the questions you ask I will mention that there's a trade-off here. You can write your database engine to keep the file size as small as possible, reclaiming the maximum of space at all times and releasing filespace as soon as possible. But all this processing and input/output can take up time, making operations slow, and can wear out SSD storage by making numerous unnecessary changes. For instance, in a journaling system updating a single row of data by changing values which appear in two indexes could require 12 writes ! That's a lot of work.

In some situations it's better to just mark filespace as 'unused, available for reuse' and worry about releasing filespace only when a programmer explicitly does a VACUUM.
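The 'marked as free, reclaimed by VACUUM' behaviour can be watched with PRAGMA freelist_count. A Python sketch; the table and row sizes are made up:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE t (payload TEXT)")
conn.executemany("INSERT INTO t VALUES (?)", [("x" * 1000,) for _ in range(1000)])
conn.commit()

conn.execute("DELETE FROM t")
conn.commit()
freed = conn.execute("PRAGMA freelist_count").fetchone()[0]
print(freed)   # > 0: pages marked free, but the file has not shrunk

conn.execute("VACUUM")
after = conn.execute("PRAGMA freelist_count").fetchone()[0]
print(after)   # 0: VACUUM rebuilt the file without the free pages
```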

16:53 Reply: Data loss after iOS device force restarts (artifact: 307beed31b user: slavin)

Also, you don't need both sqlite3_finalize() and sqlite3_reset(). But I suppose that might be an artefact of writing test code rather than a real program.

You have a BEGIN in there but no END or COMMIT. This means you're explicitly starting a transaction but not explicitly ending it. If your code executes BEGIN each time around the loop it should also have an END each time around the loop. Presumably between sqlite3_finalize() and sqlite3_close(). This is the transaction problem other people asked you about. Here is the equivalent of what you're doing:

simon@183 Desktop % sqlite3 test.db
SQLite version 3.36.0 2021-06-18 18:58:49
Enter ".help" for usage hints.
sqlite> BEGIN;
sqlite> BEGIN;
Error: cannot start a transaction within a transaction
sqlite> END;
sqlite> END;
Error: cannot commit - no transaction is active
sqlite> CREATE TABLE t (a INT);
sqlite> BEGIN;
sqlite> INSERT INTO t VALUES (77);
sqlite> SELECT * FROM t;
sqlite> .quit
simon@183 Desktop % sqlite3 test.db
SQLite version 3.36.0 2021-06-18 18:58:49
Enter ".help" for usage hints.
sqlite> .tables
sqlite> SELECT * FROM t;
sqlite> BEGIN;
sqlite> INSERT INTO t VALUES (88);
sqlite> SELECT * FROM t;
sqlite> END;
sqlite> SELECT * FROM t;
sqlite> .quit
simon@183 Desktop % sqlite3 test.db
SQLite version 3.36.0 2021-06-18 18:58:49
Enter ".help" for usage hints.
sqlite> .tables
sqlite> SELECT * FROM t;
sqlite> .quit
simon@183 Desktop % 
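The data-loss half of this is easy to reproduce from a language binding too: close a connection with a transaction still open and the pending insert is rolled back. A Python sketch; the table and value are made up:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "loop.db")

conn = sqlite3.connect(path, isolation_level=None)   # manage transactions by hand
conn.execute("CREATE TABLE t (a INT)")               # autocommitted
conn.execute("BEGIN")
conn.execute("INSERT INTO t VALUES (77)")
conn.close()   # no END/COMMIT: the open transaction is rolled back

conn = sqlite3.connect(path)
count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)   # 0 - the row from the never-committed transaction is gone
```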

Given the above problem, one thing you should not be seeing is a corrupt database. SQLite should be recovering a usable database whether or not your last transaction is in it.

09:41 Reply: Data loss after iOS device force restarts (artifact: 9de37c53bb user: slavin)

Thanks for your reply which may help people explain what's happening.

The use of power+home buttons is equivalent to the iPhone's OS crashing (which almost never happens) or your program crashing (which almost never happens because you're a good programmer). Under these circumstances your program will just stop running, leaving journal files intact if they have been synced to storage. It is the hardest condition for SQLite to recover from, but SQLite is designed to cope with it. Once the database is opened again, data should be recovered up to and including the last committed transaction. If you are losing data from a committed transaction something may be going wrong, so please pursue this thread with details.

In contrast, an iPhone is designed to notify programs of low battery in plenty of time for them to save uncommitted data and close database connections gracefully. This should not cause you a problem.

09:32 Reply: autovacuum in sqlite VS autovacuum in PG (artifact: 1be1a5d418 user: slavin)

Let me be the first to ask what PG is. I'm going to assume PostgreSQL. I don't know enough about PostgreSQL to discuss it.

SQLite autovacuum can shrink the file. Since you ask, here's the short simplified version.

SQLite database files are made up of pages: blocks of a fixed length. Each page is devoted to a specific table or index. It is possible that an entire page would be released (e.g. DELETE FROM deleting lots of rows of a table, which would also delete lots of entries of an index). It is also possible that part of a page would be released (e.g. DELETE FROM deleting just one row, which would also delete an entry in an index). To release the maximum amount of filespace, you need to worry about both kinds of unused space. To find out how autovacuum does this, read this:


However, auto_vacuum can introduce unpredictable delays in operations, and this may be inconvenient to your program. Instead you might prefer to do a VACUUM at a time when a delay doesn't matter. VACUUM reconstructs the entire database (ignoring freed space) from the data in tables. This means that not only can it free up filespace but it can also get rid of corrupted index data.
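The shrinking can be observed directly. A Python sketch; note that auto_vacuum must be set before the first table is created, and the row count and sizes are made up:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "av.db")
conn = sqlite3.connect(path)
conn.execute("PRAGMA auto_vacuum = FULL")   # set before any pages are written
conn.execute("CREATE TABLE t (payload TEXT)")
conn.executemany("INSERT INTO t VALUES (?)", [("y" * 1000,) for _ in range(1000)])
conn.commit()
big = os.path.getsize(path)

conn.execute("DELETE FROM t")
conn.commit()   # with auto_vacuum FULL, freed pages are released at commit
small = os.path.getsize(path)
print(big, small, small < big)
```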

Come back to us if you have specific questions.

17:24 Reply: Data loss after iOS device force restarts (artifact: de247d2471 user: slavin)

In addition to the other posts asking whether you're using a transaction …

Which language are you programming in ? Which SQLite library are you using (or are you using the C API directly) ? Does your application respond to the 'quit' notification by closing your SQLite connection, or does it just quit ?

When you say "forced restart" are you talking about telling the iPhone to turn off, or are you using the iPhone interface to force-quit just your application ? Are you trying to simulate what would happen if the iPhone lost power, or are you testing for some other condition ?

12:18 Reply: Failing builds on Apple M1 due to CPU arch and GNU Coreutils (artifact: 992dff132f user: slavin)

It seems your problem is with homebrew, not SQLite. The SQLite team's recommended way to build SQLite is here:


You can build SQLite projects with homebrew if you like, but if you have problems you're going to need someone who understands homebrew.

09:43 Reply: Reading SQLite database by using C Language (artifact: 43935bc7fb user: slavin)

It looks like you're trying to run the database file rather than the program you compiled. Your copy of Linux/Unix is unable to run that file, since it – correctly – isn't marked as executable.

What command are you typing at your bash prompt ?

09:53 Reply: My code's ATTACH will not create a new db but sqlite3's shell will? (artifact: 458af611da user: slavin)

Yeah, you're right. My post was a mess and I didn't understand your C code properly. I'm sorry about that.

I'm hoping that Ryan has put his finger on the problem.

03:49 Reply: My code's ATTACH will not create a new db but sqlite3's shell will? (artifact: 4582af9296 user: slavin)

You said you're checking it, but your code shows you checking the errors from sqlite3_open_v2(), not from the sqlite3_exec() on the ATTACH.

What does sqlite3_extended_errcode() return when the ATTACH fails ? Is it SQLITE_OK ?

02:54 Reply: My code's ATTACH will not create a new db but sqlite3's shell will? (artifact: ee9e157cd3 user: slavin)

Does your program have permission to create new files in that folder ? Try making it create a new text file in the same folder.

17:46 Reply: Question about sqlite backup api (artifact: c1408ac3c3 user: slavin)

As Larry wrote, the backup API represents the contents of the database. Not what appears in the database file on disk.

01:52 Reply: try to insert dollar sign and percentage chars (artifact: ba7d2b73c0 user: slavin)

This is a problem with your shell, which I assume is bash, not with SQLite. SQLite has no problem with any character except for the single apostrophe, which is the character used to enclose text. bash uses dollar signs to indicate variables.
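Once the string reaches SQLite through a bound parameter (or inside single quotes that the shell hasn't interfered with), the special characters are stored verbatim. A Python sketch with made-up values:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (s TEXT)")

# Dollar signs and percent signs are ordinary characters to SQLite:
conn.execute("INSERT INTO t VALUES (?)", ("$100 at 5% interest",))
print(conn.execute("SELECT s FROM t").fetchone()[0])   # $100 at 5% interest
```

From bash, the fix is usually to single-quote the whole SQL string so bash doesn't try to expand the dollar sign as a variable.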

00:35 Reply: How SQLite database can be survived from a power failure during extending the page count ? (artifact: 239fbf8f3a user: slavin)

SQLite does numerous things to make sure that it will lose data as seldom as possible, and yield a corrupted database even less often. Your suggested change would make sense in a simple setup (e.g. a text file) but SQLite keeps change data in a journal file until it is sure it has updated the database file. A journaling file system can help. But SQLite just by itself is unusually resistant to many forms of hardware failure.

You might like to read this page


and follow some links, then think through what would actually happen after a power failure. If you still have questions after that, please don't hesitate to reply to this thread.

04:40 Reply: Pass a directory variable to .read? (artifact: 34cc22aa43 user: slavin)

With regard to 'a better way' in your post, I would recommend you learn how environment variables work in whatever shell you're using. You may be able to do things like

export SQLDIR="data"
echo "Working in directory $SQLDIR ."
sqlite3 test/$SQLDIR/test_people.db < $SQLDIR/write_people_db.sql

This example would work only in bash. Other shells may use a different format for setting and referring to variables. Read up on whatever shell you're using.

03:49 Reply: Selecting certain sub string (artifact: d2bbccf9fe user: slavin)

There is no simple function, or simple combination of functions, in SQLite which can do what you want.

You could write such a function in your favourite language, and get SQLite to load it:


Note that your favourite language library or shim may have a version of sqlite3_create_function() that looks more like that language and would be easier for you to use.

But frankly I'd just have SQLite return that entire pathfilename string and process it in my software.
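For illustration, here is what such a function might look like through a language shim (Python's create_function being that language's spelling of sqlite3_create_function()). The helper stem() and the example path are entirely made up:

```python
import os
import sqlite3

def stem(pathname):
    """Hypothetical helper: the filename without directory or extension."""
    return os.path.splitext(os.path.basename(pathname))[0]

conn = sqlite3.connect(":memory:")
conn.create_function("stem", 1, stem)   # now usable inside SQL

row = conn.execute("SELECT stem('/data/2021/report-final.csv')").fetchone()[0]
print(row)   # report-final
```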

16:42 Reply: Distributed backups from a database of databases (artifact: 8673df4f53 user: slavin)

A closed SQLite database all fits in one file. So just write code to read that file and store it however you want to store it, probably as a sequence of octets. Be certain that the file is closed - there are no open connections to it.

An alternative way to do it is to have SQLite serialize the database:


This doesn't need the database to be closed, but it takes up a lot of memory.

By the way, instead of storing the SQLite databases in a MySQL database, had you considered storing the SQLite databases in a SQLite database ?

00:03 Reply: Rebind for NULL values (artifact: 7bc5e0cf5e user: slavin)

You ran into the specific meaning of NULL which was used when SQL was devised. It does not match the value 'null' as used in C. So please think of NULL as having a specific useful meaning. A bit like zero being a number, but not normally seen in the 'number of items' field.

So you can bind a parameter to null in the C sense. Or you can bind the same parameter to NULL in the SQL sense. And they should do two different things.
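A quick demonstration from Python, where the language's null (None) binds to SQL NULL; the table and values are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (v INT)")
conn.execute("INSERT INTO t VALUES (?)", (None,))   # binds SQL NULL
conn.execute("INSERT INTO t VALUES (?)", (0,))      # binds the number zero

# NULL is not equal to anything, not even to itself; the comparison is itself NULL:
print(conn.execute("SELECT NULL = NULL").fetchone()[0])   # None

# Use IS NULL to test for it:
print(conn.execute("SELECT COUNT(*) FROM t WHERE v IS NULL").fetchone()[0])   # 1
```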

21:58 Reply: \n breaks reading from SQLITE3 (artifact: c91fd41791 user: slavin)

Okay. So you're talking about the SQLite3 Command Line Shell, not sqlite3 itself.

You can change the characters that program uses using various dot commands. See the documentation at


13:33 Reply: \n breaks reading from SQLITE3 (artifact: 180b7c0cf7 user: slavin)

I'm confused about what you mean when you write "everything breaks". What is broken in what you just showed us ?

If you put a return character inside one of the text fields, you should find that it is stored as a return character and will be shown as such in the output. You do need to be very sure that some part of your input process converts the two characters '\n' into a return character. There is no part of SQLite which does this; it has to be done in your code.

It might help you understand what you're seeing if before the SELECT line you enter the command '.mode table'. I'm not sure how you'd do that in whatever Linux shell you're using, but it should work if you type it in an interactive session.
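The point about doing the conversion in your own code can be demonstrated like this (a Python sketch; the strings are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (s TEXT)")

raw = r"line1\nline2"   # backslash + 'n': two ordinary characters, not a newline
conn.execute("INSERT INTO t VALUES (?)", (raw,))
stored = conn.execute("SELECT s FROM t").fetchone()[0]
print("\n" in stored)   # False: SQLite stored exactly what it was given

# The conversion has to happen in YOUR code, for example:
converted = stored.replace("\\n", "\n")
print("\n" in converted)   # True
```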

00:28 Reply: \n breaks reading from SQLITE3 (artifact: f459347f15 user: slavin)

SQLite has no problem with the newline character inside text strings. It doesn't even notice that it's an unusual character.

You have one of two problems. Either the correct string is not being stored in the database, or the correct string is being stored, but your process of reading it back out is returning an incorrect string to your program. You can use the SQLite Command Line Shell to find out which of these is happening. The Command Line Shell shows the strings exactly as they are stored in the database.


How are you calling SQLite ? Are you using the C API ? Or are you using a library for your favourite programming language ? If so, which language and library are you using ?

17:14 Edit reply: Trying to "Open Database" Safari History.db file and get Error: "could not open safari database file reason: unable to open.. (artifact: c20230743f user: slavin)

Works fine for me when I use the Command Line Shell provided by the sqlite3 team.

Quit Safari. Copy the History.db file somewhere else. (You can restart Safari now if you want.) Open the copy you made with sqlite3:

simon@183 Desktop % sqlite3 History.db
SQLite version 3.36.0 2021-06-18 18:58:49
Enter ".help" for usage hints.
sqlite> .mode table
sqlite> SELECT * FROM history_items LIMIT 3;
|   id   |                  url                  | domain_expansion | visit_count | daily_visit_counts | weekly_visit_counts | autocomplete_triggers | should_recompute_derived_visit_counts | visit_count_score | status_code |
| 201295 | https://slashdot.org/                 |                  | 7197        | ?                  | a???                |                       | 1                                     | 5100              | 0           |
| 201315 | https://news.ycombinator.com/         | news.ycombinator | 4837        | ?                  |                     |                       | 1                                     | 5100              | 0           |
| 201321 | https://www.reddit.com/               | reddit           | 11152       | }                  | `	             |                       | 0                                     | 5920              | 0           |
sqlite> .quit