SQLite Forum


50 forum posts by user dvdraymond occurring on or before 2021-02-25 12:37:06.

12:37 Reply: sqlite3_prepare_v2 does not seem to work. (artifact: 0d8a78408f user: dvdraymond)

Well... the preview doesn't look very readable; what can I do?

At the top of the text box there's a drop down for what style to use. You can always change it to Plain Text to get rid of any fancy behavior.

If you click on the words "Markup style:" there, and then on "Markdown Formatting Rules" you get a guide of all the Markdown stuff. There's a section on "Literal/Verbatim Text - Code Blocks"

The most basic way to mark a section as unstyled is to put backticks before and after it. Triple backticks seem best when surrounding a big block.

#include <sqlite3.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, const char * argv[]) {
  sqlite3 *db;
  if (sqlite3_open("the_insert.db", &db)) {
    printf("Could not open the_insert.db\n");
    return 1;
  }
  printf("Database %s is open\n", "the_insert.db");

  // Prepare a statement for multiple use:
  // =====================================
  const char *query = "INSERT INTO someTable (second, third) VALUES (?,?)";
  int sqlSize = (int)strlen(query) + 1;
  sqlite3_stmt *stmt;
  int err = sqlite3_prepare_v2(db, query, sqlSize, &stmt, NULL);
  printf("err: %d\n", err);
  if (err != SQLITE_OK) {
    printf("Error executing prepare statement: %s\n", sqlite3_errmsg(db));
    sqlite3_close(db);
    return 1;
  }
  printf("prepared\n"); // I NEVER GET THIS FAR!

  // The prepared statement executed multiple times
  // ==============================================
  for (...) {
      sqlite3_bind_int (stmt, 1, ...);
      sqlite3_bind_text(stmt, 2, ...);

      if (sqlite3_step(stmt) != SQLITE_DONE) {
          printf("execution failed\n");
          sqlite3_finalize(stmt);         // Only when things go wrong
          sqlite3_close(db);
          return 1;
      }
      sqlite3_reset(stmt);                // Re-arm the statement for the next row
  }
  sqlite3_finalize(stmt);  // Get rid of the memory allocated for stmt; mandatory!
  sqlite3_close(db);  return 0;
}

14:10 Reply: Update Query (artifact: 946aa3b2aa user: dvdraymond)

Not sure what you mean by schema...

We're saying show us the layout of the database. At the most straightforward level we're looking for the "CREATE <blah>" statements, so we can see the tables, their fields' names and declared types, any indexes, etc etc.

If you're using the CLI you can just run .schema

Or you can query it out of the sqlite_master table.
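For instance, from Python's sqlite3 module that query is one line (the table and index names here are made up for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("CREATE INDEX idx_t_name ON t (name)")

# sqlite_master holds one row per table, index, view, and trigger,
# with the original CREATE statement kept in the "sql" column.
for row in con.execute("SELECT type, name, sql FROM sqlite_master"):
    print(row)
```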

13:59 Reply: Stuck with 3.22 and no docs ... (artifact: b3afdd6564 user: dvdraymond)

Other less official alternative: The URLs on the Download Page are pretty consistent, and stick around. You can usually look up what year a release was in the Release History Page (in this case 2018 for 3.22.0), alter the current URL to match, and get what you're looking for.

Right now in the Downloads page the URL for the docs is: https://www.sqlite.org/2021/sqlite-doc-3340100.zip

So change the year and version and we've got: https://www.sqlite.org/2018/sqlite-doc-3220000.zip

...which downloads ok.

13:57 Reply: SQLite database in space? (artifact: f52bd8b6d6 user: dvdraymond)

In the first minute of the opening introduction for this YouTube video of a CMU lecture by Dr Hipp, it's mentioned that SQLite is used in space, and it includes Dr Hipp saying he didn't know it was in space at the time :)

D. Richard Hipp - SQLite [The Databaseology Lectures - CMU Fall 2015]

14:41 Reply: statement subtransaction journal (artifact: aac368a9a5 user: dvdraymond)

Did you read the description here?

Temporary Files Used By SQLite

16:42 Reply: Slow select operation after many times of insert different Table (artifact: 4e9c17a3c5 user: dvdraymond)

Python's default sqlite3 module is the part that does the implicit transaction stuff I'm talking about, yes.

I'm not referring to the SQLite C library/API/etc itself.

16:24 Reply: Slow select operation after many times of insert different Table (artifact: 04cc85624f user: dvdraymond)

Not sure. I did a couple test runs on my own system and didn't get anywhere near as big of a difference. So I definitely might be wrong on what's causing it.

15:45 Reply: Slow select operation after many times of insert different Table (artifact: 58887d61cc user: dvdraymond)

No, with the way he opened the connection it's doing implicit transactions. So once you start doing selects, inserts, etc you have an open transaction until you explicitly commit or rollback. So his inserts are fine. Also, the timing was only done on the select part of it, and the insert time wasn't included at all.
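A minimal sketch of that implicit-transaction behavior with Python's default sqlite3 settings (the legacy transaction control that is still the default):

```python
import sqlite3

con = sqlite3.connect(":memory:")  # default isolation_level -> implicit transactions
con.execute("CREATE TABLE t (x)")
con.commit()

assert not con.in_transaction
con.execute("INSERT INTO t VALUES (1)")  # a BEGIN is issued implicitly here
assert con.in_transaction                # ...and stays open until commit/rollback
con.commit()
assert not con.in_transaction
```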

13:18 Reply: Slow select operation after many times of insert different Table (artifact: 81539e94a4 user: dvdraymond)

My guess as to what's going on:

For db1 you're shuffling all the pages of the three tables together as you build the file. As you evenly deal out the inserts between them, as the db file grows it's basically going to be laid out as page for Test1, page for Test2, page for Test3, page for Test1, page for Test2, page for Test3, etc... So when you read everything from Test1, you need to read every third page over the entire length of the file.

When you do the inserts into db2 and db3, you're keeping each table together. So as the db file grows you have a big swath of pages all for Test1, then a big swath of pages all for Test2, then a big swath of pages all for Test3. So when you read all of Test1, you're basically reading all continuous pages in the first third of the file.

That's going to make a difference, especially if you have a spinning hard disk for the file. Whether that's the difference you're seeing I'll let the more knowledgeable folks reply.

20:14 Reply: sqlite bindings error (artifact: ec4b16cad6 user: dvdraymond)
See Keith's response.

But also, are you sure you copied and pasted everything correctly? I swear this should complain about only having 1 supplied.

.executemany(sql, (values,))

should basically run .execute(sql, values) once, since (values,) is a 1 element tuple.

And "values = (btc_price_rows,)" means values itself is also a 1 element tuple.
So shouldn't it complain about only having 1 binding supplied?

I would bet 19931624 is the size of r.text, and you did either

.executemany(sql, (btc_price_rows,))

or you did values = btc_price_rows

or something similar.
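For illustration, a small Python sketch (table and variable names made up) of the correct nesting versus the "only 1 supplied" complaint:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE prices (ts TEXT, price REAL)")

rows = [("2021-01-01", 29000.0), ("2021-01-02", 32000.0)]

# Correct: executemany runs the statement once per inner sequence.
con.executemany("INSERT INTO prices VALUES (?, ?)", rows)

# Wrong nesting: (rows,) is a 1-element sequence of parameters for a
# statement that has 2 placeholders.
try:
    con.execute("INSERT INTO prices VALUES (?, ?)", (rows,))
except sqlite3.ProgrammingError as e:
    print(e)  # Incorrect number of bindings supplied...
```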
14:04 Reply: SAVEPOINTS (artifact: a697279164 user: dvdraymond)

Maybe because it's early in the morning, but when I read the


my brain kind of froze and I had to look up CASE again to get what you were doing. I'm curious as to why you would write it that way rather than



It seems non-intuitive and works only in SQLite because of TRUE becoming 1, and wouldn't work in any DBMS which has a dedicated boolean type.

14:46 Reply: Recover Database (artifact: 0bbf6e94a4 user: dvdraymond)

You can see the method for the journal file in the Atomic Commit In SQLite page.

The short version is that the -journal file is a backup of the original file contents in case something goes wrong during the transaction, or you decide to rollback.

13:48 Reply: pending lock macro (artifact: 4b111a6475 user: dvdraymond)

I believe it's still the same byte, and that when you say to the OS "lock byte 1 billion" and the file is less than that size, then the OS just shrugs and says "ok. If you ever get there it's locked." It does not expand the file out on disk or in memory to be 1 billion bytes just so it can lock byte 1 billion.
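A POSIX-only sketch of that idea, using Python's fcntl on a throwaway file (the 1 GiB offset is arbitrary, just chosen to be far past end-of-file):

```python
import fcntl
import os
import tempfile

with tempfile.NamedTemporaryFile() as f:
    # Lock 1 byte at offset 0x40000000 in an empty file. The OS merely
    # records the lock range; it does not grow the file to reach it.
    fcntl.lockf(f, fcntl.LOCK_EX, 1, 0x40000000)
    print(os.path.getsize(f.name))  # still 0
```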

13:19 Reply: Query data between dates (artifact: 28b5256dcf user: dvdraymond)

From the Quirks page

3.2. No Separate DATETIME Datatype

SQLite has no DATETIME datatype. Instead, dates and times can be stored in any of these ways:

  • As a TEXT string in the ISO-8601 format. Example: '2018-04-02 12:13:46'.
  • As an INTEGER number of seconds since 1970 (also known as "unix time").
  • As a REAL value that is the fractional Julian day number.

The built-in date and time functions of SQLite understand date/times in all of the formats above, and can freely change between them. Which format you use, is entirely up to your application.

Also see Date And Time Functions
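The conversions between those three formats can be seen directly with the built-ins, e.g. from Python:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# The same instant in the three storage formats, converted by the built-ins:
text_form, unix_form, julian_form = con.execute(
    "SELECT datetime(1522671226, 'unixepoch'),"      # INTEGER -> TEXT
    "       strftime('%s', '2018-04-02 12:13:46'),"  # TEXT -> unix seconds
    "       julianday('2018-04-02 12:13:46')"        # TEXT -> Julian day REAL
).fetchone()
print(text_form, unix_form, julian_form)
```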

15:39 Reply: Primary Key v Unique Index (artifact: 36cdde8591 user: dvdraymond)

Consider that "integer primary key" gives a name to something that's already there for the table to begin with. So including the integer primary key definition at the start isn't going to slow it down in the same way as adding the other indexes at the start vs at the end.

(Though, if I recall correctly, with an "integer primary key", if you insert your IDs in random order it'll be a little slower than without the "integer primary key", as without it the rowids can just be assigned in insertion order, meaning less Btree manipulation as you go. But if you insert your "integer primary key" data in primary key order then it'll be the same, and save you the cost of building the extra index later)

18:12 Reply: Use of RETURNING clause as a subquery (artifact: dbd3d46a23 user: dvdraymond)

Just as a note this query will also give an error in PostgreSQL as well:

insert into t0log (whenCreated,what) with stuffer as (INSERT INTO t0(c) VALUES(random()) RETURNING b,c) select b,c;

testing=> insert into t0log (when_created, what) with stuffer as (insert into t0 (c) values (random()) returning b, c) select b, c;
ERROR:  WITH clause containing a data-modifying statement must be at the top level
LINE 1: insert into t0log (when_created, what) with stuffer as (inse...
                                                                ^
Time: 13.069 ms

And needs to be re-written with the WITH part first.

testing=> with stuffer as (insert into t0 (c) values (random()) returning b, c) insert into t0log (when_created, what) select b, c from stuffer;
INSERT 0 1
Time: 0.934 ms

14:04 Reply: COLLATE - Multiple? (artifact: 1a6cfe8e57 user: dvdraymond)

Basically you define your own collation that does both. I believe collations are basically just a function you give the two strings, and it returns whether A is less than B, B is less than A, or they're equal. They're not transformations applied to the strings before they're compared "normally".

But since rtrim is a function I believe an easier way to do this is

SELECT * from myTable where rtrim(name) = 'ajay' collate nocase;
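In Python's sqlite3 you can register such a two-in-one collation with create_collation. A rough sketch (note that Python's rstrip() trims all trailing whitespace, not just spaces like SQLite's RTRIM collation):

```python
import sqlite3

# A collation is just a comparison function returning -1, 0, or 1;
# this one ignores case AND trailing whitespace.
def nocase_rtrim(a, b):
    a, b = a.rstrip().lower(), b.rstrip().lower()
    return (a > b) - (a < b)

con = sqlite3.connect(":memory:")
con.create_collation("nocase_rtrim", nocase_rtrim)
con.execute("CREATE TABLE myTable (name TEXT)")
con.execute("INSERT INTO myTable VALUES ('Ajay   '), ('Bob')")
rows = con.execute(
    "SELECT name FROM myTable WHERE name = 'ajay' COLLATE nocase_rtrim"
).fetchall()
print(rows)  # [('Ajay   ',)]
```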

16:33 Reply: 2 queries asking the same, give different answers (artifact: 011a415f76 user: dvdraymond)

What is the query plan for each of the three queries? Do they all use the same index?

Alternatively, by chance, is there a mix of numeric and textual data in fld0? And what is fld0's declared type? Since text is "greater than" numeric, do you have the case where the max for the whole table is a string, whereas the max for only a limited subset is a number?
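A quick illustration of that text-sorts-after-numbers behavior (column and table names made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (fld0)")  # no declared type: mixed storage is kept as-is
con.execute("INSERT INTO t VALUES (1), (2), ('10')")  # '10' is stored as TEXT

# The text '10' sorts after every number, so it wins max() for the whole table...
print(con.execute("SELECT max(fld0) FROM t").fetchone())                  # ('10',)
# ...but a numeric-only subset gives a numeric max.
print(con.execute("SELECT max(fld0) FROM t WHERE fld0 < 3").fetchone())   # (2,)
```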

19:36 Reply: Can't migrate auto-increment bigint columns (artifact: 7db931db6f user: dvdraymond)

There is nothing special about 'bigint'. It gets INTEGER affinity simply because it contains 'int'.

The problem is when it's combined with AUTOINCREMENT. That requires the declared type to be exactly "integer"; "averybigint" or "bigint" won't cut it.

16:00 Reply: Possible Bug - View behaves differently to Table in SELECT (artifact: e6d5dec579 user: dvdraymond)

Well, searching in the amalgamation that error message only shows up in one place, in the selectExpander function. So my complete amateur guess is that it looks like while the definition of the view is fine, when you expand all the view definitions out, the "model" table gets referenced more than 65,535 times, which would probably overflow a specific counter somewhere. Since they're explicitly checking for it I'd consider it a "designed-in constraint", but again, this is just an amateur guess.

** This routine is a Walker callback for "expanding" a SELECT statement.
** "Expanding" means to do the following:
**    (1)  Make sure VDBE cursor numbers have been assigned to every
**         element of the FROM clause.
**    (2)  Fill in the pTabList->a[].pTab fields in the SrcList that
**         defines FROM clause.  When views appear in the FROM clause,
**         fill pTabList->a[].pSelect with a copy of the SELECT statement
**         that implements the view.  A copy is made of the view's SELECT
**         statement so that we can freely modify or delete that statement
**         without worrying about messing up the persistent representation
**         of the view.
**    (3)  Add terms to the WHERE clause to accommodate the NATURAL keyword
**         on joins and the ON and USING clause of joins.
**    (4)  Scan the list of columns in the result set (pEList) looking
**         for instances of the "*" operator or the TABLE.* operator.
**         If found, expand each "*" to be every column in every table
**         and TABLE.* to be every column in TABLE.
static int selectExpander(Walker *pWalker, Select *p){
  ...
    Table *pTab;
    ...
      /* An ordinary table or view name in the FROM clause */
      assert( pFrom->pTab==0 );
      pFrom->pTab = pTab = sqlite3LocateTableItem(pParse, 0, pFrom);
      if( pTab==0 ) return WRC_Abort;
      if( pTab->nTabRef>=0xffff ){
        sqlite3ErrorMsg(pParse, "too many references to \"%s\": max 65535",
            ...);
        pFrom->pTab = 0;
        return WRC_Abort;
      }
  ...
}

** The schema for each SQL table and view is represented in memory
** by an instance of the following structure.
struct Table {
  ...
  u32 nTabRef;         /* Number of pointers to this Table */
  ...
};

14:36 Reply: How to cancatenate all lines in table to a single string? (artifact: 762fcdc71a user: dvdraymond)

Just a note that I do not believe that you can explicitly order group_concat() in SQLite. So if you need it to be in a specific order: 'Line1 Line2 Line3' then you'll have to use something else as you might get 'Line3 Line2 Line1'. If arbitrary ordering is fine, then yes, use group_concat()
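One common workaround is to feed group_concat() from an ordered subquery. In practice this usually comes out in that order, though the documentation still calls the concatenation order arbitrary, so treat it as a convenience, not a guarantee:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE lines (pos INTEGER PRIMARY KEY, txt TEXT)")
con.executemany("INSERT INTO lines VALUES (?, ?)",
                [(1, 'Line1'), (2, 'Line2'), (3, 'Line3')])

# Ordering the subquery usually controls the concatenation order.
result, = con.execute(
    "SELECT group_concat(txt, ' ') FROM (SELECT txt FROM lines ORDER BY pos)"
).fetchone()
print(result)  # Line1 Line2 Line3
```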

13:58 Reply: Difference on NULL when WITHOUT ROWID (artifact: 92870d491a user: dvdraymond)

Without rowid tables must have non-null primary key fields.


Section 2.4:

NOT NULL is enforced on every column of the PRIMARY KEY in a WITHOUT ROWID table. This is in accordance with the SQL standard. Each column of a PRIMARY KEY is supposed to be individually NOT NULL. However, NOT NULL was not enforced on PRIMARY KEY columns by early versions of SQLite due to a bug. By the time that this bug was discovered, so many SQLite databases were already in circulation that the decision was made not to fix this bug for fear of breaking compatibility. So, ordinary rowid tables in SQLite violate the SQL standard and allow NULL values in PRIMARY KEY fields. But WITHOUT ROWID tables do follow the standard and will throw an error on any attempt to insert a NULL into a PRIMARY KEY column.
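A quick demonstration of the difference, via Python's sqlite3:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE rowid_t (pk TEXT PRIMARY KEY, v)")
con.execute("CREATE TABLE worowid_t (pk TEXT PRIMARY KEY, v) WITHOUT ROWID")

# Ordinary rowid table: the historical bug means the NULL pk is accepted.
con.execute("INSERT INTO rowid_t VALUES (NULL, 1)")

# WITHOUT ROWID table: the standard-conforming NOT NULL check fires.
try:
    con.execute("INSERT INTO worowid_t VALUES (NULL, 1)")
except sqlite3.IntegrityError as e:
    print(e)  # NOT NULL constraint failed: worowid_t.pk
```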

18:55 Reply: Row count (artifact: 728c66909a user: dvdraymond)
Using window functions this might be something like this

select idx, data, row_number() over (partition by data order by idx) || ' / ' || count(*) over (partition by data) as count from tmp order by idx;

I'm assuming that if the data bits are non-contiguous that it should be overall ranks and counts, and not just within that streak.

That is, I assume you want this:
| idx | data | count |
| 1   | a    | 1 / 4 |
| 2   | a    | 2 / 4 |
| 3   | a    | 3 / 4 |
| 4   | b    | 1 / 6 |
| 5   | b    | 2 / 6 |
| 6   | b    | 3 / 6 |
| 7   | b    | 4 / 6 |
| 8   | b    | 5 / 6 |
| 9   | c    | 1 / 3 |
| 10  | c    | 2 / 3 |
| 11  | a    | 4 / 4 |
| 12  | b    | 6 / 6 |
| 13  | c    | 3 / 3 |

... rather than ...

| idx | data | count |
| 1   | a    | 1 / 3 |
| 2   | a    | 2 / 3 |
| 3   | a    | 3 / 3 |
| 4   | b    | 1 / 5 |
| 5   | b    | 2 / 5 |
| 6   | b    | 3 / 5 |
| 7   | b    | 4 / 5 |
| 8   | b    | 5 / 5 |
| 9   | c    | 1 / 2 |
| 10  | c    | 2 / 2 |
| 11  | a    | 1 / 1 |
| 12  | b    | 1 / 1 |
| 13  | c    | 1 / 1 |
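The query above can be tried directly; here's a Python sketch that reproduces the first table (window functions need SQLite 3.25+):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tmp (idx INTEGER PRIMARY KEY, data TEXT)")
# idx 1-13 with the data pattern from the example above
con.executemany("INSERT INTO tmp VALUES (?, ?)",
                list(enumerate('aaabbbbbccabc', start=1)))

rows = con.execute("""
    SELECT idx, data,
           row_number() OVER (PARTITION BY data ORDER BY idx)
           || ' / ' ||
           count(*) OVER (PARTITION BY data) AS count
    FROM tmp ORDER BY idx""").fetchall()
for r in rows:
    print(r)   # e.g. (1, 'a', '1 / 4') ... (11, 'a', '4 / 4')
```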
18:22 Reply: sqlite opens a database file in R/W mode, even if the file is read-only (artifact: 42cf8e985b user: dvdraymond)
I'm not seeing this behavior on my own copies of the Windows CLI tools. I get an error if I use a backslash instead of a forward slash, but otherwise opens just fine in old versions.

A    R               C:\Data\Programs\SQLite\db\test2.db


SQLite version 3.33.0 2020-08-14 13:23:32
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite> .open .\db\test2.db
Error: unable to open database ".db     est2.db": unable to open database file
sqlite> .open ./db/test2.db
sqlite> .tables
sqlite> .databases
main: C:\Data\Programs\SQLite\db\test2.db
sqlite> .exit

SQLite version 3.34.0 2020-12-01 16:14:00
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite> .open ./db/test2.db
sqlite> .databases
main: C:\Data\Programs\SQLite\db\test2.db r/o
13:55 Edit reply: bad plan for a query with an OR condition (artifact: 8ba73575d5 user: dvdraymond)
But you made x different for each record. Make it the same and see what happens:

psql (12.0)
Type "help" for help.

testing=> create temp table t (x int, c1 boolean, c2 boolean);
Time: 115.133 ms
testing=> insert into t values (1, false, false), (1, false, true), (1, true, false), (1, true, true);
Time: 0.954 ms
testing=> select x from t where c1 or c2 order by 1;
 x
---
 1
 1
 1
(3 rows)

Time: 59.461 ms
testing=> select x from t where c1 union all select x from t where c2 order by 1;
 x
---
 1
 1
 1
 1
(4 rows)

Time: 0.443 ms
testing=> select x from t where c1 union select x from t where c2 order by 1;
 x
---
 1
(1 row)

Time: 11.495 ms
13:50 Reply: bad plan for a query with an OR condition (artifact: f463bc95cf user: dvdraymond)
But you made t different for each record. Make it the same and see what happens:

psql (12.0)
Type "help" for help.

testing=> create temp table t (x int, c1 boolean, c2 boolean);
Time: 115.133 ms
testing=> insert into t values (1, false, false), (1, false, true), (1, true, false), (1, true, true);
Time: 0.954 ms
testing=> select x from t where c1 or c2 order by 1;
 x
---
 1
 1
 1
(3 rows)

Time: 59.461 ms
testing=> select x from t where c1 union all select x from t where c2 order by 1;
 x
---
 1
 1
 1
 1
(4 rows)

Time: 0.443 ms
testing=> select x from t where c1 union select x from t where c2 order by 1;
 x
---
 1
(1 row)

Time: 11.495 ms
15:46 Reply: bad plan for a query with an OR condition (artifact: d05e658556 user: dvdraymond)
> To be really equivalent I should have used UNION.

Nope. Even that is not equivalent. See below.
"where a or b" returns 3 rows
"where a union all where b" returns 4
"where a union where b" returns 2

SQLite version 3.34.0 2020-12-01 16:14:00
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.

sqlite> create table foo (a, b);

sqlite> insert into foo values (1, 1), (1, 0), (0, 1);

sqlite> .eqp on

sqlite> select a from foo where a or b;
QUERY PLAN
`--SCAN TABLE foo
1
1
0

sqlite> select a from foo where a union all select a from foo where b;
QUERY PLAN
`--COMPOUND QUERY
   |--LEFT-MOST SUBQUERY
   |  `--SCAN TABLE foo
   `--UNION ALL
      `--SCAN TABLE foo
1
1
1
0

sqlite> select a from foo where a union select a from foo where b;
QUERY PLAN
`--COMPOUND QUERY
   |--LEFT-MOST SUBQUERY
   |  `--SCAN TABLE foo
   `--UNION USING TEMP B-TREE
      `--SCAN TABLE foo
0
1
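The three row counts are easy to reproduce, e.g. from Python:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE foo (a, b)")
con.executemany("INSERT INTO foo VALUES (?, ?)", [(1, 1), (1, 0), (0, 1)])

# OR matches each row once; UNION ALL keeps overlap; UNION deduplicates.
for sql in ("SELECT a FROM foo WHERE a OR b",
            "SELECT a FROM foo WHERE a UNION ALL SELECT a FROM foo WHERE b",
            "SELECT a FROM foo WHERE a UNION SELECT a FROM foo WHERE b"):
    print(len(con.execute(sql).fetchall()))  # 3, then 4, then 2
```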

18:33 Reply: ValueError: parameters are of unsupported type (artifact: c6722a8d57 user: dvdraymond)
Nonetheless for debugging purposes I'd do something like

insertValues = {
    'f1': Clnt_Num,
    'f2': client_name.get(),
    'f3': address_name.get(),
    'f4': city_name.get(),
    'f5': state_name.get(),
    'f6': zipcode_name.get(),
    'f7': email_name.get(),
    'f8': IP_1_name.get(),
    'f9': IP_2_name.get(),
    'f10': Keys_ordered_name.get(),
    'f11': Order_number_name.get(),
    'f12': Date_ordered_name.get(),
    'f13': Agent_code_name.get(),
    'f14': Client_logo_name.get(),
    'f15': Billing_mth_name.get()
}
for key, value in insertValues.items():
    print(key, type(value), value)

cur.execute("INSERT INTO NiCE_clients VALUES (:f1, :f2, :f3, :f4, :f5, :f6, :f7, :f8, :f9, :f10, :f11, :f12, :f13, :f14, :f15)", insertValues)

And then paste the results, along with the full exception text.
18:00 Reply: ValueError: parameters are of unsupported type (artifact: ab81866069 user: dvdraymond)

So what makes you sure it's Clnt_Num that's the problem child? Have you checked the types of all the other 14 parameters to make sure they're something ok?

Also, you're gonna hit a problem at the end there where you have M's in comm, whereas you have N's in conn above. Also, what is "com" that you are closing?

17:23 Reply: Transactions involving multiple databases (artifact: 83b6da7318 user: dvdraymond)

I recommend going through the Atomic Commit In SQLite page. Section 5 is the Multi-file commit section.

17:59 Reply: Sqlite3 Trigger (artifact: c8808aded5 user: dvdraymond)

Is it an order of operations thing where it's parsing it as...

NEW.theorie AND (NOT (...)) >= 180


Try making it < 180, or adding some parentheses.

The unary prefix operators are on their own line in Operators, so I'm not really sure where they stand in the list.

17:37 Reply: Support of unicode operators like ≠? (artifact: 381243556e user: dvdraymond)

Do any of PostgreSQL, MySQL, SQL Server, or Oracle support this?

For Postgres: 4.1.3. Operators

4.1.3. Operators

An operator name is a sequence of up to NAMEDATALEN-1 (63 by default) characters from the following list:

+ - * / < > = ~ ! @ # % ^ & | ` ?

There are a few restrictions on operator names, however:

    -- and /* cannot appear anywhere in an operator name, since they will be taken as the start of a comment.

    A multiple-character operator name cannot end in + or -, unless the name also contains at least one of these characters:

    ~ ! @ # % ^ & | ` ?

    For example, @- is an allowed operator name, but *- is not. This restriction allows PostgreSQL to parse SQL-compliant queries without requiring spaces between tokens.

When working with non-SQL-standard operator names, you will usually need to separate adjacent operators with spaces to avoid ambiguity. For example, if you have defined a left unary operator named @, you cannot write X*@Y; you must write X* @Y to ensure that PostgreSQL reads it as two operator names not one.
14:17 Reply: trying to select one column from one table (artifact: 2297d5e766 user: dvdraymond)

As to why it's not giving you an error, it's because double quoted string literals are accepted

Despite using double quotes, since there is no column named "Interior" it says "well, I guess you meant that as a string literal", and returns it as a string literal.
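A quick demonstration (assuming the build has double-quoted string literals enabled, which is the default):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE rooms (style TEXT)")
con.execute("INSERT INTO rooms VALUES ('Modern')")

# "Interior" names no column, so SQLite quietly treats it as the
# string literal 'Interior' instead of raising an error.
print(con.execute('SELECT "Interior" FROM rooms').fetchall())  # [('Interior',)]
```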

11:54 Reply: I can't get the foreign_keys pragma to "stick" (artifact: 42ce8ad340 user: dvdraymond)

There's a compile time option to set the default to on. But as Ryan said, every connection is its own thing, and you can't force it upon someone else's connection.
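The per-connection nature is easy to see with two connections to the same database (here a shared-cache in-memory database in Python; the name "fkdemo" is arbitrary):

```python
import sqlite3

db = "file:fkdemo?mode=memory&cache=shared"
con1 = sqlite3.connect(db, uri=True)
con2 = sqlite3.connect(db, uri=True)

# Turning the pragma on in one connection does not affect the other.
con1.execute("PRAGMA foreign_keys = ON")
print(con1.execute("PRAGMA foreign_keys").fetchone())  # (1,)
print(con2.execute("PRAGMA foreign_keys").fetchone())  # (0,)
```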

12:51 Reply: A suboptimal solution - not using the index. (artifact: c94c81aadc user: dvdraymond)
Remember that casting can do weird things, so whatever you try, make sure it's doing what you want.

sqlite> create table t1 (a text unique);

sqlite> create table t2 (b integer);

sqlite> insert into t1 values ('1'), ('1.0'), ('1xyz');

sqlite> insert into t2 values (1);

sqlite> select * from t2 left outer join t1 on a = b;

sqlite> select * from t2 left outer join t1 on a = cast(b as text);
`--SEARCH TABLE t1 USING COVERING INDEX sqlite_autoindex_t1_1 (a=?)

sqlite> select * from t2 left outer join t1 on cast(a as int) = b;
15:40 Reply: Not Equal Operator (artifact: 341525eab9 user: dvdraymond)

You can also use IS, and IS NOT.


The IS and IS NOT operators work like = and != except when one or both of the operands are NULL. In this case, if both operands are NULL, then the IS operator evaluates to 1 (true) and the IS NOT operator evaluates to 0 (false). If one operand is NULL and the other is not, then the IS operator evaluates to 0 (false) and the IS NOT operator is 1 (true). It is not possible for an IS or IS NOT expression to evaluate to NULL. Operators IS and IS NOT have the same precedence as =.
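For example:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# = with NULL yields NULL (None); IS / IS NOT always yield 0 or 1.
q = "SELECT NULL = NULL, NULL IS NULL, 1 IS NULL, 1 IS NOT NULL"
print(con.execute(q).fetchone())  # (None, 1, 0, 1)
```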

19:18 Reply: How can I fill a NULL with previous Data ? (artifact: 547509170b user: dvdraymond)
You are of course going to need an ordering. Whether it's a datetime, id, one column, more, etc. Otherwise this doesn't make sense. For the sake of my example here I'm going to use a single column unique value ordering. We can adjust the answer when you let us know how you have it ordered.

This might not be the best, but it seems to work. Of course, you could always just do this in your controlling application rather than figuring out how to get the query engine to do it.

create table weather_table (ordering_column integer primary key, IP_24_Temperature real, IP_24_Humidity real);

insert into weather_table values (1, 16.5, null), (2, null, null), (3, null, null), (4, 16.8, 58), (5, null, null), (6, null, null), (7, null, null), (8, null, 60), (9, null, null), (10, 16.5, 61);

with recursive foo (ordering_column, IP_24_Temperature, IP_24_Humidity) as
(
select ordering_column,
coalesce(IP_24_Temperature, 0), --<value you want if the first temp is null>
coalesce(IP_24_Humidity, 0) --<value you want if the first humidity is null>
from weather_table
where ordering_column = (select min(ordering_column) from weather_table)

union all

select weather_table.ordering_column,
coalesce(weather_table.IP_24_Temperature, foo.IP_24_Temperature),
coalesce(weather_table.IP_24_Humidity, foo.IP_24_Humidity)
from foo inner join weather_table
on weather_table.ordering_column = (
    select ordering_column
    from weather_table
    where ordering_column > foo.ordering_column
    order by weather_table.ordering_column
    limit 1
)
)
select * from foo
order by ordering_column;
18:16 Reply: ATTACH DATABASE and DETACH DATABASE (artifact: aaebad304e user: dvdraymond)

But I see some SQLite tutorials attach the same database file multiple times, what does it mean then?

I can't think of a single reason why someone would want to do that. Do you have an example tutorial page where they're showing that?

In practical purposes it basically just means you have more than 1 name for the same thing.

15:04 Reply: Password protection to database (artifact: ba32d15862 user: dvdraymond)

How are you protecting the database? The base SQLite engine doesn't have any password/encryption options. You have to either use the paid SEE option for encryption, or some third party solution.

16:14 Reply: Column ordering and space saving (artifact: 79ac94dffa user: dvdraymond)

You can see the file format here

For a record of 5 columns, you basically have 5 numbers that give the value types, and then the 5 values. No padding, spacing, etc. And nothing that explicitly says which value is which column: you get 5 values in order with no info about what is what. SQLite just knows from the table definition in sqlite_master what the columns are and their names. For some values, like NULL, there isn't even any data in the value section: the type for the column specifies NULL, and then there is no payload. So it would be kind of hard to make it more compact.

14:47 Reply: Detecting mismatched WAL files (artifact: 4e918767e2 user: dvdraymond)

That is section 1.4 in How To Corrupt An SQLite Database File and is indeed the expected behavior.

You can see what's in the header for the WAL file here. "What database did it come from" is stored as the name of the file itself. I'm not quite sure what else you could put in there as a validation. Maybe the file change counter? Or does that not get used in WAL mode?

13:51 Reply: In-memory database from sqlite file (artifact: 6df5220c46 user: dvdraymond)

This is an interesting point. Does this mean that by doing this the risk is limited to losing some records (or not being able to read anything at all)? Or is this completely undefined behaviour (so we could read garbage data that is not present in the DB)?

I would assume so yes. Let's see if we can construct a problem.

We've got a 1 table database. table1 is made of a single leaf page on page 2. sqlite_master is on page 1 with a record saying table1 starts on page 2.

You're now inserting into table1, and it's grown enough that it needs a new page to hold data. Since it's now more than 1 page long, we're gonna need a new trunk page as well as another leaf page. Let's put the trunk page on page 3, and the new leaf page on page 4.

So we're at least gonna have to update page 1 with the new root page of table1, which is now page 3. We're gonna have to update page 3 with the pointers to pages 2 and 4. And we're gonna have to update page 4 with the newly inserted record. If we're inserting out of rowid order we might need to update page 2 with the new info, and move some records to page 4.

Depending on the order these things happen and when you jump in there, you can see all sorts of potential problems. The SQLite library doing the inserts doesn't care, it has the file locked, so from its point of view no one else should be reading.

If we update sqlite_master first before appending to the file then we're pointing to a page that doesn't exist, and you won't see any records. If we move records from page 2 to page 4 before we update sqlite_master, then you could miss those moved records because you've got the wrong root page. Pick any order you want and you can find a spot to jump in and see an incomplete or broken view.

Depending on the circumstances the time windows for getting a bad view might be tiny or they might be huge, but they're definitely there.

13:17 Reply: Nesting of statements (artifact: f976802754 user: dvdraymond)

sqlite3_execf(db, "UPDATE table SET x=1 WHERE id=?", id);

Also, pardon the newbie question from me: What is sqlite3_execf? I don't see it in the C function list and sqlite3_exec has a completely different parameter list.

13:08 Reply: Nesting of statements (artifact: 4b03e859b7 user: dvdraymond)

I know this was just sample code, but of note I think that updating the same table that you're iterating over is fraught with peril. Better would be something like getting all the IDs you want into a temp table first, then doing the updates based off of what's in the temp table. By updating the table you're iterating over you could miss records, get duplicates, maybe end up in an unending loop, who knows. (Well, the others on this list with more experience than me probably know, and can chime in)

12:35 Reply: Documentation does not show when a feature was added (artifact: ad20d2d493 user: dvdraymond)

On the downloads page there's a documentation zip file included in the list of available files. So if you want to have the documentation from a specific release you can go back and grab that file for the release you're using.

Of course... there doesn't seem to be a quick link to that, so you'd have to alter the url and rely on consistent naming, which isn't the optimal solution, but hey.

Looks like https://www.sqlite.org/<year of release>/sqlite-doc-<version number>.zip

There's probably a way to do it with fossil, but I'm not familiar with that.

13:31 Reply: In-memory database from sqlite file (artifact: a1702ad31f user: dvdraymond)

Consistency is assured by use of either the rollback journal or the wal file depending on the journaling mode. The two files together (the main db and the rollback journal/wal file (if they exist)) are what makes up a complete database which is recoverable. You can be in the middle of writing to the main file and have the main file completely messed up, but because we made a backup copy to the rollback journal, then we can recover if everything dies while we're mucking around in the main file.

Have you ever done the case of "let me make a copy of this before I edit it in case I mess something up"? The rollback journal is basically that, but at the page level rather than the whole file level. So at any given point in time, the contents of the main file could be completely broken because we're making changes, but it's ok because we made a copy over there of everything we touched in case we need it to get the original back.

Since you're completely ignoring the locking mechanism and just taking the file contents by brute force whether the process using it is done cleaning up after itself or not, then you can't be sure what you're grabbing is complete.

13:10 Reply: Disable -wal option (artifact: 858d857575 user: dvdraymond)

When it comes down to it, SQLite on its own is just a fancy thing that edits a file. If you have rights to open something with a SQLite program, you can open it with any other program. If you compile your specific program with all sorts of fancy stuff removed, but don't trust the user, then it doesn't matter. They can just open up the file with their own version of SQLite compiled with all the bells and whistles still in it. Or open it and write to it with any other program.

It's like putting all sorts of fancy things into your own specific text editor. Someone can just shrug and say "eh, I'll just use notepad to edit it instead." Unless the only access to the data is via a trusted program, you can't really enforce anything.

13:49 Reply: In-memory database from sqlite file (artifact: 80fac46c4e user: dvdraymond)

I believe you'd be looking to use the deserialize interface.

Bear in mind though that the other process could be in the middle of writing to the file at any moment, and thus your memory copy of the file contents could be completely inconsistent and broken. There's pretty much nothing you can do to prevent that when you're skirting around the mechanisms put in place to ensure consistency.

16:44 Reply: Error: no such column: x (artifact: 7212171ac9 user: dvdraymond)

Again, because the FROM clause of your select statement does not include any table named or aliased as "o".

The only things in your FROM clause are "Project_List AS a" and "ABT_Budget AS b". "o" does not appear in there anywhere.

In particular, "maxes" which you made with your CTE does not show up in your query, so all the stuff you have at the start for defining "maxes" is thrown out and not used, because you don't use it anywhere.

You alias a subquery as "o" inside the definition of "maxes". That alias of "o" only works inside that definition of "maxes", you can't use it anywhere else.

14:53 Reply: get changed columns data (previous/ after) on update row and previous data before delete row (artifact: 9ba9a36e19 user: dvdraymond)
Triggers meanwhile are good if you want to do basic SQL with the changed values, like insert them into an auditing table. You can set up an on update trigger on each field to insert into an auditing table the field name, change time, old and new values, etc. Or an on delete trigger to insert all the old fields into a delete table along with the time.

With triggers it's all going to be handled by the SQLite engine in the background and none of the values or info will be sent back to the controlling program. It will basically be "well, you left me instructions on what to do on a delete, so I did it. No need to bug you every time about it."

--Not necessarily the best way to do it, but an example.

create table table1 (a text, b int, c float);

create table table1_audit
(change_time, change_type, old_a, old_b, old_c, new_a, new_b, new_c);

create trigger trg_table1_deletes
before delete on table1
begin
insert into table1_audit (change_time, change_type, old_a, old_b, old_c)
values (datetime('now'), 'delete', old.a, old.b, old.c);
end;

create trigger trg_table1_changes --only the fields that changed
before update on table1
begin
insert into table1_audit
values (datetime('now'), 'update',
case when new.a is not old.a then old.a end,
case when new.b is not old.b then old.b end,
case when new.c is not old.c then old.c end,
case when new.a is not old.a then new.a end,
case when new.b is not old.b then new.b end,
case when new.c is not old.c then new.c end);
end;

sqlite> insert into table1 values ('one', 1, 1.0), ('two', 2, 2.0);

sqlite> delete from table1 where b = 1;

sqlite> update table1 set c = 2.1 where b = 2;

sqlite> update table1 set a = 'three', b = 3, c = 3.0 where b = 2;

sqlite> select * from table1;
three|3|3.0

sqlite> select * from table1_audit;
2020-09-16 14:31:10|delete|one|1|1.0|||
2020-09-16 14:31:20|update|||2.0|||2.1
2020-09-16 14:31:37|update|two|2|2.1|three|3|3.0
