SQLite Forum

50 most recent forum posts by user ddevienne

2021-07-20
16:02 Reply: cycle detection in CHECK constraint with recursive CTE (artifact: cb6417fdf5 user: ddevienne)

OK, sure. What I propose is a pure TREE, a simpler form of DAG indeed.
But still, what you have will not scale well. At least index the from/to tags,
and probably add a UNIQUE(from, to) constraint too.

That way the WHERE from = ... queries will range-scan that automatic UNIQUE index,
and the WHERE to = ... ones will either skip-scan that same index,
or use an explicit (non-UNIQUE) index you create for them. Look at the query plans.

You probably also want ON DELETE CASCADE in your FKs, to auto-delete edges
when tags are removed. Unless your ORM does that in code instead?

You also don't need an id column for edges, but I suspect it's your ORM adding it, right?
I'd make the edges table a WITHOUT ROWID table, and have the PK be (from, to),
effectively replacing the UNIQUE index I proposed above.
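
A rough sketch of what I mean (table/column names illustrative; "from" and "to" must be quoted, being keywords):

CREATE TABLE edge (
  "from" integer NOT NULL REFERENCES tag("id") ON DELETE CASCADE,
  "to"   integer NOT NULL REFERENCES tag("id") ON DELETE CASCADE,
  PRIMARY KEY ("from", "to")
) WITHOUT ROWID;
-- only if skip-scan of the PK is not enough for the WHERE "to" = ... queries:
CREATE INDEX edge_to ON edge("to");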

Still, I somehow feel there's a better way to do what you want, I just don't see it now. Maybe I'm just imagining things.

PS: Also, don't forget to enable pragma foreign_keys = 1, unlike me!

14:35 Reply: ROWID in compound SELECT (artifact: 79fa2cae2e user: ddevienne)

See point #3 of https://www.sqlite.org/changes.html for 3.36

14:32 Edit reply: cycle detection in CHECK constraint with recursive CTE (artifact: 04a82fb13f user: ddevienne)

I think your schema is to blame. If you want a DAG, then use a single table, with a self-referential parent.

Most DBMSs implement immediate Foreign Keys by default, unlike SQLite. (see correction in follow-up posts)
So with a table such as:

CREATE TABLE IF NOT EXISTS "tag" (
  "id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, 
  "name" varchar(40) NOT NULL UNIQUE,
  "parent" integer REFERENCES "tag" ("id"),
  UNIQUE(parent, name) -- or UNIQUE(name), depends what you want
);

You cannot even insert a cycle with that schema, ensuring a DAG.

With deferred FKs, the only mode in SQLite (see later posts), you then need to resort
to a TRIGGER to detect cycles. But that trigger can be much faster,
since it starts from the leaf, instead of the roots like yours above.
It only has to detect a duplicate ID when traversing from the one leaf
(that fired the trigger) up to its root, which is roughly log(N) steps.
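
A rough sketch of such a trigger (names hypothetical, and assuming your SQLite version accepts CTEs inside trigger bodies, which older versions did not):

CREATE TRIGGER tag_no_cycle
AFTER UPDATE OF parent ON tag
WHEN NEW.parent IS NOT NULL
BEGIN
  SELECT RAISE(ABORT, 'cycle detected')
  WHERE EXISTS (
    WITH RECURSIVE chain(id) AS (
      SELECT NEW.parent
      UNION ALL
      SELECT t.parent FROM tag t JOIN chain c ON t.id = c.id
      WHERE t.parent IS NOT NULL
    )
    -- walk up from the updated row's new parent; reaching NEW.id again means a cycle
    SELECT 1 FROM chain WHERE id = NEW.id
  );
END;
-- a similar AFTER INSERT trigger would guard against a row inserted with parent = id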

Maybe I missed something, and the above is wrong. I'll know soon I guess :)

PS: AFAIK, a CHECK constraint can only access the local row, not run an arbitrary query,
so that approach was flawed from the get-go, I believe. Happy to be corrected on that.

14:31 Edit reply: cycle detection in CHECK constraint with recursive CTE (artifact: 2a91f2f203 user: ddevienne)

Well, this looks like deferred to me, no?

C:\Users\ddevienne>sqlite3
SQLite version 3.33.0 2020-08-14 13:23:32
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite> create table tag (id integer primary key, name text, parent integer references tag (id), UNIQUE(parent, name));
sqlite> begin;
sqlite> insert into tag values (1, "leaf", 0);
sqlite> insert into tag values (0, "root", null);
sqlite> commit;
sqlite> select * from tag order by id;
0|root|
1|leaf|0
sqlite>

Update: Rah, I always forget about pragma foreign_keys = 1; !!! So you are right indeed. My bad!

C:\Users\ddevienne>sqlite3
SQLite version 3.33.0 2020-08-14 13:23:32
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite> pragma foreign_keys = 1;
sqlite> create table tag (id integer primary key, name text, parent integer references tag (id), UNIQUE(parent, name));
sqlite> begin;
sqlite> insert into tag values (1, "leaf", 0);
Error: FOREIGN KEY constraint failed

14:31 Edit reply: cycle detection in CHECK constraint with recursive CTE (artifact: 3d34635057 user: ddevienne)

Well, this looks like deferred to me, no?

C:\Users\ddevienne>sqlite3
SQLite version 3.33.0 2020-08-14 13:23:32
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite> create table tag (id integer primary key, name text, parent integer references tag (id), UNIQUE(parent, name));
sqlite> begin;
sqlite> insert into tag values (1, "leaf", 0);
sqlite> insert into tag values (0, "root", null);
sqlite> commit;
sqlite> select * from tag order by id;
0|root|
1|leaf|0
sqlite>

Update: Rah, I always forget about pragma foreign_keys = 1; !!! So you are right indeed. My bad!

14:28 Reply: cycle detection in CHECK constraint with recursive CTE (artifact: 4c3adefb82 user: ddevienne)

Well, this looks like deferred to me, no?

C:\Users\ddevienne>sqlite3
SQLite version 3.33.0 2020-08-14 13:23:32
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite> create table tag (id integer primary key, name text, parent integer references tag (id), UNIQUE(parent, name));
sqlite> begin;
sqlite> insert into tag values (1, "leaf", 0);
sqlite> insert into tag values (0, "root", null);
sqlite> commit;
sqlite> select * from tag order by id;
0|root|
1|leaf|0
sqlite>
12:23 Reply: cycle detection in CHECK constraint with recursive CTE (artifact: d13515cf30 user: ddevienne)

I think your schema is to blame. If you want a DAG, then use a single table, with a self-referential parent.

Most DBMSs implement immediate Foreign Keys by default, unlike SQLite.
So with a table such as:

CREATE TABLE IF NOT EXISTS "tag" (
  "id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, 
  "name" varchar(40) NOT NULL UNIQUE,
  "parent" integer REFERENCES "tag" ("id"),
  UNIQUE(parent, name) -- or UNIQUE(name), depends what you want
);

You cannot even insert a cycle with that schema, ensuring a DAG.

With deferred FKs, the only mode in SQLite, you then need to resort
to a TRIGGER to detect cycles. But that trigger can be much faster,
since it starts from the leaf, instead of the roots like yours above.
It only has to detect a duplicate ID when traversing from the one leaf
(that fired the trigger) up to its root, which is roughly log(N) steps.

Maybe I missed something, and the above is wrong. I'll know soon I guess :)

PS: AFAIK, a CHECK constraint can only access the local row, not run an arbitrary query,
so that approach was flawed from the get-go, I believe. Happy to be corrected on that.

12:02 Reply: cycle detection in CHECK constraint with recursive CTE (artifact: ad291bd3ec user: ddevienne)

FWIW, an EXISTS predicate often uses a SELECT 1 FROM ... WHERE ... subquery, because it does not matter what you select:
it's the mere fact that a single row exists that matters. So selecting anything else is just wasting cycles :).
Not that it matters performance-wise, most likely, so just an FYI.
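
E.g. (using the tag table from earlier in this thread):

SELECT t.name FROM tag t
WHERE EXISTS (SELECT 1 FROM tag c WHERE c.parent = t.id); -- tags with at least one child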

2021-07-19
12:11 Reply: Populating a tree from a flat table of subcategories (artifact: c9993bacc6 user: ddevienne)

Start with https://www.sqlite.org/lang_with.html

But it could be done with a 3-way join too, since you explicitly want 3 levels only.
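
A rough sketch of the CTE approach, assuming a table like category(id, name, parent):

WITH RECURSIVE tree(id, name, level) AS (
  SELECT id, name, 1 FROM category WHERE parent IS NULL
  UNION ALL
  SELECT c.id, c.name, t.level + 1
  FROM category c JOIN tree t ON c.parent = t.id
  WHERE t.level < 3 -- the OP wants 3 levels only
)
SELECT * FROM tree ORDER BY level;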

2021-07-08
17:24 Reply: How to detect invalid trigger w/o actually running it? (artifact: eea3380888 user: ddevienne)

Thanks for the suggestion. Although with triggers it can't be just SELECTs,
and coming up with all kinds of dummy statements to EXPLAIN, just to be
sure there's no mistake hiding in the schema, seems less than ideal.

Surely there has got to be a better way?

And if not at the moment, shouldn't there be an opt-in pragma check_schema in the future,
that checks everything declared in sqlite_schema is OK, i.e. references existing tables/views/columns?

08:40 Post: How to detect invalid trigger w/o actually running it? (artifact: 9264574ffd user: ddevienne)

Hi. We've had a mishap with a trigger referencing an unknown table,
which was detected too late, by internal users. The SQL script doing
the upgrade (that added the faulty trigger) was run during tests,
and installed successfully w/o errors, but of course actually using it
yielded errors.

Validating all triggers at runtime is currently not something we can do...

So is there a way to quickly detect any invalid objects, like triggers,
views, etc., which does not imply running them (via a DML statement)?

Something we could run post-schema-upgrade or post-schema-creation, to
avoid a repeat of the above mishap?

Thanks, --DD

PS: Note that we may have pragma legacy_alter_table enabled, I'm not 100% sure,
on the connection doing the upgrade, if that makes a difference.

2021-07-07
09:29 Edit reply: How ddo I specify options with MSVC (artifact: f31f52a8fe user: ddevienne)

You just add /D "SQLITE_MAX_COLUMN=16384" on your CL command line.

You may also want to add /O2 to make it fast too.
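
For example, building the CLI from the amalgamation sources (a sketch; adapt to your own build):

cl /O2 /D "SQLITE_MAX_COLUMN=16384" shell.c sqlite3.c /Fesqlite3.exe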

But of course, having 16K columns is suspicious from a relational design POV :)

09:24 Reply: How ddo I specify options with MSVC (artifact: 0ae9c9a83e user: ddevienne)

You just add /D "SQLITE_MAX_COLUMN=16384" on your CL command line.

But of course, having 16K columns is suspicious from a relational design POV :)

2021-07-06
19:02 Reply: shared in memory db: :memory:+shared cache OR memdb VFS? (artifact: 6700ab1f9f user: ddevienne)

Well, the point is that shared-cache is frowned upon, and kinda deprecated,
except on very low resource devices, which is precisely not the case when wanting
to use it on large multi-core desktop machines. Yet shared-cache was the only
known way to have separate SQLite connections sharing the same in-memory DB within
the bounds of a single process, allowing concurrent access to the same DB from several
threads. Until MemDB VFS. But then you say they end up being the same code... that's confusing.

Shared-cache implies normal journal mode, with readers blocking writers, and the writer
blocking readers; WAL is not supported. Yet in theory nothing prevents a VFS from supporting
WAL mode, as long as that VFS supports SHM, which it can if that VFS is in-memory.
Thus my question whether MemDB VFS supports WAL mode, for true MVCC.

Hopefully everything will be clearer soon. --DD

14:48 Reply: shared in memory db: :memory:+shared cache OR memdb VFS? (artifact: d33f5d5dd1 user: ddevienne)

No WAL mode and no MVCC then, I guess :(

Too bad. I thought going for the VFS approach was precisely to gain WAL-mode
for shared in-memory DBs, thus I don't quite understand why a separate approach
was added, if it ends up going through the same code in the end.

There must be something I'm missing. Looking forward to the new doc.

14:16 Reply: shared in memory db: :memory:+shared cache OR memdb VFS? (artifact: 1704cb3c96 user: ddevienne)

And one question I had when I saw the 3.36 announcement regarding MemDB VFS,
was whether WAL mode was supported in the new mode, for MVCC.

All readers/writers are in the same process, so having shared-memory for WAL
downgrades to plain memory then, just with some synchronization, no?

2021-06-22
11:41 Post: Extend the ZIP vtable (artifact: a306810331 user: ddevienne)

Hi,

I'm dealing with very large ZIP files (> 300K entries, multi-gigabyte),
and given how slow ZLib is (50 - 80 MB/s, depending on level and data),
and the fact I'm running on a large desktop machine with many cores, I'd like to
parallelize the compression aspects, and leave the raw ZIP IO to the vtable.

On the decompression (read) side, the current vtable already has access to the
compressed (raw) data via the rawdata column, but not the CRC32 for that entry, to
be used by the outside parallel code to check the uncompressed data.

The compressed size is implicit from the rawdata blob itself,
although I guess that makes length(rawdata) more expensive than it should be, no?
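
For reference, a read-side sketch as it works today (archive name hypothetical):

SELECT name, sz, method, rawdata
FROM zipfile('big-archive.zip');
-- rawdata: the raw, still-compressed bytes; sz: the uncompressed size;
-- no crc column yet, hence this proposal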

On the compression (write) side, both the uncompressed size sz and rawdata must currently be
NULL, while in my case I'd like them not to be (and instead have data NULL).
And there's again no CRC32 column exposed, to write it directly (computed outside, in parallel, like the rawdata).

If I were to change zipfile.c to expose the crc and compressed-size columns (e.g. szz),
and allow writing (sz, rawdata, crc), would such a contribution have a chance of being incorporated?

Regarding length(data) (which sz replaces) and length(rawdata) (no equivalent column for now),
should we override length() for that vtable? Is there even a way to know length() is being called
on one of our special virtual columns, to answer the question directly, w/o actually reading and allocating
those blob values, when all one wants is their lengths?

Thanks, --DD

2021-06-11
08:44 Edit reply: Vritual Tables Performing SQL Queries (artifact: 0bd6ca49c1 user: ddevienne)

See https://github.com/0x09/sqlite-statement-vtab/blob/master/statement_vtab.c from this post

Another classic one would be a Pivot VTab, like this one

08:41 Reply: Vritual Tables Performing SQL Queries (artifact: d0a80b3d68 user: ddevienne)
2021-06-08
11:53 Reply: Feature request: Stored Procedures (artifact: d22b2c1eab user: ddevienne)

Exactly. Also agree SQLite is often used stand-alone.

Richard already has a mini-scripting language around, TH1. It is not my cup of tea, being TCL-based (and thus entirely string-based), but simplicity matters in this context, and the fact it's already Richard's code helps a lot too. With a few extended commands to gain access to the SQL engine, for dynamic SQL, different from those used by Fossil, you have most of what's needed for Stored Procedures, which Richard could decide to put in a reserved-name table, sqlite_proc for example.

Put that in the amalgamation, with an opt-in flag, and you have all the ingredients necessary. Sure, a DB with these features would not be properly understood by older clients, but the same is true of many new SQLite features, and over time most clients would support them.

Where things fall down is whether Richard sees the need for all that, given the additional maintenance burden (shared with Fossil though, in part), and the fact that his main uses of SQLite are embedded, where you have the full C language at your fingertips. Maybe it will come one day, but I doubt it.

2021-06-07
07:57 Reply: Transaction level triggers? (artifact: f76641770b user: ddevienne)

But the transaction could be implicit. Or involve other tables that don't need the Transact row.
Sounds like what you are really asking for is statement-level triggers IMHO, rather than transaction-level triggers,
which could then be table-specific, and insert the row if needed (although how to know if the row is needed? That implies having a built-in for the current transaction somehow).

I've long wished for statement-level triggers in SQLite, as opposed to FOR EACH ROW ones. Been using the former in Oracle and PostgreSQL, and miss them in SQLite.
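
For reference, a sketch of what that looks like in PostgreSQL (function name hypothetical); SQLite only has the (implicitly) FOR EACH ROW flavor:

-- fires once per statement, no matter how many rows were inserted
CREATE TRIGGER orders_audit
AFTER INSERT ON orders
FOR EACH STATEMENT
EXECUTE FUNCTION record_transaction();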

2021-05-18
11:16 Reply: Bug report: strange error message in cte and window function (artifact: 6e54dbeb22 user: ddevienne)

Hi Dan. Just curious regarding this commit:

The .test and .tcl files seem to overlap a lot, with one being a superset
of the other, with expected rows added to it. Can you please explain why that is?

(there's also a breakpoint added somewhere, as another difference).

2021-05-13
19:09 Reply: sqlite3_stmt_readonly is dependent on the current database (artifact: bac557cc17 user: ddevienne)

I personally think that's the correct behavior.

Statement classification should not depend on the current schema at all, in fact.
And DDL statements are all basically non-read-only, by design, in my opinion.
And for DML, only SELECT can be read-only.

Unless I misunderstand what that API is supposed to return, just from its name (haven't read the doc).

2021-05-11
14:07 Reply: Fast way to insert rows in SQLite (artifact: ad4458bd32 user: ddevienne)

16KB was faster for me, but your row data is tiny, mine was bigger (strings and blobs, up to 1MB per row).

If you measure the time the producing side takes,
versus the consuming (SQLite) side, you'll see the majority is on the SQLite side (> 99% I suspect).

So creating many producers (1-per CPU) to parallelize the 1% or less of the total, is not going to help much. A single producer is enough.

Plus having many producers increases contention too.

W/o a fixed-size queue, and with a producer much faster than the consumer side, you are actually accumulating too much memory in the queue, increasing allocation, etc...
A fixed-size queue blocks the producer when the queue is full, and wakes it up only after the consumer has processed one or more items.

Measure peak RAM, in addition to wall time. With a fixed-size queue, the peak RAM should be tiny.
While w/o one, I expect the peak RAM to be much larger, close to the full data size in your case, the producing side being so much faster.

In a real-world scenario, there's not such an imbalance between what SQLite must do and what the rest of the code must do,
so the SPSC approach on two threads works fine, up to a maximum of 2x faster in the perfect case.
But the total can't be lower than max(producer, consumer), so if your consumer is already at 99%, at most you save 1%...

At over 3M rows per second, you're already very fast. Can't get any faster IMHO.

Stephan's suggestion to use an in-memory DB is also a good one.
Will give you the maximum throughput w/o the vagaries of the filesystem.

10:52 Reply: Fast way to insert rows in SQLite (artifact: ab320568a4 user: ddevienne)

Here's the DDL from the OP's Rust code linked above:

        "CREATE TABLE IF NOT EXISTS user (
                id INTEGER not null primary key,
                area CHAR(6),
                age INTEGER not null,
                active INTEGER not null)",

08:48 Edit reply: Fast way to insert rows in SQLite (artifact: 568f5fce3e user: ddevienne)

Besides playing with the page_size, there isn't much that can be improved.

The default is 4KB pages these days, and you might gain a little with 8KB
and 16KB at insertion time, at the expense of more IO later on updates and
deletes, so there's a balance to find specific to what you are doing.

Note that your cache_size is affected by the page_size, unless you switch to
a negative value, for an absolute size in KiB. See pragma_cache_size
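
E.g. (sizes illustrative):

PRAGMA page_size = 8192;    -- takes effect on a new DB, or after VACUUM
PRAGMA cache_size = -64000; -- negative: absolute size in KiB (~64MB), independent of page_size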

Multi-threading won't help at all on the SQLite side. But you can of course
prepare all your batches in a separate thread, and use an SPSC queue to pass
those batches around to the SQLite thread, to at least get some concurrency
between the SQLite code, and your own code.

(But you probably want your queue to be fixed-size and blocking when full,
to throttle the producer-side, which is going to be (much?) faster than the
SQLite consumer side actually doing the IO)

Maybe others will have further suggestions. --DD

08:40 Reply: Fast way to insert rows in SQLite (artifact: 27fdb02205 user: ddevienne)

Besides playing with the page_size, there isn't much that can be improved.

The default is 4KB pages these days, and you might gain a little with 8KB
and 16KB at insertion time, at the expense of more IO later on updates and
deletes, so there's a balance to find specific to what you are doing.

Note that your cache_size is affected by the page_size, unless you switch to
a negative value, for an absolute size in KiB. See pragma_cache_size

Multi-threading won't help at all on the SQLite side. But you can of course
prepare all your batches in a separate thread, and use an SPSC queue to pass
those batches around to the SQLite thread, to at least get some concurrency
between the SQLite code, and your own code.

Maybe others will have further suggestions. --DD

2021-04-28
12:56 Reply: Blobs & SQLITE_TRANSIENT clarification (artifact: 5bb2d1644a user: ddevienne)

See also Richard's answer here and his confirmation of the above here.

Yes, the doc could be a bit better I suppose.

08:12 Edit reply: Blobs & SQLITE_TRANSIENT clarification (artifact: 8340f64524 user: ddevienne)

sqlite3_bind_blob([...], SQLITE_TRANSIENT);
then, immediately delete pUniqueMemStrm without repercussion?

Yes. Because SQLITE_TRANSIENT means SQLite makes an internal copy.
So when the bind returns, you are free to release your own memory.
BUT, you are probably pessimizing things...

I use SQLITE_STATIC myself, AND make sure the memory stays around
until the _step; _reset; calls, after which I either rebind a NULL
or _clear_bindings, to make SQLite forget about that pointer.

Binds are sticky until explicitly cleared, or rebound over.
The above technique avoids unnecessary copies.

As to deleting a std::unique_ptr it's perfectly OK to .reset() it
instead of letting it do it implicitly when going out-of-scope, to release
memory as early as possible. RAII is about exception safety to avoid leaks
in this case, and there's nothing wrong with explicitly releasing resources earlier.
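
A rough sketch of that pattern in C (error handling elided):

#include <sqlite3.h>

/* Caller must keep `buf` alive until this returns; after _clear_bindings,
   SQLite no longer holds our pointer, so the memory may then be freed. */
static int insert_blob(sqlite3_stmt *stmt, const void *buf, int len)
{
    sqlite3_bind_blob(stmt, 1, buf, len, SQLITE_STATIC); /* no copy made */
    int rc = sqlite3_step(stmt);
    sqlite3_reset(stmt);          /* statement is no longer running...  */
    sqlite3_clear_bindings(stmt); /* ...and SQLite forgets the pointer  */
    return (rc == SQLITE_DONE) ? SQLITE_OK : rc;
}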

08:11 Edit reply: Blobs & SQLITE_TRANSIENT clarification (artifact: d2be307833 user: ddevienne)

sqlite3_bind_blob([...], SQLITE_TRANSIENT);
then, immediately delete pUniqueMemStrm without repercussion?

Yes. Because SQLITE_TRANSIENT means SQLite makes an internal copy.
So when the bind returns, you are free to release your own memory.
BUT, you are probably pessimizing things...

I use SQLITE_STATIC myself, AND make sure the memory stays around
until the _step; _reset; calls, after which I either rebind a NULL
or _clear_bindings, to make SQLite forget about that pointer.

Binds are sticky until explicitly cleared, or rebound over.
The above technique avoids unnecessary copies.

As to deleting a std::unique_ptr it's perfectly OK to .reset() it
instead of letting it do it implicitly when going out-of-scope, to release
memory as early as possible. RAII is about exception safety to avoid leaks
in this case, and there's nothing wrong with explicitly releasing resources earlier.

08:10 Reply: Blobs & SQLITE_TRANSIENT clarification (artifact: c6fe519852 user: ddevienne)

sqlite3_bind_blob([...], SQLITE_TRANSIENT);
then, immediately delete pUniqueMemStrm without repercussion?

Yes. Because SQLITE_TRANSIENT means SQLite makes an internal copy.
So when the bind returns, you are free to release your own memory.
BUT, you are probably pessimizing things...

I use SQLITE_STATIC myself, AND make sure the memory stays around
until the _step; _reset; calls, after which I either rebind a nullptr
or _clear_bindings, to make SQLite forget about that pointer.

Binds are sticky until explicitly cleared, or rebound over.
The above technique avoids unnecessary copies.

As to deleting a std::unique_ptr it's perfectly OK to .reset() it
instead of letting it do it implicitly when going out-of-scope, to release
memory as early as possible. RAII is about exception safety to avoid leaks
in this case, and there's nothing wrong with explicitly releasing resources earlier.

2021-04-21
13:52 Reply: The NUMERIC data type? (artifact: 437810a956 user: ddevienne)
07:20 Edit reply: Why no release announcement in this Forum anymore? (artifact: fa3a6af85b user: ddevienne)

FWIW, I double-checked I'm subscribed to the Announcements Topic, then went back to my GMail archive for the forum emails, and found the 3.35, 3.34, 3.33 release notifications indeed.

But OTOH, I didn't find any of the 3.35.x ones, nor the 3.34.1 one. I did find older patch releases though. GMail search does not quite appear exhaustive in what it returns, so did I miss some? Doesn't the forum have a page that shows all announcements, that we can look at? It has all the forum posts, so why not the announcements?

So perhaps the OP is not wrong that not all releases are consistently announced. Maybe it should be part of the release check-list? Or even automated as well?

07:19 Reply: Why no release announcement in this Forum anymore? (artifact: db976e5c4d user: ddevienne)

FWIW, I double-checked I'm subscribed to the Announcements Topic, then went back to my GMail archive for the forum emails, and found the 3.35, 3.34, 3.33 release notifications indeed.

But OTOH, I didn't find any of the 3.35.x ones, nor the 3.34.1 one. I did find older patch releases though. GMail search does not quite appear exhaustive in what it returns, so did I miss some? Doesn't the forum have a page that shows all announcements, that we can look at? It has all the forum posts, so why not the announcements?

So perhaps the OP is not wrong that not all releases are consistently announced. Maybe it should be part of the release check-list?

2021-04-19
14:22 Reply: How to Record all Transactions (artifact: 5ff9d5ce84 user: ddevienne)

Besides tracing, SQLite has various hooks (update, commit, pre-update),
and also an authorizer API, all giving information about what's going on,
which could be useful, depending on what you are trying to do exactly.
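
E.g. the update hook (a C-API sketch):

#include <sqlite3.h>
#include <stdio.h>

/* Called for every row INSERTed, UPDATEd, or DELETEd on this connection. */
static void on_change(void *ctx, int op, const char *db,
                      const char *table, sqlite3_int64 rowid)
{
    const char *what = (op == SQLITE_INSERT) ? "INSERT"
                     : (op == SQLITE_UPDATE) ? "UPDATE" : "DELETE";
    fprintf(stderr, "%s %s.%s rowid=%lld\n", what, db, table, (long long)rowid);
}

/* after opening the connection: sqlite3_update_hook(conn, on_change, NULL); */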

14:17 Reply: SQLite doesn't use indexes for bitwise clauses? (artifact: 77f12f0a1f user: ddevienne)

Yes, I assumed bit expressions involved positive integers only indeed.
I thought the bitwise operators required that. That's my only use-case for them in fact.

Perhaps the OP wants to look into expression-based indexes and/or partial indexes instead?
That does assume his (or her) ? values have very low cardinality.

Anyways, as I wrote, I was just thinking aloud.

12:53 Reply: SQLite doesn't use indexes for bitwise clauses? (artifact: 4fee04fb03 user: ddevienne)

Let h be the integer with the high bit (most significant) of ? set as its only ON bit.
Only x values with that bit set can be non-zero when ANDed with ?, which implies they are >= h.
Therefore x & ? <> 0 implies x >= h, which allows starting the scan traversal from somewhere
later than the start, thus avoiding a full scan? Just thinking aloud :). I suspect I'll soon be corrected!

08:59 Reply: usage of SHA-1 (artifact: 0dd4b73ef9 user: ddevienne)

You don't seem to get it. SHA-1 is part of the SQLite extensions,
which add additional functions and virtual tables on top of the SQLite core,
and which anyone is free to add to their SQLite build or load as dynamic extensions.

In this case, it adds a function to compute SHA1 hashes. That's not bad
in and of itself, just as there's nothing wrong with computing a CRC32,
which is even less secure than SHA1...

How people use these hashes is not SQLite's problem. Nor is SHA1 bad per se;
it's what SHA1 hashes are used for that matters. And SQLite doesn't use them.
SQLite also provides a SHA3 extension, right next to the SHA1 extension,
so no one is forced to use SHA1 either.

e.g. Fossil uses SHA3 now, but used to use SHA1. Backward compatibility
with older repos requires SHA1 support too, of course. So both are needed and useful.

Removing SHA1 from the SQLite repo would be a disservice to anyone that depends on it.
And it is highly unlikely to be removed (I hope so at least).

2021-04-15
09:38 Reply: Noob Question (artifact: 71fdd2830e user: ddevienne)

Are you the author, Ryan? Then maybe update the 18-month-old SQLite? :)

New features in version 2.1.3: Support standard SQLite functionality and changes up to version 3.30.1

2019-10-10 (3.30.1)

2021-04-12
08:57 Reply: XML extension (artifact: eb86b8d3fa user: ddevienne)

Thanks for sharing! I do use XML inside SQLite, so that's interesting to me.
Although ATM I do that outside the DB, just storing the XML.

One thing to note though is that pugixml is not Namespace-aware.
Namespace prefixes have no meaning in XML processing (except for the xml: one),
and you're on your own with pugixml to get at the Namespace URIs that really matter.
For multi-Namespace XML documents, that makes it challenging.

2021-04-09
07:18 Delete reply: json_contains Function Idea (artifact: 57536f3aeb user: ddevienne)
Deleted
07:17 Reply: json_contains Function Idea (artifact: 19e59eee76 user: ddevienne)

Aren't your examples equivalent to what json_extract(X, P) = V does?

Except for the case where the value is null in the X JSON doc, since then there's no distinction between a (SQL) NULL returned for P not found
and a (SQL) NULL extracted from a (JSON) null value. I'm not sure that case matters, though. (Also, expr = NULL is never true, of course.)
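
E.g.:

SELECT json_extract('{"a": 1, "b": null}', '$.a'); -- 1
SELECT json_extract('{"a": 1, "b": null}', '$.b'); -- SQL NULL (from JSON null)
SELECT json_extract('{"a": 1, "b": null}', '$.c'); -- SQL NULL too (path not found)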

2021-04-06
08:50 Reply: open db in read only; explicitly store diff as WAL ? (artifact: cdec30c813 user: ddevienne)

What you describe is exactly what SQLite's WAL mode does, if one does not checkpoint (auto-checkpointing is active by default).

What's missing is the ability to use that .wal file independently, and apply it manually to the DB,
although someone managed to do that w/o patching SQLite (AFAIK), by careful use of the official SQLite3 API.

The sqldiff utility is static: you'd need to keep an unchanged copy of your old.db, and diff it against the resulting new.db,
to obtain SQL text for the logical changes between the two. The .wal file, in contrast, records physical changes to pages.
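
I.e. something like:

sqldiff old.db new.db

which emits the INSERT/UPDATE/DELETE statements turning old.db into new.db.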

In any case, your use-case is not quite mainstream, and not directly supported by SQLite I'm afraid.

2021-03-23
10:23 Reply: When is SELECT ... FROM with no table-or-subquery valid. (artifact: 9cf9de6e84 user: ddevienne)

BTW: Oracle [...] provide an empty table named DUAL that
exists solely to fulfill an otherwise empty FROM clause

Actually, it's not empty but has a single row and column :)

2021-03-19
15:28 Reply: Constexpr parsing of sql statement (artifact: 8cbab819ea user: ddevienne)

To my surprise, constexpr SQL parsing is a thing :)
That one even tests against SQLite, according to its doc, so maybe you're in luck.

Constexpr, i.e. compile-time processing, is a modern C++ (C++11 and later) thing, and SQLite is pure C.
So you're quite unlikely to get it from the official SQLite authors I'm guessing.

2021-03-18
09:20 Reply: SQL to return list of PRAGMAs (artifact: 1daff211d2 user: ddevienne)
  1. it's an SQLite extension, from DRH himself
  2. Not sure what you mean. Of course they can be used.
  3. Many things in SQLite can be turned ON/OFF by a #define, including Geopoly.

So your "out of the box" simply means that whoever built the SQLite lib or CLI you used
included the Geopoly extension in the build, that's it.

Whether it should be considered built-in or not, well, who cares?
It does come from the SQLite3 authors (the main one, in fact), but since
it's an extension, that's likely why it's not considered built-in, I suspect. FWIW.
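
E.g., to check a given build:

PRAGMA compile_options; -- ENABLE_GEOPOLY appears in the list when Geopoly is compiled in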

2021-03-17
10:43 Reply: SQL to return list of PRAGMAs (artifact: 7c3e45c4ec user: ddevienne)
2021-03-10
08:15 Reply: Verifying schema for application file format (artifact: cfae30e06a user: ddevienne)

Also look into SQLite's own dbhash utility, to check the whole DB.

2021-03-04
18:51 Edit reply: How to handle double backslashes when binding parameters in C API? (artifact: db16b33a84 user: ddevienne)

I don't need two backslashes in the final text of the object property p passed in the JSON string.

Well, according to the JSON grammar, you do in fact, IMHO.
So the JSON text stored in the DB should have two backslashes,
such that the output of a JSON parser has a single backslash
for that text value.

I.e. the text in the DB supposed to be in JSON format must double
backslashes, if you expect to have those backslashes as text node
values in that JSON document.

When you extract a text node value out of a JSON document, the returned
text is no longer part of a JSON document, so does not need to respect
the JSON grammar, and can have a single backslash.

Basically you have to differentiate the lexical-space, and the value-space.
You also need to differentiate text in JSON-land, and outside JSON-land.

I hope that makes sense...
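
E.g. (SQL string literals do not process backslashes, so the literal below is the JSON lexical space):

SELECT json_extract('{"p": "a\\b"}', '$.p');
-- two backslashes in the JSON text (lexical space),
-- one backslash in the extracted value: a\b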

18:51 Reply: How to handle double backslashes when binding parameters in C API? (artifact: 333289adf1 user: ddevienne)

I don't need two backslashes in the final text of the object property p passed in the JSON string.

Well, according to the JSON grammar, you do in fact, IMHO.
So the JSON text stored in the DB should have two backslashes,
such that the output of a JSON parser has a single backslash
for that text value.

I.e. the text in the DB supposed to be in JSON format must double
backslashes, if you expect to have those backslashes as text node
values in that JSON document.

When you extract a text node value out of a JSON document, the returned
text is no longer part of a JSON document, so does not need to respect
the JSON grammar, and can have a single backslash.

Basically you have to differentiate the lexical-space, and the value-space.
You also need to differentiate text in JSON-land, and outside JSON-land.

I hope that makes sense...
