SQLite Forum


14 forum posts by user SeverKetor

02:28 Reply: Assistance with using SQLite in PAMGuard (artifact: 859fadc035 user: SeverKetor)

This sounds more like a PAMGuard issue than an SQLite one. If I had to throw a dart blindfolded, I'd guess it's not committing transactions. Maybe there's a Save button/option/whatever that you need to hit? You'll have better luck if you ask for PAMGuard support elsewhere.

13:11 Reply: Expired certificate (artifact: 7405b1f92e user: SeverKetor)

I'm on my phone so it's annoying to check things, but are you sure the certificate problem is with SQLite's site? It looks like they use Let's Encrypt, which recently had one of its root certificates finally expire. It's probably worth checking whether whatever you're trying to download with is still using the old certificate, and updating that if it is.

23:33 Post: Small Mistake in Delete Limit Railroad Diagram (artifact: 35b89409e5 user: SeverKetor)

Just noticed this as I was double-checking something: the delete-stmt-limited diagram on <https://www.sqlite.org/lang_delete.html> shows an arrow going from the ordering terms straight to the end of the statement, as if ORDER BY could appear without LIMIT. Actually trying a statement like DELETE FROM table ORDER BY Column results in "Error: ORDER BY without LIMIT on DELETE".

19:41 Reply: How to find out whether a table is STRICT? (artifact: 5fdbd97e09 user: SeverKetor)

Looks like an official version was actually just added in https://www.sqlite.org/src/info/2c0e7ae541e9ecb8

03:01 Reply: adding record if it doesn't already exist (artifact: e0071ee2ef user: SeverKetor)

Add a unique index to table1 on email, then do INSERT OR IGNORE INTO "table1" ("email") VALUES ('email@addre.ss');
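For anyone wanting to try that out, here is a minimal runnable sketch of the same idea using Python's sqlite3 module (the index name is made up):

```python
import sqlite3

# Unique index on email, then INSERT OR IGNORE: the second identical
# insert is silently skipped instead of raising a constraint error.
con = sqlite3.connect(":memory:")
con.execute('CREATE TABLE "table1" ("email" TEXT)')
con.execute('CREATE UNIQUE INDEX "table1_email" ON "table1" ("email")')

for _ in range(2):
    con.execute('INSERT OR IGNORE INTO "table1" ("email") VALUES (?)',
                ("email@addre.ss",))

count = con.execute('SELECT COUNT(*) FROM "table1"').fetchone()[0]  # → 1
```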

23:31 Reply: DELETE FROM … RETURNING (artifact: 1370b5258b user: SeverKetor)

I can dream up some uses where something like that would be nice to have. Maybe you have a RETURNING clause on a DELETE query and want the output ordered alphabetically by a column. It wouldn't delete the rows in that order (or at least, it would not specifically try to); it would just change how you see the returned rows.

Given that you can just sort the output yourself outside of SQLite it's hardly important, but occasionally it would be nice.
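To illustrate the do-it-yourself version: fetch the matching rows in the order you want, then delete them. The table and data here are invented, and in real code the two statements would need to run inside one transaction so nothing changes the table in between:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE items (name TEXT, expired INTEGER)")
con.executemany("INSERT INTO items VALUES (?, ?)",
                [("banana", 1), ("apple", 1), ("cherry", 0)])

# "Ordered RETURNING" by hand: select in the desired order, then delete.
doomed = con.execute(
    "SELECT name FROM items WHERE expired = 1 ORDER BY name").fetchall()
con.execute("DELETE FROM items WHERE expired = 1")
# doomed is [('apple',), ('banana',)]; only ('cherry', 0) remains.
```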

01:38 Reply: about "strict" mode (artifact: 205428bada user: SeverKetor)

Looks like there are a few typos in the new documentation.

Second sentence: "...if a table column as has a type of..."

Header three: "The ANY data type" → "The ANY Datatype"

Last line in third section: "For a an ordinary non-strict table..."

Last line: "There there If there are multiple options, they can be specified in any order. To keep things simple, the current parser accepts duplicate options without complaining, but that might change in figure future releases, so applications should not rely on it."

06:09 Reply: about "strict" mode (artifact: 706e76a173 user: SeverKetor)
2. Restrict column data type names in CREATE TABLE statements to one of INT, INTEGER, REAL, BLOB, TEXT so that we always know exactly which datatype is allowed in that column.
I wonder if it may be useful to add a way to define new datatypes which STRICT could check. As an example, the json1 extension could define JSON as being TEXT and use the json_valid function to check it. On regular tables it would have no impact (except maybe changing the type affinity of JSON columns to TEXT instead of NUMERIC?), but on STRICT tables it would enforce valid JSON as if the column had a CHECK constraint on it.

This would be handy for backwards compatibility (I for one have code that relies on using the JSON datatype) and arguably allow for more clarity in the schema, if columns can retain their previous datatype.

On the other hand, it would increase the complexity and amount of code, and can be replaced with CHECK constraints plus comments in the table schema, descriptive column names, etc. Given that, I'm not sure it should be done, but I wanted to mention it in case others thought the same (or had better ideas on why it shouldn't be done).
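The CHECK-constraint substitute looks like this sketch (table and column names are invented; it assumes an SQLite build with the JSON functions available):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# A TEXT column constrained to valid JSON, playing the role of a JSON datatype.
con.execute("""CREATE TABLE docs (
    "Data" TEXT CHECK (json_valid("Data"))
)""")

con.execute("""INSERT INTO docs VALUES ('{"key": 1}')""")  # valid JSON: accepted
try:
    con.execute("INSERT INTO docs VALUES ('not json')")    # rejected by the CHECK
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
```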

20:02 Reply: Latest .expert fails when any table schema includes extension function (artifact: c34bb42879 user: SeverKetor)

Huh, neat. Thanks for letting me know.

I think it might be worth mentioning this in the .expert documentation. Just a message like: "Note: .expert creates new database connections which will not retain loaded extensions unless they are automatically loaded statically linked extensions or persistent loadable extensions. A database schema that uses functions from a non-persistent extension will cause the .expert command to fail."

On the topic of the .expert docs, I just spotted a missing 'e' in the last line. "Th functionality described[...]"

03:21 Post: Latest .expert fails when any table schema includes extension function (artifact: 8b43c643f0 user: SeverKetor)

I was trying to use .expert earlier when I found out that if you have a function from an extension in any table's schema, it just does not work. Typing .expert would just immediately result in "sqlite3_expert_new: no such function: {function name}" even when the extension the function comes from is loaded.

Maybe that error is necessary when the function would be called during the query, but in my case I have stored generated columns on my tables. It shouldn't matter if my SELECT query references one of those columns, since the value is already stored. It especially shouldn't matter if I'm not even referencing one of those tables, but the .expert call doesn't even get that far before failing.


```
SQLite version 3.37.0 2021-07-31 20:30:41
Enter ".help" for usage hints.
sqlite> .load ./Extensions/sql_functions
sqlite> .expert
sqlite3_expert_new: no such function: IP_TO_INT
sqlite> SELECT IP_TO_INT('');
```

As a workaround I was able to just create an in-memory version of the database with the generated columns replaced by regular columns, but I figured I should mention it here.
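The workaround copy can be sketched like this. The table name and the generating expression are invented stand-ins (the real schema used an extension function, which is exactly what .expert chokes on), and generated columns need SQLite 3.31 or later:

```python
import sqlite3

# Original schema: a stored generated column.
src = sqlite3.connect(":memory:")
src.execute("""CREATE TABLE hosts (
    "IP" TEXT,
    "IPLen" INTEGER GENERATED ALWAYS AS (length("IP")) STORED
)""")
src.executemany('INSERT INTO hosts ("IP") VALUES (?)',
                [("10.0.0.1",), ("fe80::1",)])

# Workaround copy: same table, but with a plain column holding the
# already-computed values, so .expert never needs the extension function.
mem = sqlite3.connect(":memory:")
mem.execute('CREATE TABLE hosts ("IP" TEXT, "IPLen" INTEGER)')
mem.executemany("INSERT INTO hosts VALUES (?, ?)",
                src.execute('SELECT "IP", "IPLen" FROM hosts'))
```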

02:27 Reply: json_contains Function Idea (artifact: e83566d92a user: SeverKetor)

Neat, I'll have to look into that extension.

As of just a few days ago I started working on my own extension just for this function (slow going, since I don't know C). It's still not quite done: I'm betting there are things I hadn't considered, like the JSON being NULL, as mentioned in your version. Also, I use '#include "json1.c"', which results in a lot of extra stuff being compiled into the .so, which isn't great. But I'm thinking of posting it in this thread when I'm done.

A quick test showed 6-8x speed improvement over my python user-defined function and a 10-50% improvement over using json_each. Not really important for my usage since all uses of it in my code currently are fast anyway, but it's been fun figuring out how to make it work and learning a little C.

01:10 Reply: Bug report: equivalence transfer results in incorrect output (artifact: dd890b013c user: SeverKetor)

Just to let you know, the link to the post included with the changelog on the timeline is incorrect. It has "wiki?name=" in it, which causes it to point to an empty page.

20:09 Reply: How to run SQLite shell program on Android (artifact: ec7370398b user: SeverKetor)

Realized I had used the amalgamation so I updated to 3.35.4 and compiled from the raw source. It complained about me not having tcl but one "apt install tcl" later that was solved. Didn't have to do anything special when compiling, just used the configure script and then make. I can't get make install to work, but I'm kind of lazy and can't be bothered to troubleshoot it (it's complaining about /usr being read-only). I can still just use ./sqlite3 though, which is good enough for me.

23:22 Post: json_contains Function Idea (artifact: e419e520d0 user: SeverKetor)

I have searched the forum a few times now looking for a similar suggestion and have not found anything, and after reading the documentation multiple times there does not appear to be anything on the same level of simplicity as this suggested function (though I will admit to being blind as a bat at times).

#Description: I think there should be another function added to the JSON1 extension which would allow for checking whether or not a value exists in a JSON object or array. In my own code (using a user-defined function) I have named this json_contains(), and it does seem convenient to have. It would not add any missing functionality but it would make certain queries simpler and more readable. Using the style of the existing documentation, I would describe it as such:


The json_contains(X,P,V) function returns 1 if X is well-formed JSON and a JSON array containing the value V exists at path P. If a JSON object lies at path P, it returns 1 if there exists a key equal to V. It returns 0 if the object or array does not contain the value V.

This would be equivalent to the Python "in" keyword when used with lists and dictionaries and would simplify WHERE clauses when the query is checking for rows with JSON columns containing some value. If the path specified does not exist, then the function would return NULL like json_extract() and json_array_length() do.

#Examples:
```
CREATE TABLE example ("ID" INTEGER, "Data" JSON);
INSERT INTO example VALUES (1, '{"key":"object value"}'), (2, '["list value"]');

SELECT ID FROM example WHERE json_contains(Data, '$', 'key');          → Returns 1
SELECT ID FROM example WHERE json_contains(Data, '$', 'list value');   → Returns 2
SELECT ID FROM example WHERE json_contains(Data, '$', 'object value'); → Returns zero rows
```

For simple SELECT queries there is little gain compared to using json_each() and writing the query like SELECT ID FROM example, json_each(example.Data) WHERE json_each.value='object value'; but json_contains() may instead be used to remove a SELECT sub-query from a DELETE or UPDATE statement.

DELETE FROM example WHERE json_contains(Data, '$', 'list value');
DELETE FROM example WHERE ID IN (SELECT ID FROM example, json_each(example.Data) WHERE json_each.value='list value');

#Notes: I'm fairly certain this would be new, but if I have missed something or some way of using json_each() or similar in update/delete queries I'd love to know. Writing the WHERE clauses this way seems intuitive to me, though I am undoubtedly biased due to coding in Python and using the "in" keyword often.
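For anyone wanting to experiment with the semantics before any C implementation exists, here is a rough Python user-defined-function version. It only handles the root path '$' (unlike the full proposal), and the Data column is declared TEXT for simplicity, but it reproduces the example results above:

```python
import json
import sqlite3

def json_contains(x, p, v):
    """Rough stand-in for the proposed json_contains(X, P, V).
    This sketch only supports the root path '$'."""
    if p != '$':
        raise NotImplementedError("sketch only supports the root path")
    try:
        doc = json.loads(x)
    except (TypeError, ValueError):
        return None          # not well-formed JSON (or NULL input)
    if isinstance(doc, dict):
        return 1 if v in doc else 0   # objects: compare against keys
    if isinstance(doc, list):
        return 1 if v in doc else 0   # arrays: compare against values
    return 0

con = sqlite3.connect(":memory:")
con.create_function("json_contains", 3, json_contains)
con.execute('CREATE TABLE example ("ID" INTEGER, "Data" TEXT)')
con.executemany("INSERT INTO example VALUES (?, ?)",
                [(1, '{"key":"object value"}'), (2, '["list value"]')])

by_key = con.execute(
    "SELECT ID FROM example WHERE json_contains(Data, '$', 'key')").fetchall()
by_value = con.execute(
    "SELECT ID FROM example WHERE json_contains(Data, '$', 'list value')").fetchall()
# by_key is [(1,)] and by_value is [(2,)], matching the examples above.
```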