SQLite Forum


50 most recent forum posts by user larrybr

2021-10-12
03:28 Reply: Compiling sqlite system on Ubuntu (artifact: 6d057c6677 user: larrybr)

sudo gcc shell.c sqlite3.c -lpthread -ldl -lm

There is no need to use sudo for this.

but all I get is the a.out file.

The command, mv a.out sqlite3, will rename the executable you got to what you wanted (I suppose.) Alternatively, replace "gcc" with "gcc -o sqlite3" in that compilation to get what you seem to expect.

last thing I did was run ./configure and make, ... but when I do 'which sqlite3', there is no output.

The current directory is typically not among the directories listed in $PATH. Hence, 'which' does not find named executable files that happen to be in the current directory unless the current directory is so listed. Once you get yourself a properly named 'sqlite3' executable, the final step will be to place it somewhere such that it will be found by the shell (using $PATH) when invoked simply as 'sqlite3'.
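
For example, from the directory holding the amalgamation sources (a sketch; ~/bin and /usr/local/bin are merely common choices, so use a directory that actually appears in your $PATH):

  gcc -o sqlite3 shell.c sqlite3.c -lpthread -ldl -lm
  mv sqlite3 ~/bin/          # or: sudo mv sqlite3 /usr/local/bin/
  which sqlite3              # should now report where it landed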

2021-10-11
04:30 Reply: Updates to 22.1. The --safe command-line option (artifact: cbed526cae user: larrybr)

Thanks for the spelling pickup. The docs will update soon to reflect a correction.

is .shell disabled in --safe?

Yes, as is .system and anything that allows output to be written or shells to be run.

2021-10-10
03:48 Reply: Understanding memory allocation behavior of SQLite (artifact: 41db6d8ce5 user: larrybr)

Without gainsaying anything earlier, I wonder about this:

Alternatively, loading the data in sorted order and then creating the additional index (after the data is loaded) also works and is the only alternative for a WITHOUT ROWID table to avoid B-Tree rebalance operations.

I would think that the insert order would need to be close enough to or match a breadth-first traversal of the ultimate (as) balanced (as it's going to get) B-tree. Otherwise, lots of rebalancing will be needed because, ultimately, the ordering will match a depth-first traversal, but insertion in that order would necessarily create imbalances. (I'll be happy to be educated on this if I've got it wrong.)

2021-10-07
02:36 Reply: Assistance with using SQLite in PAMGuard (artifact: f55162a25c user: larrybr)

Without being a PAMGuard user, I can still suggest some clarifications that will help others (who might be PAMGuard users) to sort out what is going wrong for you. My comments follow quoted fragments:

I have loaded an SQLite database module into PAMGuard,

Is an "SQLite database module" the same as an "SQLite database"? Or are you simply saying that you have enabled PAMGuard to access SQLite databases generally?

when the analysis is finished, opening the database in SQLite Studio shows no data

Here, it will be crucial to detail what "the database" means. We can assume that you intend to refer to a database that you intended to access from within PAMGuard, but you need to specify what, specifically, you wrote/did/clicked etc. to designate an actual database file for the SQLite library to open and operate upon.

shows no data (all fields are empty)

Are the expected tables present? Do you mean that no rows were inserted when they should have been? Please use conventional terminology, where rows in a table correspond to tuples of the column values. (The term "field" often means a member of such a tuple. Is that your usage here?)

even when I can see the detector modules are making detections.

If you were to briefly explain the importance or consequence of this in terms of resulting modifications upon the actual database opened by PAMGuard, folks here who are not PAMGuard experts may be able to help.

I have also attempted opening the SQLite database in PAMGuard Viewer after PAMGuard has finished running, but all of these fields are also empty.

To me, (a PAMGuard non-user), this means one of two things: (1) the effects of your PAMGuard run on the database it actually opens are not what you are expecting; or (2) the database being modified by your PAMGuard run is not the same one you are peering into with other tools (including "PAMGuard Viewer".) I do not see enough information in your post to distinguish these cases.

suggestions as to why I am not able to view the output in my SQLite database?

(Not to be flippant here, just analytical:) Either "the output" is an empty set, or it is landing somewhere other than "[your] SQLite database". Have you run an experiment where you use PAMGuard the same way, except with a different DBMS where "the database" unambiguously refers to something you can view with other tools? Or put differently: Why do you believe "the output" is not the empty set?

2021-10-06
17:58 Reply: journal file created when journal_mode=OFF (artifact: 914b9686f6 user: larrybr)

I have a few suggestions:
1. Find a way to avoid using the count_changes pragma. It is deprecated.
2. Reduce your scenario to a set of meta-commands and SQL statements that the CLI shell can run.
3. Read the docs for the members of the step 2 set.
4. Post here the minimal repro set of step 2 and remaining questions.

Time ordering of operations will have an effect upon the outcome, and is not apparent in your post. So please consider my 4-step procedure to be a good way to clarify what you are doing (if this proves necessary after doing step 3).
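
For illustration only, a step-2 reduction might be no bigger than something like the following (hypothetical; the statements and their ordering should mirror what your application actually does), which the CLI can run directly:

  .open /tmp/journal_repro.db
  PRAGMA journal_mode=OFF;
  CREATE TABLE t(a);
  BEGIN;
  INSERT INTO t VALUES(1);
  COMMIT;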

16:08 Reply: doc nit for sign() (artifact: 6514f13119 user: larrybr)

Thanks for the nit report. This has been fixed (redundantly.)

2021-10-05
17:52 Reply: Double section number (artifact: 9d923b5c5e user: larrybr)

Thanks for the tip. Fixed on doc trunk.

17:38 Reply: Utility extensions (Windows-only) (artifact: b6ee340328 user: larrybr)

Thanks for sharing (almost.) It could well be useful.

The link has ...

I could find no link.

2021-10-03
02:18 Edit reply: Strange parsing problem in the shell (artifact: 73876f06fa user: larrybr)

... I don't buy the hypothesis that my problem arises from a failed pipeline.

The misbehavior was not caused solely by the broken pipe issue. It was due to a bug which was exposed by a broken pipe among other scenarios. That bug was fixed today, by this checkin.

Can you try it?

2021-10-02
22:25 Reply: Strange parsing problem in the shell (artifact: 8ffb52506e user: larrybr)

I think this poor behavior was fixed by this checkin today.

Can you try it?

15:44 Reply: bug? Unexpected CLI shell comment handling (artifact: c2984b0e75 user: larrybr)

Kees: Please disregard my earlier comments about a behavior-changing revision. The behavior to which you were accustomed and the intended current behavior were identical. (This can be seen in the code, with some study.) The issue is that I allowed a simple bug to creep in during a recent revision made to avoid an O(N^2) performance degradation with huge block comments.

The current trunk version should behave more satisfactorily with that script of yours which recently failed.

14:44 Edit reply: bug? Unexpected CLI shell comment handling (artifact: 400c214193 user: larrybr)

(Edited to describe a change under consideration which addresses this behavior.)

The CLI's line collection process applies just a few rules, which can be summarized as:

  1. If the line begins with '.' and is the first since the last SQL or meta-command was processed, then it is a single-line meta-command and processed as such.

  2. If the line ends with ';' followed by nothing but whitespace[a] and is not within an SQL construct within which that character is considered part of the construct rather than a statement terminator, then the line together with its predecessors (that were not meta-commands) is submitted for SQL execution. (prepare, step with output, finalize)

In your example, the "--" which begins the line, per all SQL standards, makes that and the remainder of the line, whatever it may be, a comment.

It would have to be a special-cased handler that would treat ';' as a SQL terminator only if it appears at the end and outside of all lexically delimited constructs[b] except end-of-line comments.

I do not see how the present behavior is weird. Should an end-of-line semicolon within a block comment also terminate the SQL to avoid this weirdness?
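
To illustrate with a hypothetical interactive session (default prompts and settings assumed): a ';' inside an end-of-line comment is just comment text, so only the bare ';' on the final line submits the statement.

  sqlite> select 1
     ...> -- this whole line is a comment, semicolon and all;
     ...> ;
  1
  sqlite>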

(Added via edit:)
This additional summarized rule would cure your script problem and should be harmless in all circumstances I am imagining:

3. As non-meta-command lines are accumulated, at any time that they constitute a complete SQL construct which is nothing but whitespace[a], the accumulation is discarded, the primary prompt is issued (if in interactive mode), and line processing reverts to the state where either meta-commands or SQL (finally ending with ';') are accepted.

Comments on any difficulties arising from this approach are more than welcome.


[a]. In this context, "whitespace" includes the usual not-dark characters and SQL comments.

b. The "lexically delimited constructs" are quoted strings and identifiers, and end-of-line or block comments.

14:03 Reply: bug? Unexpected CLI shell comment handling (artifact: 82b2656710 user: larrybr)

The CLI's line collection process applies just a few rules, which can be summarized as:

  1. If the line begins with '.' and is the first since the last SQL or meta-command was processed, then it is a single-line meta-command and processed as such.

  2. If the line ends with ';' followed by nothing but whitespace[a] and is not within an SQL construct within which that character is considered part of the construct rather than a statement terminator, then the line together with its predecessors (that were not meta-commands) is submitted for SQL execution. (prepare, step with output, finalize)

In your example, the "--" which begins the line, per all SQL standards, makes that and the remainder of the line, whatever it may be, a comment.

It would have to be a special-cased handler that would treat ';' as a SQL terminator only if it appears at the end and outside of all lexically delimited constructs[b] except end-of-line comments.

I do not see how the present behavior is weird. Should an end-of-line semicolon within a block comment also terminate the SQL to avoid this weirdness?


[a]. In this context, "whitespace" includes the usual not-dark characters and SQL comments.

b. The "lexically delimited constructs" are quoted strings and identifiers, and end-of-line or block comments.

02:47 Edit reply: Strange parsing problem in the shell (artifact: 74b6cca12c user: larrybr)

This is a bit odd: ...

Here is a screen-scrape, just now taken:

  larry@Bit-Booger:~/SQLiteDev/LibTrunk$ ./sqlite3
  SQLite version 3.37.0 2021-10-01 22:48:52
  Enter ".help" for usage hints.
  Connected to a transient in-memory database.
  Use ".open FILENAME" to reopen on a persistent database.
  sqlite> .import -csv "|iconv -f L1 < pets.csv | grep '^M' | cat -" incoming
  sqlite> .header on
  sqlite> select * from incoming;
  Moses|dog|12
  Meowser|cat|17
  sqlite>

I adjusted your iconv from-encoding to reflect ones listed via -l, and replaced your "tail -r" with something that does not break the pipe (as happens with Ubuntu 20.04's tail which knows not the -r option.)

However, using an equivalent to your posted .import pipe extravaganza, I get:

  sqlite> .import -csv "|iconv -f l1 < pets.csv | grep '^\"20' | tail -r" incoming
  tail: invalid option -- 'r'
  Try 'tail --help' for more information.
  <pipe>: empty file
  sqlite> select * from sqlite_schema;
     ...> ;
     ...>

This appears to replicate your problem. There is a simple, near-term solution: Do not subject the CLI's piped-input-accepting commands to broken pipes. Meanwhile, I will investigate why such expectable input produces this strange result.

(A 2nd work-around appended via edit:)

It seems that output reaching a shared stderr stream may be involved in this problem:

  sqlite> .import "|cat /dev/null" junk
  <pipe>: empty file
  sqlite> select * from sqlite_schema;
  sqlite> .import -csv "|iconv -f L1 < pets.csv | grep '^M' | tail -r 2>/dev/null" incoming
  <pipe>: empty file
  sqlite>
  sqlite>

Note that an empty input pipe, by itself, does not trigger the misbehavior. It can be avoided by merely diverting the pipe-breaker's moaning away from the same stderr stream the CLI is using. So that would be another short-term work-around.

02:37 Reply: Subclassing & private data (artifact: 5f60195332 user: larrybr)

Actually, no. I'm wondering how to write my code so as to be forward compatible with future versions of SQLite that may add new members to the structs, without breaking my code written for the current versions.

If the SQLite library were to change the definition of the leading (or not yet "subclassed") portion of the structs it uses at the extension interfaces, it would break a great many extensions compiled against earlier versions of that publicly exposed data. This kind of backward incompatibility is carefully avoided by the project's management and the developers generally.

As I mentioned in my OP, C is not my forte and, from what I have seen so far, C has the convention of having a field in a struct to report the size of the structure or, in SQLite's case, the version supported.

Neither of those common practices is a C convention. You accurately perceive what the practice accomplishes. If the library designer(s) had elected to leave room for later, changed versions of those structs, a member named something like sizeof_public_portion might well have been used. However, along with such a changed data structure would come changed requirements as to what is done with it or perhaps changed allowance as to what the extension might do with it. Those intriguing options for change are closed off by the API stability guarantees that are part of the library's interface.

How will I know that I won't be writing code that will break in a future version of Sqlite?

The same way that you know sqlite3_open() will do the same thing next decade as it does today.

01:27 Edit reply: Strange parsing problem in the shell (artifact: 918f8cbb46 user: larrybr)

This is a bit odd: ...

Here is a screen-scrape, just now taken:

  larry@Bit-Booger:~/SQLiteDev/LibTrunk$ ./sqlite3
  SQLite version 3.37.0 2021-10-01 22:48:52
  Enter ".help" for usage hints.
  Connected to a transient in-memory database.
  Use ".open FILENAME" to reopen on a persistent database.
  sqlite> .import -csv "|iconv -f L1 < pets.csv | grep '^M' | cat -" incoming
  sqlite> .header on
  sqlite> select * from incoming;
  Moses|dog|12
  Meowser|cat|17
  sqlite>

I adjusted your iconv from-encoding to reflect ones listed via -l, and replaced your "tail -r" with something that does not break the pipe (as happens with Ubuntu 20.04's tail which knows not the -r option.)

However, using an equivalent to your posted .import pipe extravaganza, I get:

  sqlite> .import -csv "|iconv -f l1 < pets.csv | grep '^\"20' | tail -r" incoming
  tail: invalid option -- 'r'
  Try 'tail --help' for more information.
  <pipe>: empty file
  sqlite> select * from sqlite_schema;
     ...> ;
     ...>

This appears to replicate your problem. There is a simple, near-term solution: Do not subject the CLI's piped-input-accepting commands to broken pipes. Meanwhile, I will investigate why such expectable input produces this strange result.

(A 2nd work-around appended via edit:)

It seems that output reaching a shared stderr stream may be involved in this problem:

  sqlite> .import "|cat /dev/null" junk
  <pipe>: empty file
  sqlite> select * from sqlite_schema;
  sqlite> .import -csv "|iconv -f L1 < pets.csv | grep '^M' | tail -r 2>/dev/null" incoming
  <pipe>: empty file
  sqlite>

Note that an empty input pipe, by itself, does not trigger the misbehavior. It can be avoided by merely diverting the pipe-breaker's moaning away from the same stderr stream the CLI is using. So that would be another short-term work-around.

01:07 Reply: Strange parsing problem in the shell (artifact: 2bbac15098 user: larrybr)

This is a bit odd: ...

Here is a screen-scrape, just now taken:

  larry@Bit-Booger:~/SQLiteDev/LibTrunk$ ./sqlite3
  SQLite version 3.37.0 2021-10-01 22:48:52
  Enter ".help" for usage hints.
  Connected to a transient in-memory database.
  Use ".open FILENAME" to reopen on a persistent database.
  sqlite> .import -csv "|iconv -f L1 < pets.csv | grep '^M' | cat -" incoming
  sqlite> .header on
  sqlite> select * from incoming;
  Moses|dog|12
  Meowser|cat|17
  sqlite>

I adjusted your iconv from-encoding to reflect ones listed via -l, and replaced your "tail -r" with something that does not break the pipe (as happens with Ubuntu 20.04's tail which knows not the -r option.)

However, using an equivalent to your posted .import pipe extravaganza, I get:

  sqlite> .import -csv "|iconv -f l1 < pets.csv | grep '^\"20' | tail -r" incoming
  tail: invalid option -- 'r'
  Try 'tail --help' for more information.
  <pipe>: empty file
  sqlite> select * from sqlite_schema;
     ...> ;
     ...>

This appears to replicate your problem. There is a simple, near-term solution: Do not subject the CLI's piped-input-accepting commands to broken pipes. Meanwhile, I will investigate why such expectable input produces this strange result.

2021-10-01
23:43 Reply: MicrosoftODBC Driver Manager Data source name not found and no default driver specified error when use VBA connect Sqlite (artifact: 62a0fff8a0 user: larrybr)

I have used Werner's ODBC driver for SQLite many times, including in an application and as a plug-in with DBMS tools. It works well, and is a good citizen among ODBC drivers insofar as its management by the ODBC infrastructure works. So the locus of your problem is within the set of things you are doing or should be doing.

Your problem is not a SQLite issue and it is off-topic here. Furthermore, you are much more likely to get a useful answer by consulting ODBC documentation and posing your problem somewhere that ODBC issues are discussed and participants are familiar with ODBC infrastructure issues.

23:31 Reply: Is the lifetime of argv in sqlite3_module::xConnect greater than sqlite3_module::xFilter? (artifact: 0aff2778dc user: larrybr)

If I have a SQLite virtual table, can I store a direct pointer value from argv (const char*) inside sqlite3_module::xConnect to use later in a call to sqlite3_module::xFilter? Specifically the address of the module name inside argv[0].

My study of the relevant docs shows them to be silent on the issue of argument lifetimes. So your question has several answers of varying utility.

I just need to know whether the lifetime of the module name is guaranteed for the life of the Virtual Table, or do I need to deep copy it.

I find no such guarantee, and I was looking for it carefully. I would think that means the usual guarantee applies: The passed argument values (whether indirect or not) can be relied upon during execution of the call. No longer lifetime is guaranteed.

(It seems to work).

Lots of things doomed to fail one day do not do so conveniently soon. That can be seen as bad luck or as a design failure.

I looked at the implementation calling into that xConnect method, and it looked like no needless copies were made. Whether that will change I dare not predict. So it appears that the same pointer you provided earlier is reaching your xConnect method. Is that so? And if so, I suggest, (plagiarizing): You've got to ask yourself one question: 'Do I feel lucky?'.
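
If the answer is "not particularly", a deep copy made in xCreate/xConnect is cheap insurance. A minimal sketch in C (MyVtab, myConnect and myDisconnect are hypothetical names; sqlite3_declare_vtab() and most other setup elided):

  #include <string.h>
  #include "sqlite3.h"

  typedef struct MyVtab {
    sqlite3_vtab base;        /* base class, must come first */
    char *zModuleName;        /* private deep copy of argv[0] */
  } MyVtab;

  static int myConnect(sqlite3 *db, void *pAux, int argc,
                       const char *const *argv,
                       sqlite3_vtab **ppVtab, char **pzErr){
    MyVtab *p = sqlite3_malloc(sizeof(*p));
    if( p==0 ) return SQLITE_NOMEM;
    memset(p, 0, sizeof(*p));
    p->zModuleName = sqlite3_mprintf("%s", argv[0]);   /* deep copy */
    if( p->zModuleName==0 ){ sqlite3_free(p); return SQLITE_NOMEM; }
    /* sqlite3_declare_vtab(db, "CREATE TABLE x(...)") would go here */
    *ppVtab = &p->base;
    return SQLITE_OK;
  }

  static int myDisconnect(sqlite3_vtab *pVtab){
    MyVtab *p = (MyVtab*)pVtab;
    sqlite3_free(p->zModuleName);   /* lifetime is now under your control */
    sqlite3_free(p);
    return SQLITE_OK;
  }

The copied name can then be used from xFilter (or any other method) for the life of the virtual table without relying on luck.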

22:57 Reply: create taable with syntax error works ? (artifact: 12735ae7ec user: larrybr)

So you should use [STRICT] if it is too hard to remember to separate your column definitions as SQL requires.

This sort of left-handed jab really isn't necessary

It was not meant as a jab. It can be hard sometimes to read code closely enough to realize what it means, even when one is fully familiar with the language being used. (I say this from experience.) Let's try that another way:

If you dislike having to keep in mind that SQLite will slurp up a long sequence of words as a typename, you will probably appreciate the STRICT feature.
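
For instance, with a build that includes the STRICT feature (t1 and t2 are hypothetical names):

  CREATE TABLE t1(a INTEGER b TEXT);         -- accepted: one column, a, whose declared type is "INTEGER b TEXT"
  CREATE TABLE t2(a INTEGER b TEXT) STRICT;  -- rejected: that multi-word type is not one of the types STRICT allows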

I was just surprised that there wasn't any apparent error warning or any other indication that I had missed a single comma. Every other DB ...

The fact that you would submit such a sequence to the SQLite compiler, and expect something different, shows that your expectations developed from work with other DBMSs need some adjustment, at least until you qualify table definitions with STRICT. That's the practical response rather than railing about SQLite behavior that is decades old and certain not to change. I am sympathetic about the feeling of having a well-entrenched expectation found wanting, but it's the price to be paid when one chooses to not become familiar with one's tools, hoping assumptions about them will be enough.

22:33 Reply: Subclassing & private data (artifact: efb6b532ec user: larrybr)

However, I wonder how versioning is supposed to work. Some implementations will put in a size parameter to size the structure so as to infer the version of the struct. Without that information, a new version of the struct could potentially read private data from the old version and therefore garbage. But I don't see such a member in sqlite's structure. Does the member iVersion of the struct sqlite3_module handle this?

I'm reasonably sure that the iVersion member is intended to deal with attempts to use extensions written for SQLite major version 3 with some later version(s). In other words, it addresses possible changes in the extension host.

The problem you are wondering about is how to deal with changes in your own code. (I presume you are writing such, or contemplate doing so.) For that, you probably need a mechanism that is available even before attempting to "attach" an extension after it is loaded. And that mechanism will be one you devise or model after similar solutions. This is a non-trivial problem, especially with dynamically loaded code. And the solutions vary according to how much and what kind of change is anticipated. You should not expect a ready-made solution to that general problem in the SQLite library or project.

I say this to perhaps put an end to your wondering how that problem is already solved. It is not.

You may recognize, in the pre-defined structs for various extension types, the polymorphism that is available in C++ and other fully OO languages. Typically, that is utilized purely through variation in what the methods do, without relying on varying data being somehow dealt with at the interface. I suggest that this sort of discipline will simplify how you might insulate your extension design from implementation changes or behavior evolution at the interface.

18:23 Reply: Strange parsing problem in the shell (artifact: aac444acc7 user: larrybr)

Upon closer reading, I agree. BTW, I have exercised pipe-style meta-command input quite a bit and have not seen the sort of strangeness you reported. So I look forward to seeing what is special about your input.

17:20 Reply: MicrosoftODBC Driver Manager Data source name not found and no default driver specified error when use VBA connect Sqlite (artifact: b26ac434ed user: larrybr)

This is not really a SQLite issue. It is an ODBC driver management issue.

BTW, setting object-referencing variables to Nothing just before they go out of scope is not necessary. (That advice is also off-topic here.)

16:26 Reply: Strange parsing problem in the shell (artifact: adf0302463 user: larrybr)

That ought to work, almost. (I expect you need to double that backslash.)

I suggest using .print to see what your actual submission to the shell meta-command is.

As for the continuation prompt: I am with you on expecting that the primary prompt after that hairy piping means that it at least completed (even if it did not do quite what you wanted.)

My suspicion would be that you may have exposed a bug in the shell's own input source switching logic. Later today, I will take a look at some less hairy equivalent meta-commands and see what is going on.

Isn't the parser completely reset after each line that it parses successfully?

Yes, for meta-command input. But the input source switch and restore may have become confused. It bears investigation.

Please feel welcome and encouraged to add more detail to the working/failing boundary on this.

2021-09-30
20:56 Reply: sqlite3_exec: Any way to validate the first argument? (artifact: 19c7cd1b0e user: larrybr)

I doubt that this is news to Ryan; I say the following to avoid any interpretation that seems to condone any invitation to undefined behavior.

If a sqlite3 pointer was returned (as an out parameter) by the sqlite3_open*() family of APIs, then a test such as Ryan's simple SELECT is fine (but pretty much useless with respect to proving anything reasonably in doubt.)

However, if you have something that you suspect is not a return from one of the sqlite3_open*() functions, passing it to any SQLite library API which expects a sqlite3 pointer, in lieu of such a pointer, is a very bad practice and should never be done in released code held out as the creation of any responsible programmer(s). Such folk do not court undefined behavior and strive to avoid it. Yet, that is exactly what such a "test" invites.

17:44 Reply: sqlite3_exec: Any way to validate the first argument? (artifact: b74ca3d6ee user: larrybr)

I'm having difficulty understanding the difference between a "valid" database ("handle") and one that is not "valid". What does that mean, objectively? (I would have guessed as Mark did until your post 3, and given a similarly snark-inducing response.)

Guessing that it means "the one I want, which can be so verified by having certain schema elements", I recommend that you query the sqlite_schema table.
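
For instance (a sketch; my_expected_table is a hypothetical name, and sqlite_schema is spelled sqlite_master in older releases):

  SELECT count(*) FROM sqlite_schema
   WHERE type='table' AND name='my_expected_table';   -- 1 if present, 0 if not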

2021-09-29
15:38 Reply: Possible bug in group_concat() ? (artifact: b671a2698c user: larrybr)

Mark, and any other interested reader:

The group_concat() function is documented, for N effective calls, to concatenate N argument 1 values separated by N-1 argument 2 (or its default) values. It is completely undocumented which one of the passed-in argument 2 values is dropped.

In the interest of code simplicity and performance, I am about to cause the Nth argument 2 value to be dropped instead of the 1st argument 2 value as happens with existing releases.

I would be interested to see the best argument for retaining the current undocumented behavior, in particular some affected, reasonable use case(s). (No need to bother saying undocumented behavior should never change. We see that as having value which competes with other values.)
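
For the curious, here is the kind of case affected (a sketch; assuming the rows are aggregated in the order shown, which is not guaranteed in general):

  SELECT group_concat(x, s)
    FROM (SELECT 1 AS x, 'A' AS s
          UNION ALL SELECT 2, 'B'
          UNION ALL SELECT 3, 'C');
  -- existing releases drop the 1st separator, yielding '1B2C3';
  -- the contemplated change drops the Nth, yielding '1A2B3'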

02:51 Reply: Possible bug in group_concat() ? (artifact: b1302c1dee user: larrybr)

My recollection is that the v3.37.0 release is nearly a month away. Its date is not fixed either, and depends on testing results.

01:12 Reply: Possible bug in group_concat() ? (artifact: d2d4f6e5f5 user: larrybr)

Thanks for the idea. As it turns out, the problem is in the group_concat() logic for removing row effects from the aggregation as a window slides. The misbehaving logic was to remove a concatenee and the separator with which it was associated in the call to group_concat(). Unfortunately, the first such call in an aggregation tosses the separator; the first utilized separator is that passed in the next group_concat() call. So, somehow, the removal has to remove the Nth passed concatenee and the (N+1)th passed separator. This requires some state which is not available to the function without it storing separator lengths embedded in the aggregated concatenation. Because that could be expensive, and is rarely needed (because separators usually do not vary), this storage is undertaken only when needed.

2021-09-28
20:20 Reply: Reset database (artifact: ab2f736822 user: larrybr)

After

.open mydb.db

On checking the properties of mydb.db, its size is 0.

That means very little considering that the shell, as a convenience for people who quit the CLI before actually doing anything to the named "database", defers its actual creation until something is done that would modify it.

I did not want to simply drop the tables and views as I believe that will lead to fragmentation. I am not sure whether VACUUM locks the database.

There can be no fragmentation without content interspersed among free pages. And VACUUM cures it anyway. And who cares if VACUUM locks the database? You would surely not want to "reset" it underneath other connections to it, so being locked would be good then. And if there are no other connections, it will not be locked.

Setting the file length to zero is faster than ...

Sorry, but that's just crazy. If you do manage to truncate the file to zero length, it will not be a SQLite database any more, not even an empty one.

I'm now fully with Warren on this being an XY problem. Please describe the problem you intend to solve rather than difficulties with solutions you have preconceived.

19:32 Reply: Reset database (artifact: 436726f143 user: larrybr)

I think your method for doing this should depend upon what "when it was created" means. If it means "before any tables (or table-like objects) and/or data were created, but after certain non-default DB-wide settings were established", then you can drop all the views, indices and tables, then do a SQL VACUUM. If things were done after the sqlite3_open call, such that the "created" state differs from the empty state all that dropping produces, then you will need to undo (where you can) the settings that are different. See the pragma list to peruse what those settings might be.
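
For the first case, the reset can be as simple as this sketch (hypothetical object names; query sqlite_schema first to see what actually needs dropping):

  SELECT type, name FROM sqlite_schema;   -- see what exists
  DROP VIEW IF EXISTS v1;
  DROP INDEX IF EXISTS i1;
  DROP TABLE IF EXISTS t1;
  VACUUM;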

19:17 Reply: How to change sqlite3_temp_directory in windows to change temporary file location (artifact: 0d7aeaddd1 user: larrybr)

You need to study "Name Of The Folder Holding Temporary Files", carefully.

In my opinion, if you are going to take over that decision, you may as well do it decisively rather than indirectly via an environment variable.

Please note that these comments do not constitute any condonation of contravening the cautions mentioned in the above-linked doc page.
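
For completeness, the decisive version of that takeover is small. A sketch in C (the path is hypothetical; per the linked page, the variable must point at memory obtained from the SQLite allocator and be set before any temporary files are needed):

  #include "sqlite3.h"

  void use_my_temp_dir(void){
    sqlite3_temp_directory = sqlite3_mprintf("%s", "C:\\SQLiteTemp");
  }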

16:46 Edit reply: Possible bug in group_concat() ? (artifact: 2ea4a8dc98 user: larrybr)

As I read the code which implements group_concat(), I do not see why it does this. It does appear to be a bug, and it is somewhat alarming as well. (It makes me wonder what happens when the effect is more pronounced.)

As you likely know, group_concat( whatever, separator ) is normally used with a fixed separator rather than one which can change with each member of the aggregate. This is why such strange behavior has escaped notice for so long.

I will take a look at this running under a debugger.

(Update via edit:)

It is pretty clear why this happens. As the window moves, concatenated values are being removed from the front of the accumulation and the wrong separator (off by 1) values are "removed". I'm devising a fix that does not penalize the common, non-varying separator case while handling the weird, varying separator case.

14:40 Reply: Possible bug in group_concat() ? (artifact: 7aea41c03a user: larrybr)

As I read the code which implements group_concat(), I do not see why it does this. It does appear to be a bug, and it is somewhat alarming as well. (It makes me wonder what happens when the effect is more pronounced.)

As you likely know, group_concat( whatever, separator ) is normally used with a fixed separator rather than one which can change with each member of the aggregate. This is why such strange behavior has escaped notice for so long.

I will take a look at this running under a debugger.

00:12 Reply: parse.y: a typo in a special comment (artifact: af789ed5b0 user: larrybr)

While it is a typo, it is inconsequential. I doubt those reading that code to discern its purpose and effect are even in a frame of mind where grammar registers with them. The "/* ... */" construct is a comment, and remains such as lemon puts the brace-delimited code into the generated parse.c file. So there is no functionality impact.

We appreciate reports of minor errors in anything that either ends up in the docs or in active (non-comment) code, so I do not want to discourage your reporting. Yet this flaw should be overlooked.

2021-09-27
23:46 Reply: Tiny nit in ALTER TABLE documentation (artifact: b67cd6eb54 user: larrybr)

Thanks. The implied de-nit operation is complete, to be pushed to doc page in due course.

23:00 Reply: User-defined functions in the CLI? (artifact: a4ab71e57d user: larrybr)

Not now. In general, the library's C API is not exposed at the CLI's user interface. The CLI, in part, is a shim between (people and shell scripts) and that C API.

2021-09-25
21:57 Reply: V3.36 (artifact: 91119eff09 user: larrybr)

In the CLI ... tells me which databases are in scope in the current session (as Simon Slavin points out) ... but I see the alias as main and not the handle i.e. as returned by the out parameter of sqlite3_open(dbname, out handle).

What you (or the System.Data.SQLite .Net library, I suppose) are calling a "handle" is just that pointer to the (externally opaque) sqlite3 object. That .Net library has no way of knowing about any other "handle" it may have passed to its caller(s). The CLI (in recent versions) does track a small set of database connections (another term for references to those opaque objects) it has open, but does not traffic in raw pointer values at its external interface.

What you are calling an "alias" is the schema name, significant only within the one connection, which might have more than one database attached to the "main" one. This attachment is different from having a whole, separate connection.

To paraphrase, my original question was simply to figure out whether I could get the list of databases I've opened together with their corresponding handles ... programmatically

No.

It looks like I have to manage this within my application. Correct?

I think my answer to this was clear in post #2. It amounts to "yes" here.

21:40 Reply: V3.36 (artifact: ff0d088fdb user: larrybr)

(I spent a few minutes trying to find out how to iterate through the connections for an instance of SQLite and failed. Contributions welcome.)

My only contribution is this advice: Be sure to limit that effort to just a few minutes.

The nature of SQLite library "instances" is one clue to when that search should terminate. The library code will normally be limited to one copy per process when statically linked (without special renaming to avoid link-time collisions.) The instantiation of data managed by the library is precisely the allocation and initialization of the opaque (to applications) struct whose typename is "sqlite3". There are no instances broader than that, except that one could contemplate all such instances in a process, on a computer, in some user's processes, in the world, across the universe, etc. Those broader sets are not tracked in any way or related to each other by the library.

Given that the OP's inquiry was about "all opened databases", the obvious answer is "No" but that's too easy to just say.

21:00 Edit reply: V3.36 (artifact: 62c5ba45b0 user: larrybr)

Such information is not kept by the SQLite library. It should be easy for an application to keep track of this. (I'm not sure what a "database name" or "handle" are in this context, but can answer nonetheless because the library does not rely on in-process data other than what is kept in per-connection structs or in structs held by them.)

Also: Please choose more descriptive thread titles going forward. Yours is ambiguous at best and misleading at worst.

20:57 Reply: V3.36 (artifact: 7be6e06884 user: larrybr)

Such information is not kept by the SQLite library. It should be easy for an application to keep track of this. (I'm not sure what a "database name" or "handle" are in this context, but can answer nonetheless because the library does not rely on in-process data other than what is kept in per-connection structs or in structs held by them.)

15:16 Reply: Multi-Process Access (artifact: c7e49620b2 user: larrybr)

Yes. Look for "concurrency" in the keyword index.

15:09 Reply: cannot start a transaction within a transaction using sqlite shell (artifact: 960a6e5851 user: larrybr)

Thanks for the tips, Warren. I will follow up on this in about 4 weeks when I get back to that workstation to do the implied experiments.

WSL2 uses an actual Linux kernel with regular Linux filesystems.

That's why I was surprised at the failure, in part. The other part is that, at times, it has passed the whole suite of SQLite's TCL tests except for ones doing full precision FP value comparisons. This makes me think I must have switched that system back to WSL1 for some reason and forgotten to switch it back.

14:43 Reply: question about sqlite3_close_v2 call! (artifact: 59e65edb5c user: larrybr)

If either (1) your single (and therefore auto-transacted) statements completed past the stepping phase (returning SQLITE_DONE), or (2) your multiple statement sets, bounded by transaction begin and end, completed (in the same sense), then the persistent data will be handled safely (where that means adhering to the ACID promise.) This includes effects upon the auxiliary files that SQLite uses to deliver on that promise.

The main penalty for leaving unfinalized statements lying around is memory leaks and reports thereof which responsible programmers generally heed and cure.

2021-09-24
12:06 Reply: cannot start a transaction within a transaction using sqlite shell (artifact: 6e57393920 user: larrybr)

Sorry, I've been using WSL2 since it was released. I'll have to check to see which version's behavior is set to be used. (I may have messed with it as I have a very non-default setup.)

As I said, it (whatever it is) is useful provided its limitations are acceptable.

00:29 Reply: Small Mistake in Delete Limit Railroad Diagram (artifact: 36855ffa1e user: larrybr)

Thanks for the tip. Fixed on trunk, to reach doc pages in due time.

2021-09-23
23:05 Reply: cannot start a transaction within a transaction using sqlite shell (artifact: 0ad542c2ce user: larrybr)

The root cause is likely that WSL does not work correctly and is insufficient for reliable use and should only be used for "entertainment purposes".

I have found it to be convenient and reliable for most of my day-to-day tasks that a pure Linux system would support. It can build SQLite and its docs, and run most of SQLite's TCL test suite with results identical to those on other platforms. (Tests doing full precision comparison of floating point values reliably fail, however.)

The Great Unwashed seem to commonly believe that these "entertainment quality only" systems are designed for ...

As a member of the occasionally washed, I have come to believe that WSL is good for more than entertainment but that there are limitations which make it worthwhile to keep my Linux and other Unix-like boxes ready for use.

I do not think there is anything that can be done other than to advise patients "Do not do that then".

Hmmmm. Applying some imagination, I can come up with alternatives -- most relating to appropriately wary expectation. I admit to slight surprise when, after I encountered the apparent WSL file locking flaw whose effects are described above, my suspicion of it was so soon confirmed on a genuine Linux system. I had been led to believe that WSL Linux-like file system semantics were implemented more completely than they really are. In that respect, the fact that the Linux system calls which are functional on Linux just pretend to work on WSL puts it into the "toy" category where multi-process file access is concerned.

Henceforth, I will be limiting my use of WSL to situations where processes interact only through pipes, starting each other, and non-concurrent file access. It's still a useful tool, but one to be used with care by those not fond of blaming their tools for poor results.

16:48 Reply: Cannot select a value that matches another column name (artifact: 1e4ef9b397 user: larrybr)

I don't think it's a bug.

The remedy, which I refuse to call "a work around", is to use doublequote for identifiers and singlequote for SQL string literals.
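
For instance, given a hypothetical table t with columns a and b:

  SELECT * FROM t WHERE a = 'b';   -- compares column a to the text value 'b'
  SELECT * FROM t WHERE a = "b";   -- compares column a to column b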

I must confess that the stackoverflow original inquiry contains way too much noise for me to study it in detail,[a] but just sticking to SQL syntax seems to be enough here.


[a]. If somebody posing a question cannot be bothered to weed out superfluous aspects of their posed situation, why should anybody else bother to decide what is germane?

15:35 Reply: sqlite3_exec (artifact: a1edf9bd04 user: larrybr)

If you insist on combining SQL statements into a single string:
Look carefully at the sqlite3_prepare() family of APIs, in particular at the pzTail parameter and its effect.

Otherwise, put your SQL statements into separate strings.
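
A minimal sketch of the first approach in C (exec_all is a hypothetical helper, not a library API; row handling reduced to a comment):

  #include "sqlite3.h"

  static int exec_all(sqlite3 *db, const char *zSql){
    while( zSql && zSql[0] ){
      sqlite3_stmt *pStmt = 0;
      const char *zTail = 0;
      int rc = sqlite3_prepare_v2(db, zSql, -1, &pStmt, &zTail);
      if( rc!=SQLITE_OK ) return rc;
      if( pStmt ){                 /* pStmt is NULL for comments or whitespace */
        while( sqlite3_step(pStmt)==SQLITE_ROW ){
          /* read result columns with sqlite3_column_*() here */
        }
        rc = sqlite3_finalize(pStmt);
        if( rc!=SQLITE_OK ) return rc;
      }
      zSql = zTail;                /* continue with the remainder of the string */
    }
    return SQLITE_OK;
  }

Given a multi-statement string, the loop prepares and runs each statement in turn, with the application in control of each one.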

15:32 Reply: sqlite not working on alpine arm32 image (artifact: 071ea602f1 user: larrybr)

What would be the first of 20 questions? (I ask rhetorically.)

Have you tried the build by following guidance at How to Compile SQLite?

If not, try that.

If so, how did that go wrong?

What is it that you are trying to build?

What does "not working" mean in terms of specific inputs, actual outputs, and your expectation regarding outputs?

One idea I have is: SQLite works on ARM and has for quite some time. So I imagine you are doing something differently than suggested at the above link. With more information, I and others would have a basis to say what you are doing differently that leads to your disappointment.
