SQLite Forum


50 most recent forum posts by user doug9forester

14:42 Reply: Feature request: Stored Procedures (artifact: ac90c108f2 user: doug9forester)

The code generator could generate a static character string for the C code which contains the text of the SQL. Easy - peasy!
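The idea is simple enough to sketch. Here is a hypothetical generator helper (Python used for illustration; the function name and formatting are my own, not from the post) that wraps SQL text in a static C string literal:

```python
# Hypothetical helper: emit an SQL statement as a static C string
# literal, one source line per C string fragment.
def sql_to_c_string(name: str, sql: str) -> str:
    lines = []
    for line in sql.strip().splitlines():
        # Escape backslashes and double quotes for the C literal.
        escaped = line.replace("\\", "\\\\").replace('"', '\\"')
        lines.append(f'    "{escaped}\\n"')
    return f"static const char {name}[] =\n" + "\n".join(lines) + ";"

print(sql_to_c_string("zInsert", "INSERT INTO t(a) VALUES(?);"))
```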

14:37 Reply: Question regarding dangling commas in CREATE VIRTUAL TABLE queries (artifact: e4b8892041 user: doug9forester)

Perhaps, "Strict" mode should not allow null column names?

06:18 Reply: Issues with sqlite3IsNaN() and HAVE_ISNAN (artifact: c6df25325e user: doug9forester)

Chris, see Larry's suggestion above.

00:42 Reply: Ah Charlotte, we've had a problem? (artifact: 81638c92a9 user: doug9forester)

I've had my share of schema changes, which always seem to end up needing to be fixed up manually here or there because I forgot this case or that case. I can conceive of a database of schema versions for a project, where each schema change is a step from one project version to the next. Of course, each step must include data transformations and change actions, perhaps in the form of SQL statements - updates, deletes, inserts - to move an existing database from one state to the next.

I can also envision a process where the changes are generated by comparing a new schema to the old and - with some (magic) rules - creating the SQL statements necessary to transform an existing database from the old schema to the new one. Creating the data transformations is the hard part.

How could SQLite help with this? We already have a way to identify the version of a schema with the User Code (provided the user hasn't used it for some other purpose). One can capture a database schema in another database with version information. Do it again with the manually created new schema, and a tool now has the information at hand to compare the schemas. It would be ideal to have a "schema version" built in, but that requires the knowledge that a database is part of a project - a concept beyond SQLite.

Creating the data transformations becomes the next problem. Is there anything more we need from SQLite to produce a tool that would assist a project to move from one schema to the next?
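The version-stepping idea above can be sketched as a tiny migration runner that uses `PRAGMA user_version` to record how far a database has been migrated. The table names and steps here are invented for illustration:

```python
import sqlite3

# Each schema step is a list of SQL statements keyed by the version it
# upgrades *to*; PRAGMA user_version records the current state.
MIGRATIONS = {
    1: ["CREATE TABLE person(id INTEGER PRIMARY KEY, name TEXT)"],
    2: ["ALTER TABLE person ADD COLUMN email TEXT",
        "UPDATE person SET email = ''"],   # data-transformation step
}

def migrate(conn: sqlite3.Connection) -> int:
    current = conn.execute("PRAGMA user_version").fetchone()[0]
    for version in sorted(v for v in MIGRATIONS if v > current):
        with conn:                         # one transaction per step
            for stmt in MIGRATIONS[version]:
                conn.execute(stmt)
            conn.execute(f"PRAGMA user_version = {version}")
    return conn.execute("PRAGMA user_version").fetchone()[0]

conn = sqlite3.connect(":memory:")
print(migrate(conn))  # → 2
```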

00:14 Reply: Issues with sqlite3IsNaN() and HAVE_ISNAN (artifact: 03f5954c85 user: doug9forester)

Just for interest, let's say I have a database on RISC OS where the bytes are swapped for the U64 (or the other one) fields. How would I go about creating a "correct" database file that works on systems other than the RISC?

20:01 Reply: LEFT JOIN Misunderstanding (artifact: 3259477d66 user: doug9forester)

Keith said:

```
An equijoin is a projection (join) of two tables where some item in one table is equal to some item in another table.

[...]

is an equijoin of tables A and B.
```

If the A->B relationship is one-to-many, are multiple rows returned? That is, if A.X matches multiple B.X's, is a row returned for each match? And what about many-to-many? And many-to-one? Without DISTINCT, what happens?
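A quick empirical check of the cardinality questions above: a plain equijoin returns one row per matching pair, duplicates included, so a one-to-many (or many-to-many) match multiplies rows unless DISTINCT collapses them. The tables here are invented for the demonstration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE A(x);
    CREATE TABLE B(x);
    INSERT INTO A VALUES (1), (2);
    INSERT INTO B VALUES (1), (1), (3);
""")
rows = conn.execute("SELECT A.x, B.x FROM A JOIN B ON A.x = B.x").fetchall()
# A.x=1 matches both B rows with x=1, so two (1, 1) pairs come back;
# A.x=2 and B.x=3 match nothing and contribute no rows.
print(rows)
```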

02:17 Reply: Better download file naming (artifact: 475d3c6dc5 user: doug9forester)

I think the date of the version should be in the CSV table, too.

00:03 Reply: Nothing for several days from the forum (artifact: 94c51e8063 user: doug9forester)

I'm getting emails from the forum again. What did you do to solve the problem? Pay the ransom?

15:14 Post: Nothing for several days from the forum (artifact: 7bf7dfedde user: doug9forester)

I have gotten nothing from the Sqlite forum since Monday. What happened? Did it go offline?

07:13 Reply: Serious problems with STORED columns (artifact: 3f7907a234 user: doug9forester)

I understand everything about STORED fields now - how to define them, how they are stored and retrieved to and from the database, how they are updated, and when to use them. Thanks everyone for that.

But I take issue with the words you use to explain them, specifically

So if you are talking "does the bytes go into the persistent storage?" then YES a STORED column is persistent in that sense, but in the sense that it is NOT the data you put there, and never will be, and the data it has is fleeting and will be updated on a whim by some other column that happens to be in its calculation expression, then NO, it is most certainly NOT persistent in that sense, it is derived.

The data saved in all database fields is most certainly the data intended to be there, even if you didn't give specific values. For example, when you specify a time field with a value of "now()", you don't know what it will be, but your intention is clear. For a STORED field, the intention is specified in the DDL. Even though you don't specify the value, the intention is clear. And it most certainly will not be "fleeting" nor updated on a "whim". It will be maintained and updated by the rules specified in the DDL.

And you might want to think about "derived" differently as well. Every field in the database (except perhaps referential id's) is "derived" from something. That derivation is done inside the application, and the result is saved in database fields. The value for a STORED field, on the other hand, is calculated and maintained rigorously by Sqlite using the DDL rules that the application specifies. You can label that "derived" if you want (or calculated or computed or ...), but the value stored in that field is exactly and precisely what the application intended.

22:16 Reply: Serious problems with STORED columns (artifact: aa913a775c user: doug9forester)

Is it logical to assume that any STORED value will not change if no other fields it was calculated from change? For example, can I change the algorithm for calculating a STORED field after inserting records, perhaps by using an ALTER COLUMN command? I'm thinking it had better be impossible to do that, or the database can become inconsistent.

Also, the STORED value is written to the database, right? After all, the point of STORED is to not have to calculate it again, so it must be written out. And if it's written out, then it - by definition - is persistent - from the database's point of view. You can mess with Sqlite's definition of persistent all you like, but the data persists and shows up when fetched as written to the database. That is, when I insert a record with a STORED field, and read it back a year later, it's unchanged and not recalculated, right?

So why would I ever use a STORED field instead of a real (persistent) field if the physicals are identical?
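The difference can be shown empirically (this assumes SQLite 3.31+, which added generated columns; the table and column names are invented): the STORED value is written to disk, but SQLite rewrites it whenever a source column changes, which a plain column would not do.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE item(
        price REAL,
        qty   INTEGER,
        total REAL AS (price * qty) STORED  -- maintained by SQLite
    )
""")
conn.execute("INSERT INTO item(price, qty) VALUES (2.0, 3)")
conn.execute("UPDATE item SET qty = 5")     # source column changes...
total = conn.execute("SELECT total FROM item").fetchone()[0]
print(total)  # → 10.0  ...and the STORED value is recomputed
```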

17:34 Reply: Serious problems with STORED columns (artifact: ce9c92228e user: doug9forester)

I am now seriously confused. I assumed that "stored" meant persistent, but you've destroyed that assumption. What is the meaning of "stored" and when (and when not) should I use it?

20:52 Reply: Does table have rowid? (artifact: 213bc7d854 user: doug9forester)

I tried it and got this:

```
sqlite> select sqlite_version();
sqlite> SELECT name,
   ...>        NOT EXISTS(SELECT 1 FROM pragma_index_info(x.name)) AS 'hasRowid'
   ...>   FROM sqlite_schema AS x
   ...>  WHERE type='table'
   ...>  ORDER BY name;
Error: no such table: sqlite_schema
```

Is it version dependent?
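It is: the `sqlite_schema` name was added in SQLite 3.33.0. On older versions the same table is reachable as `sqlite_master`, so the query ports back unchanged apart from the table name (tables below are invented for the demonstration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE normal(a);
    CREATE TABLE norowid(a PRIMARY KEY) WITHOUT ROWID;
""")
# pragma_index_info(name) returns rows for a WITHOUT ROWID table name,
# which is what the hasRowid trick relies on.
rows = conn.execute("""
    SELECT name,
           NOT EXISTS(SELECT 1 FROM pragma_index_info(x.name)) AS hasRowid
      FROM sqlite_master AS x
     WHERE type = 'table'
     ORDER BY name;
""").fetchall()
print(rows)  # → [('normal', 1), ('norowid', 0)]
```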

05:27 Reply: Are there any plans / chances of supporting a RETURNING clause? (artifact: 4bbe68819e user: doug9forester)

Good point. If you want to always use the "auth source", how can SQLite help with that? Suppose you are displaying a list of xyz in your application, and someone adds (or deletes) members of xyz. What is the application supposed to do? It can't continually query the database in hopes of finding a change. Nor can it "listen" for a change and continually update the display of members, which would drive a user crazy.

So the single user design point works always when there actually is a single user. How can you actually design a user interface that faces continual (possible) updates in a multi-user environment? And can SQLite help with that?

18:49 Reply: Are there any plans / chances of supporting a RETURNING clause? (artifact: f07dd8d5a5 user: doug9forester)

In the real world, for applications which use pulldown lists (sometimes nested), querying a database to gather the list (and marshal the values into the form required by the menu interface) is just too costly for performance. As soon as SQLite provides an interface that matches a SELECT query to an arbitrary in-memory structure, I'll be glad to give it a whirl. However, I think you'd agree that it's the application's responsibility to provide the data in a form that matches the interface, not the database's responsibility.

06:37 Reply: Reload WAL when modified by outside process (artifact: 3ce0c9de42 user: doug9forester)

Please point to a reference to "what is a replication tool". Thanks.

17:00 Reply: What is the best way to have a custom order? (artifact: 66b0f85f4c user: doug9forester)

I use a single direction linked list. This implementation requires a dummy entry for position 0 but works well for me.

```
CREATE TABLE teachertable (
    teacher_id          INTEGER PRIMARY KEY,
    teacher_displayname TEXT NOT NULL UNIQUE,
    teacher_prev        INTEGER NOT NULL REFERENCES teachertable(teacher_id)
);
```

I insert new rows in the table within a transaction which inserts the new row and updates the link to put the item where it belongs.
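A sketch of that insert-and-relink transaction. The `teacher_id` primary key, the `insert_after` helper, and the traversal are my own illustration of the scheme, not the poster's actual code:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE teachertable(
        teacher_id          INTEGER PRIMARY KEY,
        teacher_displayname TEXT NOT NULL UNIQUE,
        teacher_prev        INTEGER NOT NULL
                            REFERENCES teachertable(teacher_id)
    );
    INSERT INTO teachertable VALUES (0, '(head)', 0);  -- dummy position 0
""")

def insert_after(conn, prev_id, name):
    """Insert name right after row prev_id, relinking the old successor."""
    with conn:  # one transaction for both statements
        cur = conn.execute(
            "INSERT INTO teachertable(teacher_displayname, teacher_prev)"
            " VALUES (?, ?)", (name, prev_id))
        new_id = cur.lastrowid
        # The old successor of prev_id now points at the new row.
        conn.execute(
            "UPDATE teachertable SET teacher_prev = ?"
            " WHERE teacher_prev = ? AND teacher_id NOT IN (?, ?)",
            (new_id, prev_id, new_id, prev_id))
        return new_id

insert_after(conn, 0, "Alice")
insert_after(conn, 0, "Bob")        # Bob now sits before Alice

# Walk the list from the dummy head by following the back-links forward.
order, node = [], 0
while True:
    row = conn.execute(
        "SELECT teacher_id, teacher_displayname FROM teachertable"
        " WHERE teacher_prev = ? AND teacher_id != ?", (node, node)).fetchone()
    if row is None:
        break
    order.append(row[1])
    node = row[0]
print(order)  # → ['Bob', 'Alice']
```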

06:56 Post: Request for change: support item lists with trailing comma (artifact: 7979bf1545 user: doug9forester)

I am generating SQL as part of C++ code generation for my application. I would like to not mess with trailing commas on lists of items in SQL statements. Specifically, I would like SQLITE to support the following syntax for the insert statement (see trailing comma in lists):

```
insert into tbl(a,b,c,) values(1,2,3,);
```

In version 3.31, SQLite does not support trailing commas in lists. C++ supports them in enum lists, which makes life much easier for code generation. I found that SQLite did not support trailing commas thus:

```
sqlite> .version
SQLite 3.31.1 2020-01-27 19:55:54 3bfa9cc97da10598521b342961df8f5f68c7388fa117345eeb516eaa837bb4d6
zlib version 1.2.11
sqlite> create table x(a,b,c);
sqlite> insert into x(a,b,c) values(1,2,3);
sqlite> select * from x;
sqlite> insert into x(a,b,c,) values(1,2,3);
Error: near ")": syntax error
sqlite> insert into x(a,b,c) values(1,2,3,);
Error: near ")": syntax error
```
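Since SQLite (and standard SQL) rejects trailing commas, a common workaround in a generator is to join the items rather than emit a comma after each one. A minimal sketch (table and column names invented):

```python
import sqlite3

cols = ["a", "b", "c"]
vals = [1, 2, 3]
# Joining guarantees no trailing comma regardless of list length.
sql = "INSERT INTO x({}) VALUES({});".format(
    ",".join(cols), ",".join("?" * len(vals)))
print(sql)  # → INSERT INTO x(a,b,c) VALUES(?,?,?);

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE x(a,b,c)")
conn.execute(sql, vals)
row = conn.execute("SELECT * FROM x").fetchone()
print(row)  # → (1, 2, 3)
```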

16:42 Reply: Visual Studio Compile Issue (artifact: 54499f0cd1 user: doug9forester)

This may not be your problem: I recently added a folder to the system PATH. The folder has tools in it that I have downloaded. Turns out that the folder tools conflicted with Qt's toolset so I had to remove the folder from Qt's path to get things to work.

06:50 Reply: BUG: Wrong line endings of shell.c prevent debugging on Windows (artifact: 56cd68bc4c user: doug9forester)

Does Qt interface with Fossil? It works great with git.

22:06 Reply: Typo in Pikchr documentations (artifact: 9e2336baa9 user: doug9forester)

Richard, I asked you for your process to make the diagrams for SQLITE back in June. You graciously gave me a link to your process. I downloaded the stuff at the end of the link and worked on it for at least a month trying to get it to work. Alas, I was not smart enough to follow all the myriad threads to all the various tools and stopped trying before success.

I still have the need to create diagrams to document my software. Pikchr looks simple to use, is self-contained, and has no dependencies. So I took a look at the documentation to learn about it. I was ready to use it, but (one more alas) could not find pre-compiled executables. Creating an executable from a .c file is simple on linux but not simple on Windows 10 (which is my platform of choice). I will download it and probably use it. I get tired of just one more learning curve.

When I figure out how to compile it, is there a way to give it back to Pikchr to place in their /files folder? It probably needs version control with version numbers... Anyway, thanks for pointing the way to a valuable tool for my tool chest.

21:08 Post: Typo in Pikchr documentations (artifact: 95d5b79905 user: doug9forester)

In the Pikchr documentation webpage (https://pikchr.org/home/doc/trunk/doc/integrate.md) there is a typo in section "Flags passed to pikchr()": "... in the even of an error ..." should be "... in the event of an error ...".

15:38 Reply: insert a uuid:string '0Xcd...' -> 1/logic-error near "0Xcd" , syntax error (artifact: 3b063938b4 user: doug9forester)

I find it strange that the query string reuses the single quote. No parser could handle this: query '....'..'..'..'..'..'..'. What's up with that? Do you need to escape single quotes within your quoted query string?
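For reference, single quotes inside an SQL string literal are escaped by doubling them ('' collapses to one '), which is why a quoted query can legally contain long runs of apostrophes:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The doubled '' inside the literal becomes a single apostrophe.
value = conn.execute("SELECT 'it''s a test'").fetchone()[0]
print(value)  # → it's a test
```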

03:19 Reply: CTEs in triggers (artifact: ecfa037464 user: doug9forester)

I disagree vigorously with the implementation of Sqlite which allows one to use SQL constructs which are "not supported". I think SQL is sufficiently complex that I should not have to pull out a manual (or search deeply) every time I use a construct. The parser/interpreter should surface an error when a "not supported" construct is used. To say it's on me to know everything there is to know about Sqlite's version of SQL is wrong.

19:17 Reply: How to implement a table working like a ring buffer (artifact: 479d97d787 user: doug9forester)

I've been watching this thread and thought I would offer an observation from the cheap seats. You seem to be shot-gunning your approach to this problem. Whenever I do that, I leave holes in unexpected places.

I would suggest that you devise a set of test cases, especially paying attention to the boundary conditions (none, one, up to 7 if you are using hardcoded limits like 6) and the boundary values (less than, duplicates, greater than). Wouldn't take much effort to put a pretty exhaustive set of tests together and would pay bundles of time and energy down the pike.

My $0.02. Doug

21:28 Reply: SQLITE_ENABLE_STAT4 triggers failures in analyze3.test (artifact: e7dda3e161 user: doug9forester)

OK, it's a lot of tests. But the real number of permutations is a lot less than 87! because of mutual exclusion. Still a lot!

Just throwing this out: for several years I used my computer to run a SETI program analyzing data streams looking for alien signals. Literally millions of computers put all their idle time into the analyses.

What if we could use the same technology to examine SQLITE code and compile options? What would be the worthiness of such an endeavor? Probably we might find a few compiler errors. Or we might discover some dead code if a certain combination of compile options were selected. Since the code quality is already very high (very few bugs), I would not expect to uncover many bugs.

But, if you tied the options to code segments and code segments to SQL statement pieces, then you could automate the test case generation. You could visualize the code in new and different ways.

We might start by partitioning the compiler options into groups of 5 or 10 that have something in common. Then dole each group out to 100 or a 1000 computers and let them crunch. Pull the results back and see what they say, what there is to learn.

The partitioning of the code is easy for a given set of compiler options. We have tools all over the place to allow us to gen deltas. If we use the delta generators in a new way we can gen a set of deltas for code changes for a single compiler option change. And by being smart about where in the code compiler options make a difference, we can reduce the permutations. We would find out rather quickly which compiler options interacted with each other. And which ones interacted with each other in strange and wonderful ways.

We can use SQLITE (of course) for the database underlying the analysis. You only need a pre-compiler to do this analysis (if you trust that all the various pre-compilers will generate the same code). How to tie the SQL to the code? We use Lemon for parsing, right? Take the output of Lemon and relate it to code segments. Since the code segments are influenced by the compiler options, one can link Lemon output and compiler options.

We need a body of SQL to drive this analysis. Collecting a unique set of SQL statements is another 87! problem which can be partitioned like compiler options can. I don't know lots of things about lots of things, but my application world revolves around databases. And I'd like to understand it a lot better.

16:17 Reply: SQLITE_ENABLE_STAT4 triggers failures in analyze3.test (artifact: 8804c00d8d user: doug9forester)

Dan (just for my interest): When you are made aware of a problem with "don't have a config that tests with that permutation", do you always create a new test case to cover it?

17:48 Reply: how to check update statement really modify a record (artifact: cf1a8a06e8 user: doug9forester)

Stephen, Why do you mention "custom collations"? For "set x=..", collation doesn't come into play, does it? The test would be "old.x == new.x". Collation doesn't apply to equality comparisons, does it?

14:46 Reply: how to check update statement really modify a record (artifact: fec968b57c user: doug9forester)

Simon, I'm now confused. I assumed that my earlier post about updating a value with the same value would get some response. I assumed that the change count would get incremented if you "UPDATE" a field with the same value it had.

You seem to be saying that an update will not happen if the value will not change. And therefore the change count will not increment. Is that true?

16:01 Reply: how to check update statement really modify a record (artifact: 991140f513 user: doug9forester)

I assume a "change" using UPDATE ... means that the appropriate values were set to the values in the UPDATE statement, not that the values were actually "changed". That is, if the current value of a field is 11 and your UPDATE sets it to 11 (the same value), a "change" will be recorded. There is nothing in the docs specifying that whether the value actually changed enters into the calculation.
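This reading checks out empirically: SQLite's change counter counts rows the UPDATE statement touched, not rows whose stored value actually differed (table name invented for the demonstration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(x)")
conn.execute("INSERT INTO t VALUES (11)")
cur = conn.execute("UPDATE t SET x = 11")  # same value as before
# cursor.rowcount reflects sqlite3_changes(): the row still counts.
print(cur.rowcount)  # → 1
```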

15:49 Reply: .selecttrace unknown and strange errors (artifact: 6c75dcc81b user: doug9forester)

Anonymous, I think your design has a major flaw: it cannot be proven correct. Your use of complex triggers which cannot be verified by reading them or running them seems to me to be dangerous to the integrity of your application.

My experience has led me to perform the trigger functionality at the application level where it is simple and verifiable. Get rid of the triggers.

If you generate different queries which test the conditions your triggers are triggered on, then you can run the right insert or update query at the application level to handle each condition. The flow through the application is testable, and not asynchronous and obtuse as triggers are. And, of course, you can wrap the whole set of queries in a transaction for data integrity.

My two cents...

14:37 Reply: Problem with Sqlite DELETE documentation (artifact: 6ae8bcbb00 user: doug9forester)

I understand the arbitrary order in which selected-for-delete rows are actually deleted. The last sentence of the paragraph doesn't include the phrase "selected for delete". Adding it dramatically changes the meaning of the sentence. I suggest this change: "The order in which the rows selected for delete are deleted is arbitrary and is not influenced by the ORDER BY clause."

14:10 Post: Problem with Sqlite DELETE documentation (artifact: f49ad6b35f user: doug9forester)

I was reading the Sqlite syntax diagram re Delete and found a paragraph which I think should be stricken. Here is what the text has to say about Delete. The last paragraph doesn't make sense in light of the first 3 paragraphs:

If a DELETE statement has a LIMIT clause, the maximum number of rows that will be deleted is found by evaluating the accompanying expression and casting it to an integer value. If the result of the evaluating the LIMIT clause cannot be losslessly converted to an integer value, it is an error. A negative LIMIT value is interpreted as "no limit". If the DELETE statement also has an OFFSET clause, then it is similarly evaluated and cast to an integer value. Again, it is an error if the value cannot be losslessly converted to an integer. If there is no OFFSET clause, or the calculated integer value is negative, the effective OFFSET value is zero.
If the DELETE statement has an ORDER BY clause, then all rows that would be deleted in the absence of the LIMIT clause are sorted according to the ORDER BY. The first M rows, where M is the value found by evaluating the OFFSET clause expression, are skipped, and the following N, where N is the value of the LIMIT expression, are deleted. If there are less than N rows remaining after taking the OFFSET clause into account, or if the LIMIT clause evaluated to a negative value, then all remaining rows are deleted.
If the DELETE statement has no ORDER BY clause, then all rows that would be deleted in the absence of the LIMIT clause are assembled in an arbitrary order before applying the LIMIT and OFFSET clauses to determine the subset that are actually deleted.
The ORDER BY clause on a DELETE statement is used only to determine which rows fall within the LIMIT. The order in which rows are deleted is arbitrary and is not influenced by the ORDER BY clause.

03:51 Reply: table locked when dropping a temp table (artifact: 81bc905a90 user: doug9forester)

I carved your data up using Excel, sorted by statement number, and got this:

```
Stmt  Command           SQL
 1    Prepare           PRAGMA cache_size = 10000
 1    Execute prepared
 1    Prepare           PRAGMA show_datatypes = ON
 1    Execute prepared
 1    Prepare           PRAGMA foreign_keys = 1
 1    Execute prepared
 2    Prepare           select * from variables order by orden
 2    Execute prepared
 3    Prepare           PRAGMA table_info('variables')
 3    Execute prepared
 4    Prepare           select * from variables
 4    Execute prepared
 5    Prepare           select * from tipocampo
 5    Execute prepared
 6    Prepare           PRAGMA table_info('tipocampo')
 6    Execute prepared
 7    Prepare           select * from campos order by id
 7    Execute prepared
 8    Prepare           PRAGMA table_info('campos')
 8    Execute prepared
 9    Prepare           select * from valores where campo=? order by orden
 9    Bind prepared
 9    Execute prepared
 9    Bind prepared
 9    Execute prepared
10    Prepare           PRAGMA table_info('valores')
10    Execute prepared
11    Prepare           select * from campos
11    Execute prepared
12    Prepare           select * from plcs
12    Execute prepared
13    Prepare           PRAGMA table_info('plcs')
13    Execute prepared
14    Prepare           select numero,color_fondo, color_letra from casillas where pantalla=? and numero>0 and numero<=? order by numero
14    Bind prepared
14    Execute prepared
14    Bind prepared
14    Execute prepared
14    Bind prepared
14    Execute prepared
14    Bind prepared
14    Execute prepared
15    Prepare           PRAGMA table_info('casillas')
15    Execute prepared
16    Prepare           select * from plcs
16    Execute prepared
17    Prepare           select * from pantallas order by numero
17    Execute prepared
18    Prepare           PRAGMA table_info('pantallas')
18    Execute prepared
19    Prepare           select * from casillas where pantalla=? and numero>0 and numero<=? order by numero
19    Bind prepared
19    Execute prepared
19    Bind prepared
19    Execute prepared
20    Prepare           select * from casillas where pantalla=? and numero=0
20    Bind prepared
20    Execute prepared
20    Bind prepared
20    Execute prepared
21    Prepare           BEGIN
21    Execute prepared
22    Prepare           PRAGMA defer_foreign_keys = on
22    Execute prepared
23    Prepare           UPDATE TriggerControl set enabled=0
23    Execute prepared
24    Prepare           DROP TABLE IF EXISTS x
24    Execute prepared
25    Prepare           CREATE TEMP TABLE x AS SELECT * FROM variables WHERE orden>=?
25    Bind prepared
25    Execute prepared
26    Prepare           UPDATE x SET orden=orden+1
26    Execute prepared
27    Prepare           DELETE FROM variables WHERE orden>=?
27    Bind prepared
27    Execute prepared
28    Prepare           INSERT INTO variables SELECT * FROM x
28    Execute prepared
29    Prepare           DROP TABLE x
??    Execute prepared
```

It helped me see what you are doing a little more clearly, but I didn't see a reason for the locked table. (It would be nice to know which table is locked.) When I examined the sequence, statement 14 caught my eye. It has 2 bind parameters in the SQL, and you call Bind 4 times. That's weird. That should have surfaced an error, I think. Are you checking for errors on each call?

14:38 Reply: Decimal128 (artifact: cf735cb9ca user: doug9forester)

There is another problem with rounding when dealing with currency. When you create a journal entry in a double-entry bookkeeping system, calculating the individual entries by rounding often leaves you a penny or two off for the total. So you have to add that discrepancy to an arbitrary entry to get the journal to balance. And then, if someone watching is really crazy, you've got to give the discrepancy to a different account the next month, etc.

Just saying...

07:42 Reply: Decimal128 (artifact: 34eee56332 user: doug9forester)

Rounding issues:

```
decimal_add('1.33','1.12',1) = '2.45' -> one of '2.4' or '2.5'
decimal_div('4.55','1.0',1)  = '4.55' -> one of '4.5' or '4.6'
```

What are the rounding rules? (Are we dredging up the FP discussion or what? :))

17:02 Reply: Are result values always the same? (artifact: 889ecf5dfe user: doug9forester)

Keith and Wout, thanks for the suggestions. The field pl_id is a primary key (and thus unique). However, the table is tiny. There will never be more than a handful of rows in it and most times just one or two. (It's the list of folders in which there are playlists that the user might want to play.)

So, given that it's tiny, every update will cause the whole table to be written. If the index you suggest exists, is it in the same physical record as the table? If it's in a separate record, then there is more I/O.

Just for interest, knowing that I/O is going to happen to ensure commit integrity, and that the table will fit in one record, what is the sequence of I/O operations to accomplish these queries, and is there any way to minimize the number of them?

```
update pltable set checkbox = (case when pl_id == ?1 then 1 else 0 end)
 where pl_id == ?1 or checkbox == 1;
```

and this one:

```
update pltable set checkbox = (case when pl_id == ?1 then 1 else 0 end);
```

20:21 Reply: Are result values always the same? (artifact: 494b99ab01 user: doug9forester)

Warren, No, I meant it like I wrote it. Every row gets updated with a value of 0 or 1 (or whatever value Sqlite decides to put in for the result of the logical expression). Only the one whose id is passed to the query gets a 1; all others get zero.

17:48 Post: Are result values always the same? (artifact: 12ec6b752b user: doug9forester)

Given the following statement which sets the checkbox 'on' for a given pl_id and sets it 'off' for all others:

```
UPDATE pltable SET checkbox = (pl_id = ?);
```

where the bind value is the id of a row in the table.

I tried it with sqlite3 and found that the value of checkbox is always 0 or 1 depending on the match or not.

Are the result values for this operation always 0 or 1? Does the SQL standard say so or is it implementation dependent? I mean, the result could be 0 and 99, right?

Also, should I define an index for this kind of query? If so, what should it be?
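On the first question: SQLite has no separate boolean type, and its comparison operators are documented to yield exactly 0 or 1 (or NULL), so the UPDATE above can only ever store those two values. A quick check (column values invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pltable(pl_id INTEGER PRIMARY KEY, checkbox)")
conn.executemany("INSERT INTO pltable(pl_id) VALUES (?)", [(1,), (2,), (3,)])
# Comparison (pl_id = ?) evaluates to integer 0 or 1 per row.
conn.execute("UPDATE pltable SET checkbox = (pl_id = ?)", (2,))
result = conn.execute(
    "SELECT pl_id, checkbox FROM pltable ORDER BY pl_id").fetchall()
print(result)  # → [(1, 0), (2, 1), (3, 0)]
```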

21:30 Reply: Help with foreign key (artifact: 1cddf371f4 user: doug9forester)

Each table has a primary key which makes a unique identifier for each row in that table. A foreign key reference means that the value in one table refers to the row in the other table identified by the primary key. But you have to tie the two values together, as:

```
CREATE TABLE artist(
    artistid   INTEGER PRIMARY KEY,
    artistname TEXT
);
CREATE TABLE track(
    trackid     INTEGER PRIMARY KEY,
    trackname   TEXT,
    trackartist INTEGER REFERENCES artist(artistid)
);
```

If you had used the same column name in both tables, you could leave out the foreign key column name.
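That schema can be exercised with foreign keys switched on (SQLite leaves enforcement off unless you enable the pragma per connection; the sample rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # enforcement is off by default
conn.executescript("""
    CREATE TABLE artist(artistid INTEGER PRIMARY KEY, artistname TEXT);
    CREATE TABLE track(trackid INTEGER PRIMARY KEY, trackname TEXT,
                       trackartist INTEGER REFERENCES artist(artistid));
""")
conn.execute("INSERT INTO artist VALUES (1, 'Ella')")
conn.execute("INSERT INTO track VALUES (1, 'Summertime', 1)")   # ok
try:
    conn.execute("INSERT INTO track VALUES (2, 'Orphan', 99)")  # no artist 99
    err = None
except sqlite3.IntegrityError as e:
    err = e
print(err)
```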

15:37 Reply: DROP VIEW IF EXISTS failure (artifact: 473dd4c8d7 user: doug9forester)

I have been following this discussion as an amicus curiae. I might be able to help solve your problem, but I need some more information about your system as a whole.

Presumably, you are generating the SQL which contains "DROP VIEW IF EXISTS..." semi- or fully-automatically; otherwise, you would know what exists and what doesn't and the issue would be moot.

Presuming you are generating the SQL, is there a way to make that process smarter? For example, do you have enough control of your database that you could add your own (look-aside) table to track the existence of views and tables, and then use that information to generate the SQL? The generated SQL could update the look-aside table so the next time you generated the SQL, it could be smarter.

For example, here is some sample generated SQL:

```
INSERT INTO lookaside (tname, ttype) VALUES('t1','TABLE');
INSERT INTO lookaside (tname, ttype) VALUES('v1','VIEW');
INSERT INTO lookaside (tname, ttype) VALUES('v2','VIEW');
DELETE FROM lookaside WHERE tname='v2';   -- smart gen knows it's a view
```

Your SQL generator can check "lookaside" to find out what exists and what doesn't and generate the correct SQL for each case.

21:31 Reply: Odd buffer overflow (artifact: ec517a2646 user: doug9forester)

Please post all of your SQL so we can try it.

01:14 Reply: java.lang.UnsatisfiedLinkError: dlopen failed: cannot locate symbol "__atomic_store_4" (artifact: 58860d5c5c user: doug9forester)

Did you mean to say "...GCC_VERSION..."?

18:00 Reply: What version of Sqlite am I using? (artifact: 42d03d7cce user: doug9forester)

Seeing as how Qt 5.14.1 was supposed to have Sqlite 3.30.1 and in reality had version 3.28.0, I will believe it when I see it.

17:56 Reply: What version of Sqlite am I using? (artifact: 17c5e54dd9 user: doug9forester)

Sorry to report that what you see is not what you get. When I use

```
SELECT sqlite_version();
```

at run time, I get version "3.28.0". My version of Qt Creator is 5.14.1.

05:57 Reply: What version of Sqlite am I using? (artifact: 05545ecde1 user: doug9forester)

Jens, Sqlite is buried in Qt for which I haven't downloaded the source.

"select sqlite_version();" worked fine to get my answer.

05:54 Reply: What version of Sqlite am I using? (artifact: 1073db4ccb user: doug9forester)

Thanks, Larry. I am on 3.28.0, which is the reason GENERATED fields are not recognized. Sigh... and it was such a good solution.

I don't really want to go through the learning curve of installing my own version of Sqlite in Qt. I don't know when Qt will get to 3.31. Is that information visible to Sqlite development?

21:16 Post: What version of Sqlite am I using? (artifact: bc302e4eaf user: doug9forester)

I am using Qt 5.14.1 for my application. Under the covers I am using Sqlite for my database. I need to find out what version of Sqlite is installed for it. How can I do this?
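For comparison, there are two ways to see which SQLite engine a wrapper actually links: ask the binding, and ask the engine itself through a query. The query form is what works from inside any wrapper, Qt included (shown here with Python's sqlite3 binding for illustration):

```python
import sqlite3

lib_version = sqlite3.sqlite_version                 # reported by the binding
conn = sqlite3.connect(":memory:")
query_version = conn.execute("SELECT sqlite_version()").fetchone()[0]
# Both report the same linked engine.
print(lib_version, query_version)
```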

17:57 Reply: Index question 2 (artifact: c05a383d49 user: doug9forester)

Keith, that seems broken to me. It is required to specify the table name when you define an index. Therefore, the index already has an affinity with a table. The scope should be a table instead of a database.

Is there some reason why it is scoped thusly?

14:40 Post: Index question 2 (artifact: b348d45c09 user: doug9forester)

What is the scope of an index name? Is it system-wide or table-wide? In other words, do I have to ensure unique names for the database, or just my table?
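The collision is easy to demonstrate: index names live in the per-database (schema) namespace, not per table, so the same name on two different tables fails (tables invented for the demonstration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("CREATE TABLE t1(a); CREATE TABLE t2(a);")
conn.execute("CREATE INDEX idx ON t1(a)")
try:
    conn.execute("CREATE INDEX idx ON t2(a)")  # same name, different table
    err = None
except sqlite3.OperationalError as e:
    err = e
print(err)  # → index idx already exists
```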
