SQLite Forum


50 most recent forum posts by user cuz

2022-01-27
22:40 Reply: Doc: Lack of punctuation leads to confusion (artifact: 08b9897156 user: cuz)

Had a chuckle at this observation - it reminded me of the old programmer joke:

You know your partner is a programmer when...
You write in the shopping list: "Bring 2 loaves of bread, and if they have eggs, bring a dozen.", and then find that the diligently returned shopping basket contains 12 loaves.

As to the suggested change, I would even go so far as to prefer the sentences to read:

The right-hand operand of an IN or NOT IN operator has no affinity if the operand is a list. If the operand is a SELECT, it has the same affinity as the result-set expression.

But that is the most minor nit and any clear statement would honestly be fine.

2022-01-26
16:36 Reply: empty DB created if DB file missing - is this inherent to SQLite (artifact: fd18c3296b user: cuz)

Ok, we are not saying you should sod off, just that the concentration of helpers is probably higher on a C# forum (or whatever platform you are using).

Where are you directed for documentation on your wrapper? Have you found anything there on opening SQLite DBs? Did it fail to answer the question?

If so, feel free to post the relevant section from the documentation and say how it is unclear or what remains unanswered. That way someone here can fill in the gaps, or perhaps even non-wrapper users (like myself) can offer advice or figure out what would make it work for you - or you could just wait until a System.Data.SQLite user rolls by (in case RandomCoder's reply isn't already enough).

15:11 Edit reply: empty DB created if DB file missing - is this inherent to SQLite (artifact: 472d80d5e3 user: cuz)

Your question was:

..."SQLiteConnection does not conttain a definition for Open_V2()". Is this a question that could be answered in this forum?

Answer: It is not impossible - there are very many SQLiters here and a broad range of wrapper users - but a forum for the specific wrapper/platform you are using is much more likely to produce more and better answers.

What we can tell you, and what Larry tried to say, is that SQLite itself can very easily do what you ask; it is part and parcel of its open_v2 capabilities. That said, the wrapper you use may have some way of communicating those same flags through to SQLite's open command (which it MUST call internally to actually open a DB), but we do not know exactly how to do that through your wrapper, since we (or most of us) know nothing about it - and that is what you may find more readily answered on a forum of people who also use it.

What Larry further believes (and I'm inclined to agree) is that your wrapper very probably has documentation answering that very question, so starting there (now knowing what you are looking for) is probably smarter and faster than hopping to the next forum.
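
For anyone reading along, this is roughly what that looks like at the C level (a minimal sketch; "mydata.db" is a made-up name, and your wrapper's spelling of the same idea will differ):

  #include <stdio.h>
  #include <sqlite3.h>

  int main(void)
  {
      sqlite3 *db = 0;
      /* Omitting SQLITE_OPEN_CREATE makes a missing file an error
         instead of silently creating an empty DB. */
      int rc = sqlite3_open_v2("mydata.db", &db, SQLITE_OPEN_READWRITE, 0);
      if (rc != SQLITE_OK) {
          fprintf(stderr, "cannot open: %s\n", sqlite3_errmsg(db));
          sqlite3_close(db);  /* the handle must be closed even on failure */
          return 1;
      }
      /* ... use the DB ... */
      sqlite3_close(db);
      return 0;
  }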

15:10 Reply: empty DB created if DB file missing - is this inherent to SQLite (artifact: 513b9d8694 user: cuz)

Your question was:

..."SQLiteConnection does not conttain a definition for Open_V2()". Is this a question that could be answered in this forum?

Answer: It is not impossible - there are very many SQLiters here and a broad range of wrapper users - but a forum for the specific wrapper/platform you are using is much more likely to produce more and better answers.

What we can tell you, and what Larry tried to say, is that SQLite itself can very easily do what you ask; it is part and parcel of its open_v2 capabilities. That said, the wrapper you use may have some way of communicating those same flags through to SQLite's open command (which it MUST call internally to actually open a DB), but we do not know exactly how to do that through your wrapper, since we (or most of us) know nothing about it - and that is what you may find more readily answered on a forum of people who also use it.

What Larry further believes (and I'm inclined to agree) is that your wrapper very probably has documentation answering that very question, so starting there (now knowing what you are looking for) is probably smarter and faster than hopping to the next forum.

12:23 Reply: How to start a program that use SQLITE with SYSTEMD ? (artifact: 64965c9e3b user: cuz)

Warren, I'm sorry, that is the longest dissertation defending a viewpoint I had already conceded was wrong in the previous post - what more do you want me to say? "I profusely apologize for being so very wrong, master..."?

Most of my post (and certainly the links) was about the latest intended use for the folders, not the name.

So in the interest of avoiding any further backlash, let me be extremely clear about the statements I am making, so that if there is any disagreement (which is welcome), it can be about one of these points:

  • I was thinking wrong initially - /usr does not mean "Unix System Resource".
  • I believe Warren is correct - /usr was originally meant to hold any of the User's data, whatever the acronym (or sentiment) has become lately.
  • Lately /usr is treated like a system folder, and although user-programs store some read-only types of data in there, the idea is to store any variable data (read-write stuff) in /tmp and /var instead (depending on the longevity required).
  • This, I believe, is the "idea" on all distros I've used lately, though it is not enforced on all of them the way it seems to be on some.
  • The OP is advised to "play ball" and use not /usr but indeed /var for the DB in question, for the best results universally across distros.

11:21 Edit reply: How to start a program that use SQLITE with SYSTEMD ? (artifact: 94eb0a7525 user: cuz)

The rules/roles do indeed change over time.

Not sure stackexchange is an authority on any matter, but a cursory search shows the authoritative sources basically all claim to be non-authoritative, so I stand corrected on the name (I did say "I think"). "User System Resources" seems to be the prevailing identity, though you can find many views out there; the sentiment remains that it is not for variable data - it is typically for User-side read-only System Resources (at least in modern times).

For more clarity, in case of interest - this from the Linux Documentation Project: tldp.org/LDP/Linux-Filesystem-Hierarchy/html/usr.html

And the Wikipedia on Folder Hierarchy: en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard

11:20 Reply: How to start a program that use SQLITE with SYSTEMD ? (artifact: d232fad1b0 user: cuz)

The rules/roles do indeed change over time.

Not sure stackexchange is an authority on any matter, but a cursory search shows the authoritative sources basically all claim to be non-authoritative, so I stand corrected on the name (I did say "I think"). "User System Resources" seems to be the prevailing identity, though you can find many views out there; the sentiment remains that it is not for variable data - it is typically for User-side read-only System Resources (at least in modern times).

For more clarity, in case of interest - this from the Linux Documentation Project: https://tldp.org/LDP/Linux-Filesystem-Hierarchy/html/usr.html

And the Wikipedia on Folder Hierarchy: https://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard

10:45 Reply: How to start a program that use SQLITE with SYSTEMD ? (artifact: 788f9effca user: cuz)

I've seen a lot of people (even long-time Linux users) mis-attribute the /usr folder as a "User" folder, which is why the notion that it will not be under the umbrella of system protection is prevalent. In fact it is very much a Linux system folder - if memory serves, "usr" = "Unix System Resource" - which is why Warren said that it is really not the correct folder for these kinds of files.

While /usr is OK to add setup/static info into, config info should reside in /etc, and any variable data/DBs/logs/scratch-pad stuff should really go in /tmp or /var (the variable-data folder, I think).

These rules are not set in stone (at least not on all distros), but it will make it easier for your files to work as expected if you follow these preferences.

2022-01-24
14:21 Reply: >= and <= (or between potentially) query (artifact: c297ae253c user: cuz)

Just out of interest, could you explain the purpose of the prefix_start and prefix_end columns?

Should you not simply have a prefix column? Do they ever differ for the same record?

Do you mind posting the output of this query:

SELECT * FROM Country LIMIT 10;

It will give us a much clearer picture, and we might have a suggestion for improving usage and/or speed for your queries or use case.

14:13 Edit reply: >= and <= (or between potentially) query (artifact: 64ff5392cb user: cuz)

Select Country_Name FROM Country WHERE Prefix_Start >= "A4A" and Prefix_End < "A4A"

Where you are going wrong is very easy to see: you are asking for values that are both equal-or-larger than A4A and also smaller than A4A. That set is empty, and any other expectation is a physical impossibility in the known universe.

Perhaps you mean:

Select Country_Name FROM Country WHERE Prefix_Start >= 'A4A' and Prefix_End < 'A4Z'

Note also the single-quotes, which are the real string-quoting characters in SQL.

14:12 Reply: >= and <= (or between potentially) query (artifact: 0f34b5d091 user: cuz)

Select Country_Name FROM Country WHERE Prefix_Start >= "A4A" and Prefix_End < "A4A"

Where you are going wrong is very easy to see: you are asking for values that are both equal-or-larger than A4A and also smaller than A4A. That set is empty, and any other expectation is a physical impossibility in the known universe.

Perhaps you mean:

Select Country_Name FROM Country WHERE Prefix_Start >= 'A4A' and Prefix_End < 'A4Z'

Note also the single-quotes, which are the real string-quoting characters in SQL.

13:27 Reply: binding by reference instead of value (artifact: 31435d1bec user: cuz)

Well, that already works like that for blobs; they do not bind like the rest - you have blob writer and reader functions which directly write those bytes where they should go (i.e. no internal memory cache exists for them as for other bound values, as far as I can tell).

Your argument can still work for, say, a very large TEXT value bound with one of the text binding functions, but that is the tiniest percentage of niche binds that happen. Still, if that is ALL your application does, I can see the implied need, but it will only help memory usage a bit (which isn't really constrained on most systems), and the actual writing I/O bottleneck will be umpteen times more responsible for any delays than moving the memory. No real gain here.

... it would [be] the applications responsibility to ensure that the values don't change until...

That is an easy statement to make, but a much less easy rule to adhere to, enforce, and test for in reality, especially with the gain being insignificant.

11:19 Reply: binding by reference instead of value (artifact: 44fd34bc83 user: cuz)

I think what the OP (and subsequent posters) are talking about is a binding where only the reference persists, but the value may change.

i.e. if they do some pseudo-code like this:

  int myValue = 5;
  sqlite3_bind_integer(statement, 1, pointerTo(myValue));
  sqlite3_step(); // At this point the SQL executes with the bound value = 5
  ...
  myValue = 7;
  sqlite3_step(); // Here the same executes without rebinding, but now the bound value = 7
  ...
  etc.

This is a fool's errand though and I cannot see how this will ever work well in SQLite3. To name but a few initial concerns:

  • SQLite now has to keep an array of bound pointer references in memory, over and above the memory allocated for the physical values.
  • What if some parameters are bound by reference, but some by value, should re-binding "some" values work?
  • What if the referenced memory changes right inside SQLite's routines, who takes responsibility for that memory being accessible and not altering it ever? SQLite's internal multi-thread semaphores can never control for this.
  • What do we really gain? No real efficiency is saved, no real CPU cycles - but perhaps some programmer typing can be saved.
  • Adding a lot of new code to achieve something that is inherently simple to do currently will require added testing, and this seems like a great target for trouble and new vulnerabilities.

Add to that the fact that it is so easy to emulate yourself in your own code. Just make a bind function that has all your references, binds them and steps, so you can easily have this pseudo-code in your main:

  function int bind_and_step() 
      for n in bindProperty[]
          sqlite3_bind_int(statement, n, int(bindProperty[n]));
      return sqlite3_step();

  ...

  int myValue = 5;
  bindProperty[0] = pointerTo(myValue);
  ...
  bind_and_step(); 
  ...
  myValue = 7;
  bind_and_step();
  ...
  etc. etc.
  

Here the programmer's bind_and_step() procedure and changing of any of the bound values take full responsibility for the memory so assigned/dereferenced, neatly putting the legion of possible problems squarely into the domain of the programmer's code and away from SQLite.

What is more, if the function also took care of return values and errors, the resulting main section could have even less typing than if SQLite were doing this internally.
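
For the curious, such a wrapper against the real C API could look like this (a sketch only; "bind_and_step" and "values" are invented names, and it assumes a prepared statement whose parameters are all integers):

  #include <sqlite3.h>

  /* Re-bind the caller's current values and execute one step.
     The caller may change the values between calls; SQLite copies
     them afresh on every bind, so ownership stays with the caller. */
  static int bind_and_step(sqlite3_stmt *stmt, const int *values, int n)
  {
      int i;
      sqlite3_reset(stmt);                           /* make it runnable again */
      for (i = 0; i < n; i++)
          sqlite3_bind_int(stmt, i + 1, values[i]);  /* parameter indexes are 1-based */
      return sqlite3_step(stmt);
  }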

2022-01-19
00:29 Reply: pragma table_info confusion by multiple attache databases (artifact: 9ef33a942d user: cuz)

The function does not have a schema - or, put differently, all functions can be called from all schemas. It's only the table that has a schema.

i.e. try:

a:select * from  pragma_table_info('TEST', 'arch');
etc.

Which should make it obvious, and to see what is meant with "all functions are available in all schemas", note that:

select * from  pragma_table_info('TEST', 'arch');
select * from  arch.pragma_table_info('TEST', 'arch');
select * from  store.pragma_table_info('TEST', 'arch');

are all 100% equivalent in meaning.

2022-01-18
15:49 Reply: WAL with Read Only Access (artifact: 1fbc41a772 user: cuz)

The previous replies are already correct, but it seems you may be missing some basics, which I will try to fill in here.

When you open an SQLite database in read-only mode, it doesn't change the database into some form of "read-only" database; it merely means that you promise (and can be held to the promise) that you will not try to make changes to the DB from THIS connection you are making.

It says nothing about any other connection or the DB itself. It's like going to a shop and telling the shopkeeper you promise not to buy anything while you are inside, and the shopkeeper saying OK, but then please leave your wallet/cash/cards outside - now the shopkeeper knows for a fact you cannot buy anything, but you can look around all you want. This does not mean that the shop will stop selling stuff; it's still open for business to other people. It also does not prevent anyone else from using the shop - they can still use it, and some of them might still buy stuff from it.

it looks strange that there is no possibility to open db file directly from read only access location/directory.

There is no way to make an SQLite DB file become "read-only", other than perhaps making the OS/folder treat it as read-only, but that's like shutting the shop's doors - now nobody can use it. Even if the DB is opened in read-only mode, the connection must still write locks or WAL logs simply to have read-transactions on the DB file.

If you really have a use-case for a read-only DB, then store the DB file in a read-only location and, before using it from a new connection, first copy the DB file to a temp folder (one with write-access), OR serialize it into memory, then open that temporary/memory DB and read it to your heart's content. Once done, kill the file or release the memory and all is back as it was before.
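
A sketch of that "serialize it into memory" route using the C API (error handling mostly elided; sqlite3_deserialize() is built in since SQLite 3.36.0, older builds need SQLITE_ENABLE_DESERIALIZE):

  #include <stdio.h>
  #include <sqlite3.h>

  /* Load a DB file from a read-only location into an in-memory DB. */
  static sqlite3 *open_readonly_copy(const char *path)
  {
      FILE *f = fopen(path, "rb");
      if (!f) return 0;
      fseek(f, 0, SEEK_END);
      sqlite3_int64 n = ftell(f);
      fseek(f, 0, SEEK_SET);
      unsigned char *buf = sqlite3_malloc64(n);  /* must be sqlite3_malloc'd memory */
      fread(buf, 1, (size_t)n, f);
      fclose(f);

      sqlite3 *db = 0;
      sqlite3_open(":memory:", &db);
      /* Hand the bytes over; SQLite frees them when the connection closes. */
      sqlite3_deserialize(db, "main", buf, n, n,
                          SQLITE_DESERIALIZE_FREEONCLOSE |
                          SQLITE_DESERIALIZE_READONLY);
      return db;  /* read to your heart's content, then sqlite3_close() */
  }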

2022-01-15
14:16 Reply: Release db file (artifact: 36b90dc960 user: cuz)

Adding to what Simon said, I have done this a few times (made things that create an SQLite DB file and then have to kill/delete the DB).

A few things to keep in mind:

  • It's pretty straightforward - open the connection, close the connection, delete the file.
  • Avoid journal modes WAL and TRUNCATE for this (else you might have lingering journal files).
  • Make sure you do not have a second connection or connection pool which is not closed at the time you try to delete the file.
  • Make sure you don't "inspect" your work using another external DB viewer (I typically use SQLiteSpeed alongside to see if my program is doing the right things in the DB) - that keeps an open connection to the DB, and as long as that exists you cannot delete the file from your program, even if it owns the file.
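
For what it's worth, the happy path in C terms is just this (a sketch; error handling elided and "scratch.db" is a made-up name):

  #include <stdio.h>      /* remove() */
  #include <sqlite3.h>

  int main(void)
  {
      sqlite3 *db = 0;
      sqlite3_open("scratch.db", &db);
      /* DELETE journal mode: no lingering -wal/-shm files on close. */
      sqlite3_exec(db, "PRAGMA journal_mode=DELETE;", 0, 0, 0);
      /* ... create tables, do the work ... */
      sqlite3_close(db);      /* returns SQLITE_BUSY if statements are still open */
      remove("scratch.db");   /* delete only after the last connection is gone */
      return 0;
  }
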
2022-01-14
14:34 Reply: use of "notnull" and "unique" in column names (artifact: 761b686282 user: cuz)

Do you mind clarifying exactly which version of SQLite is doing this?

I cannot reproduce the problem on my side - here is the same query you mentioned, first with the correctly quoted column name and second with the unquoted column name.

The second one fails as it should, but it does so immediately and with a sane error report - nothing about being "out of memory":

  -- SQLite version 3.35.4  [ Release: 2021-04-02 ]  on  SQLitespeed version 2.1.3.11.
  -- ================================================================================================

SELECT cid, name, type, "notnull", dflt_value, pk
  FROM pragma_table_info('language_list');

  -- cid|name           |type     |notnull|dflt_value       | pk
  -- ---|---------------|---------|-------|-----------------|---
  --  0 |id             |INTEGER  |   0   |NULL             | 1 
  --  1 |iso639_1_code  |TEXT     |   1   |NULL             | 0 
  --  2 |language_key   |TEXT     |   1   |NULL             | 0 
  --  3 |change_cookie  |INT      |   1   |0                | 0 
  --  4 |language       |TEXT     |   1   |NULL             | 0 
  --  5 |native_name    |TEXT     |   1   |NULL             | 0 
  --  6 |added          |NUMERIC  |   1   |datetime('now')  | 0 



SELECT cid, name, type, notnull, dflt_value, pk
  FROM pragma_table_info('language_list');

  -- ------------------------------------------------------------------------------------------------

  -- 2022-01-14 16:26:21.790  |  [Info]       Script failed - Rolling back...
  -- 2022-01-14 16:26:21.790  |  [Success]    Transaction Rolled back.
  -- 2022-01-14 16:26:21.790  |  [ERROR]      Failed to complete: 
  -- -------  DB-Engine Logs (Contains logged information from all DB connections during run)  ------
  -- [2022-01-14 16:26:21.744] APPLICATION : Script D:\Documents\SQLiteAutoScript.sql started at 16:26:21.744 on 14 January.
  -- [2022-01-14 16:26:21.789] ERROR (1)   : near "notnull": syntax error in "SELECT cid, name, type, notnull, dflt_value, pk
  FROM pragma_table_info('language_list');"
  -- [2022-01-14 16:26:21.789] QUERY       : Failed Query: SELECT cid, name, type, notnull, dflt_value, pk    FROM pragma_table_info('language_list');  
  -- [2022-01-14 16:26:21.789] INTERFACE   : DB Error: near "notnull": syntax error
  -- ================================================================================================

2022-01-13
11:27 Edit reply: csv extension bug ? (artifact: 1cf93647ed user: cuz)

I'm not 100% sure what is being done here, but a cursory glance makes it clear your escaping is not correct or at least, not consistent.

Try this and see if it works:

const char * sql =
"CREATE VIRTUAL TABLE csv_tab USING "
"csv(header = True, filename = 'c:/tab.csv', "
"schema = 'CREATE TABLE csv_schema ("
"some_date DATE NOT NULL,"
"some_str1 TEXT COLLATE BINARY DEFAULT '''',"
"some_str2 TEXT COLLATE BINARY DEFAULT '''',"
"some_flt DOUBLE"
"); ');";
11:24 Reply: csv extension bug ? (artifact: a5866373cc user: cuz)

I'm not 100% sure what is being done here, but a cursory glance makes it clear your escaping is not correct or at least, not consistent.

Try this and see if it works:

const char * sql =
"CREATE VIRTUAL TABLE csv_tab USING "
"csv(header = True, filename = 'c:/tab.csv', "
"schema = 'CREATE TABLE csv_schema ("
"some_date DATE NOT NULL,"
"some_str1 TEXT COLLATE BINARY DEFAULT '''',"
"some_str2 TEXT COLLATE BINARY DEFAULT '''',"
"some_flt DOUBLE'"
"); ');";
2022-01-12
19:10 Reply: SELECT with LIMIT OFFSET clause return unexpected result (with sample db and query) (artifact: 246acd3c9b user: cuz)

Confirmed a bug.

Here is a minimal Query to reproduce:

WITH dt(cdate) AS (
    SELECT '2021-01-01' UNION ALL
    SELECT '2021-01-01'
)
SELECT CASE WHEN
       (SELECT DISTINCT date(cdate)
          FROM dt
         ORDER BY date(cdate) DESC
         LIMIT 1 OFFSET 10
       ) IS NULL THEN 'Pass' ELSE 'FAIL' END AS Test
  FROM dt;

  -- Test  
  -- ------
  -- FAIL  
  -- FAIL  



The problem seems to be with the ordering being DESC and the two dates being exactly the same.

Here are two similar examples, one with the ordering being ASC instead of DESC, and the next with the dates being dissimilar:


WITH dt(cdate) AS (
    SELECT '2021-01-01' UNION ALL
    SELECT '2021-01-01'
)
SELECT CASE WHEN
       (SELECT DISTINCT date(cdate)
          FROM dt
         ORDER BY date(cdate) ASC
         LIMIT 1 OFFSET 10
       ) IS NULL THEN 'Pass' ELSE 'FAIL' END AS Test
  FROM dt;

  -- Test  
  -- ------
  -- Pass  
  -- Pass  



WITH dt(cdate) AS (
    SELECT '2021-01-01' UNION ALL
    SELECT '2021-01-02'
)
SELECT CASE WHEN
       (SELECT DISTINCT date(cdate)
          FROM dt
         ORDER BY date(cdate) DESC
         LIMIT 1 OFFSET 10
       ) IS NULL THEN 'Pass' ELSE 'FAIL' END AS Test
  FROM dt;


  -- Test  
  -- ------
  -- Pass  
  -- Pass 
 

Hope that helps.

10:23 Reply: Proposed JSON enhancements. (artifact: 1b9d27a10f user: cuz)

Not sure this suggestion is exactly in the spirit of Postel's law.

Citing my own law, I will say that I much prefer the return value for "This is invalid/malformed JSON" to be distinct from "The path you specified does not exist in this valid well-formed JSON".

I won't advocate for it in this case because there are other functions that can check JSON validity, including in CHECK constraints, and in my projects (as I would advise to all) the SCHEMA would be tasked with ensuring JSON validity before any function is tried on said JSON values. This means that by the time a value is in the DB, it is already confirmed to be valid, so any null return can only mean that the specified path is wrong or doesn't exist in the JSON. Ditto when acting upon the output of a prior JSON function.
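
For illustration, tasking the schema with that looks like this (a sketch; "docs" and "body" are invented names):

  #include <sqlite3.h>

  /* The CHECK constraint makes the schema reject malformed JSON at
     INSERT/UPDATE time, so later json_*() calls never see invalid
     input from this column. */
  static int create_docs_table(sqlite3 *db)
  {
      return sqlite3_exec(db,
          "CREATE TABLE docs("
          "  id   INTEGER PRIMARY KEY,"
          "  body TEXT NOT NULL CHECK (json_valid(body))"
          ");", 0, 0, 0);
  }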

I think that is just fine in this case.

00:26 Edit reply: System.Data.SQLite.Core SUM operation always returns as Int64 (artifact: 4040a49300 user: cuz)

SQLite itself has no say in that, but Joe Mistachkin, who maintains the System.Data.SQLite wrapper, is on here too and may have some advice.

You may also get good results asking on a C# forum.

That said, as a matter of professional curiosity, what does your C# return when you do:

var sqlCmd = new SQLiteCommand("SELECT SUM(Cost * Quantity) * 1.0 FROM Parts", connection);

00:26 Reply: System.Data.SQLite.Core SUM operation always returns as Int64 (artifact: fc54218b1e user: cuz)

SQLite itself has no say in that, but Joe Mistachkin, who maintains the System.Data.SQLite wrapper, is on here too and may have some advice.

You may also get good results asking on a C# forum.

That said, as a matter of professional curiosity, what does your C# return when you do:

var sqlCmd = new SQLiteCommand("SELECT SUM(Cost * Quantity) * 1.0 FROM Parts", connection);

2022-01-11
00:20 Reply: SQLite3 shell doing math operation on parameter substitution (artifact: 24ff42a75b user: cuz)

Never mind the previous post - it's late here; that is obviously command-line arguments using the CLI.

My guess is the error is in the CLI code, not the SQLite engine code.

00:15 Reply: SQLite3 shell doing math operation on parameter substitution (artifact: 1493ae023c user: cuz)

That's seemingly a PERL/PHP (or whatever language that is) problem, or perhaps the wrapper being used, but not an SQLite problem.

In SQLite, using the C API, that "Parameter setting" would be bound using specialized API functions such as "sqlite3_bind_text()" or "sqlite3_bind_int64()", which cannot and will not be misconstrued.

I'm assuming PHP tries to determine the type of the parameter and then decides which bind-function to use, and in there it (PHP) actually misinterprets the value as an Integer and also parses it as an expression. SQLite wouldn't do that, even if the value were passed using the wrong "bind" function (if that is even possible).
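
To see that a bound value is never parsed as an expression, here is a tiny C-API check (a sketch):

  #include <stdio.h>
  #include <sqlite3.h>

  int main(void)
  {
      sqlite3 *db;
      sqlite3_stmt *stmt;
      sqlite3_open(":memory:", &db);
      sqlite3_prepare_v2(db, "SELECT ?1;", -1, &stmt, 0);
      /* The text "1+1" is bound as an opaque value, never as SQL. */
      sqlite3_bind_text(stmt, 1, "1+1", -1, SQLITE_STATIC);
      sqlite3_step(stmt);
      printf("%s\n", (const char *)sqlite3_column_text(stmt, 0));  /* prints: 1+1 */
      sqlite3_finalize(stmt);
      sqlite3_close(db);
      return 0;
  }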

Perhaps a PHP forum might shed some light on it.

2022-01-08
21:52 Reply: recommendation for reclaiming the db when disk is full (artifact: 8254297ea9 user: cuz)

While "1. - Doing periodic backup to flash" is always a good idea, it won't prevent or circumvent a filled-up memory, but it will prepare a good backup in the case that (or any other loss) happens.

As to the other point, about monitoring that informs SQLite to "not further populate db if it is close to getting full", I don't see it having any reliable success rate. SQLite uses memory for all kinds of things; DB storage is only one of those things (in an in-memory DB), and all manner of DB management, even select queries, will use some or other amount of memory - sometimes lots of it, up to double the size of the DB itself for certain operations.

There is no amount of monitoring and forewarning of SQLite that seems helpful to me. What is helpful is making sure there is always an amount of memory free that is equal to or more than the size (or expected size) of the DB itself, over and above the memory used by the DB. As soon as that is no longer the case, warn a user, the DB admin, or the systems administrator.

Some systems, such as MSSQL or MySQL and the like, eat as much memory as you allow them; do not run these on the same system, or if you do, ensure their memory usage is limited to a point where you know there will be enough left for double the SQLite DB's expected final size. Then, once such a size is decided, limit SQLite itself by setting the max page-count so it too knows to fail once it grows beyond that.
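
For instance (a sketch; the 262144 figure is just an illustrative cap of 1 GiB at the default 4 KiB page size):

  #include <sqlite3.h>

  /* Growth beyond the page-count cap fails with SQLITE_FULL instead
     of exhausting the remaining memory. */
  static int cap_db_size(sqlite3 *db)
  {
      return sqlite3_exec(db, "PRAGMA main.max_page_count = 262144;", 0, 0, 0);
  }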

If you keep adding water to a bucket it WILL overflow, but you can decide at which point that happens by choosing the size of the bucket, and deciding what to do and whom to warn when the high-water mark is reached, before it starts spilling. There is no way to automate this without committing to disk, unless part of maintenance is deleting (losing) old data from the DB - and even then, the size reduction may need to be forced with a VACUUM, which is exactly an operation that may require twice the DB size in memory to complete.

I'm of the belief that if you are not using at least 80% of your available memory, then you've wasted your money buying it. But allocating 2GB for a 1GB database counts precisely as "using" it, even though it may not be occupied fully at all times. Running out of memory is much worse, especially if you start swapping to disk, or worse still, if there is no disk to swap to.

I suppose what I tried to say in far too many words is this: please "plan" your memory usage with enough margin and set everything up to respect those limits. This is much wiser than trying to measure/monitor the usage and putting reactive measures in place for when it overfills.

2022-01-07
16:35 Reply: Calculating time between two dates (artifact: 35ebfdd53e user: cuz)

There are a lot of ways to achieve this, mostly through some form of the Julian-day calculation as pointed out by the previous posts, but you could as easily use Unix seconds - strftime('%s', somedate) - or whatever function returns a date/time value as a proportional scalar number.

Here are a few examples of different ways to do it:

  -- SQLite version 3.35.4  [ Release: 2021-04-02 ]  on  SQLitespeed version 2.1.3.11.
  -- ================================================================================================
  -- First, just some setup of the demo data, included for completeness only
create table concerts (
  id          INT     not null,
  artist      TEXT    not null,
  concertDate NUMERIC not null -- PS: "NUMERIC" is better, but TEXT works too.
);

insert into concerts values
 (100, 'FOO FIGHTERS', '2019-03-22')
,(101, 'JOE COCKER',   '2007-03-22')
,(102, 'ELO',          '1998-04-18')
,(103, 'YES',          '2003-11-02')
;


SELECT * FROM concerts;

  --      id     |artist        |concertDate 
  -- ------------|--------------|------------
  --      100    |FOO FIGHTERS  |2019-03-22  
  --      101    |JOE COCKER    |2007-03-22  
  --      102    |ELO           |1998-04-18  
  --      103    |YES           |2003-11-02  


  -- Example 1 - Isolating the latest and previous items in sub queries 
  -- and correlated sub queries:
SELECT PD.latest, PD.prev
  FROM (
    SELECT LD.latest, (SELECT MAX(concertDate) FROM concerts WHERE concertDate < LD.latest) AS prev
      FROM (SELECT MAX(concertDate) AS latest FROM concerts) AS LD
  ) AS PD
 ;

  -- latest      |prev        
  -- ------------|------------
  -- 2019-03-22  |2007-03-22  


  -- Example 2: Doing the same, but translating it to time-spans:
SELECT PD.latest, PD.prev,
       (julianday(PD.latest)-julianday(PD.prev)) AS Δ_days,
       (julianday(PD.latest)-julianday(PD.prev))/30.437 AS Δ_months,
       (julianday(PD.latest)-julianday(PD.prev))/365.25 AS Δ_years
  FROM (
    SELECT LD.latest, (SELECT MAX(concertDate) FROM concerts WHERE concertDate < LD.latest) AS prev
      FROM (SELECT MAX(concertDate) AS latest FROM concerts) AS LD
  ) AS PD
;

  -- latest      |prev        |   Δ_days   |            Δ_months|   Δ_years  
  -- ------------|------------|------------|--------------------|------------
  -- 2019-03-22  |2007-03-22  |   4383.0   |    144.002365541939|    12.0    


  -- Example 3: Use a window function in case you are interested in ALL the
  -- time-spans between concerts, not just the latest:
SELECT id, artist, concertDate,
       MIN(concertDate) OVER prev AS previousConcert,
       julianday(concertDate) - julianday(MIN(concertDate) OVER prev) AS delta
  FROM concerts
WINDOW prev AS (ORDER BY concertDate ASC ROWS 1 PRECEDING)
;

  --      |              |            |previousCon-|        
  --   id |artist        |concertDate |cert        |  delta 
  -- -----|--------------|------------|------------|--------
  --  102 |ELO           |1998-04-18  |1998-04-18  |   0.0  
  --  103 |YES           |2003-11-02  |1998-04-18  | 2024.0 
  --  101 |JOE COCKER    |2007-03-22  |2003-11-02  | 1236.0 
  --  100 |FOO FIGHTERS  |2019-03-22  |2007-03-22  | 4383.0 

  -- ------------------------------------------------------------------------------------------------

2022-01-01
10:16 Reply: My code's ATTACH will not create a new db but sqlite3's shell will? (artifact: dc7d2faade user: cuz)

Ok, after some testing, I am relaxing, as the ATTACH in my code absolutely always fails if the given DB does not exist, even when attaching to a long-existing large DB.

I know this does not solve your problem, but at least it makes me sleep easy again. :)

Going to have to +1 Larry's suggestion that the behaviour seen might be a bug. If it is intended, I am not finding any documentation or literature anywhere in support.

10:04 Reply: My code's ATTACH will not create a new db but sqlite3's shell will? (artifact: 0a0471746b user: cuz)

Apologies, that was my knee-jerk quick reply after only half-reading the question and skimming the code. I mean, it might work, but I am not sure the open mode of the main DB transfers to any ATTACHed DB - you'll need to test that.

Even if that works, it's only half a solution because you might want that for attaches, but not for the main DB.

I'm sure Larry's observation might solve it for you, but I'm a bit surprised, as I did not expect that to work. I'm still thinking there's probably reason enough to make it possible to specify whether you want an ATTACH to create the DB or fail if it doesn't exist. I much prefer it to fail (in line with my misguided belief until now).
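
For what it's worth, URI filenames already give you that knob, if I recall correctly (a sketch; "other.db" is a made-up name, and the connection must have URI handling enabled, e.g. via SQLITE_OPEN_URI):

  #include <sqlite3.h>

  /* mode=rw  -> the ATTACH fails if the file does not exist;
     mode=rwc -> the file is created if missing (the usual behaviour). */
  static int attach_existing_only(sqlite3 *db)
  {
      return sqlite3_exec(db, "ATTACH 'file:other.db?mode=rw' AS other;",
                          0, 0, 0);
  }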

09:41 Edit reply: My code's ATTACH will not create a new db but sqlite3's shell will? (artifact: b094c1b0a3 user: cuz)

You should include both the flags: SQLITE_OPEN_READWRITE and SQLITE_OPEN_CREATE.

09:39 Reply: My code's ATTACH will not create a new db but sqlite3's shell will? (artifact: fad6186b00 user: cuz)

You should include both the flags: SQLITE_OPEN_READWRITE and SQLITE_OPEN_CREATE
2021-12-31
00:16 Reply: strings separated by ";" convert to columns (artifact: f79ee7d366 user: cuz)

Here's one that's maybe simpler, with different versions (read the whole thread), that splits on spaces (but you can change those to commas or semicolons easily):

forum post 840c98a8e87c2207?t

It does split it into rows though. There is no way in any SQL engine (SQLite or otherwise) to split text into output columns, as you seem to be requesting. All output columns, for any query, have to be known at the time of parsing.

Hope that helps.

2021-12-24
12:04 Reply: Selecting certain sub string (artifact: 7d56d012ba user: cuz)

Something fun to do - thanks.

Here is one solution (using many of the tricks already expressed by others) plus some Common Table Expressions to weed out the rest.

Assumptions:

  • There is always a full path given
  • The filename always contains both the year-month and the file number
  • The year-month is always 6 digits long
  • The file-number is always 6 digits long

Everything else should be dynamically handled.

  -- SQLite version 3.35.4  [ Release: 2021-04-02 ]  on  SQLitespeed version 2.1.3.11.
  -- ================================================================================================

  -- Create an example table containing example paths...

CREATE TABLE paths(pathname TEXT);

INSERT INTO paths VALUES
 ('D:\DOSYALAR\2021-07\0012090829\0012090829-202107-K-000000.zip')
,('D:\DOSYALAR\2021-07\0012090829\GIB-0012090829-202107-KB-000000.zip')
,('D:\DOSYALAR\2021-08\16270558554\GIB-16270558554-202108-YB-000000.zip')
,('D:\DOSYALAR\2021-08\35345023780\35345023780-202108-K-000000.zip')
,('D:\DOSYALAR\2021-08\9920362895\GIB-9920362895-202108-YB-000000.zip')
,('D:\DOSYALAR\2021-08\3310378950\3310378950-202108-K-000001.zip')
;


-- Show how the path-length identifier works. It keeps looking for the
-- next slash in the name, skips to it and cuts off the handled part,
-- using the row-id to identify the rows. (It could use the primary key
-- or unique index too, but my example did not have any):

WITH pathpos(path_id, slash_idx, remaining_path) AS (
    SELECT rowid, 0, pathname FROM paths
    UNION ALL
    SELECT path_id, slash_idx + instr(remaining_path,'\'), substr(remaining_path, instr(remaining_path, '\') + 1)
      FROM pathpos
     WHERE instr(remaining_path, '\') > 0  LIMIT 1000
), pathlen(path_id, last_slash_idx, remaining_path) AS (
    SELECT path_id, MAX(slash_idx), remaining_path
      FROM pathpos
     GROUP BY path_id
)
SELECT path_id, last_slash_idx AS directory_length, remaining_path AS filename
  FROM pathlen
;

  --             |director-|                                      
  --    path_id  | y_length|filename                              
  -- ------------|---------|--------------------------------------
  --       1     |    31   |0012090829-202107-K-000000.zip        
  --       2     |    31   |GIB-0012090829-202107-KB-000000.zip   
  --       3     |    32   |GIB-16270558554-202108-YB-000000.zip  
  --       4     |    32   |35345023780-202108-K-000000.zip       
  --       5     |    31   |GIB-9920362895-202108-YB-000000.zip   
  --       6     |    31   |3310378950-202108-K-000001.zip        




-- The next query shows using the above and more string searches and
-- sub-string handlers to get the rest of the information.

WITH pathpos(path_id, slash_idx, remaining_path) AS (
    SELECT rowid, 0, pathname FROM paths
    UNION ALL
    SELECT path_id, slash_idx + instr(remaining_path,'\'), substr(remaining_path, instr(remaining_path, '\') + 1)
      FROM pathpos
     WHERE instr(remaining_path, '\') > 0  LIMIT 1000
), pathlen(path_id, last_slash_idx, remaining_path) AS (
    SELECT path_id, MAX(slash_idx), remaining_path
      FROM pathpos
     GROUP BY path_id
), ym_start(path_id, yearmonth_idx, yearmonth_part) AS (
    SELECT path_id,
           instr(substr(remaining_path,5), '-') + 1,
	   substr(remaining_path, 5)
      FROM pathlen
), ym_end(path_id, yearmonth, fileno_idx, yearmonth_part) AS (
    SELECT path_id,
           substr(yearmonth_part, yearmonth_idx, 6),
           instr(substr(yearmonth_part, yearmonth_idx + 7), '-') + 1,
           substr(yearmonth_part, yearmonth_idx + 7)
      FROM ym_start
), all_info(path_id, directory, filename, type, yearmonth, fileno) AS (
    SELECT pathlen.path_id,
           substr(paths.pathname, 1, pathlen.last_slash_idx),
	   pathlen.remaining_path,
	   substr(ym_end.yearmonth_part, 1, ym_end.fileno_idx - 2),
	   ym_end.yearmonth,
	   substr(ym_end.yearmonth_part, ym_end.fileno_idx, 6)
      FROM pathlen
      JOIN ym_end ON ym_end.path_id = pathlen.path_id
      JOIN paths  ON paths.rowid    = pathlen.path_id
)
SELECT *
  FROM all_info
;


  --    path_id  |directory                         |filename                              | type|yearmonth| fileno 
  -- ------------|----------------------------------|--------------------------------------|-----|---------|--------
  --       1     |D:\DOSYALAR\2021-07\0012090829\   |0012090829-202107-K-000000.zip        |  K  |  202107 | 000000 
  --       2     |D:\DOSYALAR\2021-07\0012090829\   |GIB-0012090829-202107-KB-000000.zip   |  KB |  202107 | 000000 
  --       3     |D:\DOSYALAR\2021-08\16270558554\  |GIB-16270558554-202108-YB-000000.zip  |  YB |  202108 | 000000 
  --       4     |D:\DOSYALAR\2021-08\35345023780\  |35345023780-202108-K-000000.zip       |  K  |  202108 | 000000 
  --       5     |D:\DOSYALAR\2021-08\9920362895\   |GIB-9920362895-202108-YB-000000.zip   |  YB |  202108 | 000000 
  --       6     |D:\DOSYALAR\2021-08\3310378950\   |3310378950-202108-K-000001.zip        |  K  |  202108 | 000001 

  -- ------------------------------------------------------------------------------------------------

Please note: On a rather large dataset, this query will get the job done, but it will not be fast. For speed, nothing will beat the UDF suggestion made in another post.

Good luck!

2021-12-23
23:17 Reply: Select entries based on date/time plus one before and one after (artifact: a880647268 user: cuz)

This one is probably fractionally slower, but it does not require the row ids to be contiguous, so even if there are deletions from the logtable, this should still work correctly:

  -- ================================================================================================

CREATE TABLE logtable(
  logtime NUMERIC NOT NULL UNIQUE,
  logdata TEXT
);

INSERT INTO logtable(logtime, logdata) VALUES
 ('2021-12-09 15:00:00', 'This log is earlier on the previous day - Should be ignored')
,('2021-12-09 22:00:00', 'This log is later on the previous day - Should be ignored')
,('2021-12-09 23:00:00', 'This log is the last log on the previous day, report should start here')
,('2021-12-10 01:00:00', 'This log is the earliest on the report day - Include it')
,('2021-12-10 12:00:00', 'This log is the middle on the report day - Include it')
,('2021-12-10 22:00:00', 'This log is the latest on the report day - Include it')
,('2021-12-11 01:00:00', 'This log is the earliest on the day after, the report should end here')
,('2021-12-11 05:00:00', 'This log is another log on the day after - Should be ignored')
;


WITH rangemin(timeID) AS (
    SELECT MAX(rowid) FROM logtable WHERE logtime <  '2021-12-10 00:00:00'
),   rangemax(timeID) AS (
    SELECT MIN(rowid) FROM logtable WHERE logtime >= '2021-12-11 00:00:00'
)
SELECT logtable.*
  FROM logtable, rangemin, rangemax
 WHERE logtable.rowid BETWEEN rangemin.timeID AND rangemax.timeID
;

  -- logtime              |logdata                                                                 
  -- ---------------------|------------------------------------------------------------------------
  -- 2021-12-09 23:00:00  |This log is the last log on the previous day, report should start here  
  -- 2021-12-10 01:00:00  |This log is the earliest on the report day - Include it                 
  -- 2021-12-10 12:00:00  |This log is the middle on the report day - Include it                   
  -- 2021-12-10 22:00:00  |This log is the latest on the report day - Include it                   
  -- 2021-12-11 01:00:00  |This log is the earliest on the day after, the report should end here   

  -- ------------------------------------------------------------------------------------------------
23:09 Reply: Select entries based on date/time plus one before and one after (artifact: 6e9cf2e735 user: cuz)

This assumes that the row-ids are contiguous (i.e. no deletes ever happen), and that we are looking for all the logs on 10 December 2021 (plus, of course, the last log before and the first log after that):

  -- SQLite version 3.35.4  [ Release: 2021-04-02 ]  on  SQLitespeed version 2.1.3.11.
  -- ================================================================================================

CREATE TABLE logtable(
  logtime NUMERIC NOT NULL UNIQUE,
  logdata TEXT
);

INSERT INTO logtable(logtime, logdata) VALUES
 ('2021-12-09 15:00:00', 'This log is earlier on the previous day - Should be ignored')
,('2021-12-09 22:00:00', 'This log is later on the previous day - Should be ignored')
,('2021-12-09 23:00:00', 'This log is the last log on the previous day, report should start here')
,('2021-12-10 01:00:00', 'This log is the earliest on the report day - Include it')
,('2021-12-10 12:00:00', 'This log is the middle on the report day - Include it')
,('2021-12-10 22:00:00', 'This log is the latest on the report day - Include it')
,('2021-12-11 01:00:00', 'This log is the earliest on the day after, the report should end here')
,('2021-12-11 05:00:00', 'This log is another log on the day after - Should be ignored')
;



WITH rowrange(minTimeID, maxTimeID) AS (
    SELECT MIN(rowid)-1, MAX(rowid)+1
      FROM logtable
     WHERE logtime >= '2021-12-10 00:00:00' AND logtime < '2021-12-11 00:00:00'
)
SELECT *
  FROM logtable, rowrange
 WHERE logtable.rowid BETWEEN rowrange.minTimeID AND rowrange.maxTimeID
;

  -- logtime              |logdata                                                                 
  -- ---------------------|------------------------------------------------------------------------
  -- 2021-12-09 23:00:00  |This log is the last log on the previous day, report should start here  
  -- 2021-12-10 01:00:00  |This log is the earliest on the report day - Include it                 
  -- 2021-12-10 12:00:00  |This log is the middle on the report day - Include it                   
  -- 2021-12-10 22:00:00  |This log is the latest on the report day - Include it                   
  -- 2021-12-11 01:00:00  |This log is the earliest on the day after, the report should end here   

  -- ------------------------------------------------------------------------------------------------


2021-12-13
14:08 Reply: Importing XML (artifact: e803944497 user: cuz)

Indeed, I agree with little-brother and Simon, with the small added mention that it seems to me your export is wrapped in XML, which is not itself the standard way to do so, and so no real mainstream single-unroll method for all of it exists.

You have one standard (vCard) rolled into another standard (XML), so it is best to use a two-pronged approach: unroll the one (very standard XML) and then the other (very standard vCard) using any of the plentiful tools that exist for each.

2021-12-11
15:21 Reply: Is an Installtion reqired on IIS ? (artifact: be25b218c1 user: cuz)

The unfortunate answer is: Maybe, probably.

We do not, however, know the specific answer: this is the SQLite forum, but SQLite is just an engine - how it is used in different systems is up to that system.

There is always a chance someone here may have used it in the same way and can assist, but you would probably get more/better help from a Plesk/IIS forum.

15:16 Reply: UPSERT but know what is inserted and what is updated (artifact: b0ff6e8636 user: cuz)

There are two ways of achieving that - using IFNULL(A, B), which returns A unless it is NULL, in which case it returns B; or COALESCE(A, B, C, ...), which returns the first non-NULL value (A or B or C or ... etc.) from the left.

See both demonstrated in your query here:

  -- SQLite version 3.35.4  [ Release: 2021-04-02 ]  on  SQLitespeed version 2.1.3.11.
  -- ================================================================================================

drop table if exists new_test;


CREATE TABLE new_test ( 
	id INTEGER NOT NULL,
	fld_text VARCHAR(20) NOT NULL,
	fld_int1 integer,
	fld_int2 integer,
	PRIMARY KEY (id) 
);

create unique index new_test_idx on
new_test (fld_text);

insert into new_test 
(fld_text,	fld_int1, fld_int2)
values 
('A',1, 1), 
('B',2, 2), 
('C',3, 2);

select * from new_test;


  --      id     |fld_text|  fld_int1  |fld_int2
  -- ------------|--------|------------|--------
  --       1     |    A   |      1     |    1   
  --       2     |    B   |      2     |    2   
  --       3     |    C   |      3     |    2   

INSERT INTO new_test
(fld_text, fld_int1, fld_int2)
VALUES
('B',null, 20),
('C',300, null),
('D',4, 4)
ON CONFLICT(fld_text) DO UPDATE SET
	fld_int1=COALESCE(excluded.fld_int1,fld_int1),
	fld_int2=COALESCE(excluded.fld_int2,fld_int2)
WHERE 	(excluded.fld_int1 is not null and fld_int1<>excluded.fld_int1)
or 		(excluded.fld_int2 is not null and fld_int2<>excluded.fld_int2)
;

select * from new_test;


  --      id     |fld_text|fld_int1|fld_int2
  -- ------------|--------|--------|--------
  --       1     |    A   |    1   |    1   
  --       2     |    B   |    2   |   20   
  --       3     |    C   |   300  |    2   
  --       4     |    D   |    4   |    4   

INSERT INTO new_test
(fld_text, fld_int1, fld_int2)
VALUES
('B',null, 21),
('C',301, null),
('D',5, 5)
ON CONFLICT(fld_text) DO UPDATE SET
	fld_int1=IFNULL(excluded.fld_int1,fld_int1),
	fld_int2=IFNULL(excluded.fld_int2,fld_int2)
WHERE 	(excluded.fld_int1 is not null and fld_int1<>excluded.fld_int1)
or 		(excluded.fld_int2 is not null and fld_int2<>excluded.fld_int2)
;

select * from new_test;


  --      id     |fld_text|fld_int1|fld_int2
  -- ------------|--------|--------|--------
  --       1     |    A   |    1   |    1   
  --       2     |    B   |    2   |   21   
  --       3     |    C   |   301  |    2   
  --       4     |    D   |    5   |    5   

2021-12-10
14:19 Reply: Sqlite3 can happend deadlock? (artifact: 8ce61e0739 user: cuz)

... so people wouldn't try to be a psycho to guess what is wrong with your code.

I too feel that reading other people's code too much really saps one's empathy. :)

2021-12-07
18:44 Reply: SEE release history (artifact: 74a82c9d54 user: cuz)

It is relatively well-known here. I've not used it outside of test-benches, since most of our projects use SQLite for rapid storage and caches for data that is either already encrypted or simply unimportant, but from my tests and what I've seen said about it, it seems quite good.

The real luxury of SEE itself lies of course in the fact that it is made by the SQLite devs: it evolves with SQLite, promises longevity, is feature-intertwined with it, and enjoys the same full testing as SQLite itself - and, most importantly, if your project has any bit of real-world size/importance, it comes with full support from people who know both SQLite and SEE inside-out. That's worth paying for.

It also sports a forum, just like this one, where you can even get help from others, albeit from a slightly smaller number of patrons.

Otherwise, if you just want something simple that works and do not need any serious guarantees, SQLCipher will do you just fine.

13:25 Reply: SEE release history (artifact: cef114448c user: cuz)

It seems like an impossible thing to google.

Really?

It's the SQLite Encryption Extension, a commercial addition to SQLite maintained by the SQLite developers that makes any SQLite DB work as-is but with all data safely encrypted.

My very first Google attempt - https://www.google.com/search?q=sqlite+SEE - hits these at the top:

www.sqlite.org/see/doc/release/www/index.wiki

and

sqlite.org/purchase/see

Are you Googling from China perhaps?

00:41 Reply: Venting about inconvenient limitations, feel free to ignore (artifact: 431fe72b25 user: cuz)

...except that the temp table in question contains a list of pragmas to be applied to the main database, one by one.

I believe that is a most functional example of what people mean when they say "You can't have your cake and eat it". :)

2021-12-02
17:42 Reply: Linking tables (artifact: 60881ff48b user: cuz)

Now that I have insert values, and on looking closer, it's clear you have circular references.

Not sure how the class on foreign keys went at your establishment, but it's a one-way street: if table X has a foreign-key reference to table Y, then table Y cannot ALSO have a foreign-key reference to table X, else a race condition of sorts is created. In other words, if I want to insert something into table X, a corresponding key value must already exist in table Y, and I cannot insert it into table Y first because its foreign-key constraint requires the value to already exist in table X - which is circular.

Now you can work around this in two ways: disabling foreign-key checking before inserting, or inserting NULL values for the columns first and then updating them later once values exist in the other table. Either workaround, however, fully negates the entire point of foreign keys and will make your code horribly botched.

Pick the table which is the boss of the values, then make a reference to it from the table which references/is interested in/consumes the values - and not the other way around. If I remove all the FK statements from your data tables while leaving all of them in the consuming main table, then it all works fine.

See here:

  -- SQLite version 3.35.4  [ Release: 2021-04-02 ]  on  SQLitespeed version 2.1.3.11.
  -- ================================================================================================

PRAGMA foreign_keys = 1;

CREATE TABLE IF NOT EXISTS einfAnforderung(
	EinfAnfId INTEGER PRIMARY KEY,
	Leistungen TEXT,
	Unterstuetzung TEXT,
	Durchfuehrung TEXT,
	Doku TEXT,
	LastId INTEGER
);

CREATE TABLE IF NOT EXISTS sysAnforderung(
	sysAnfordId INTEGER PRIMARY KEY,
	ZielgruppEx TEXT,
	ZielgruppIn TEXT,
	Hardware TEXT,
	Funktionen TEXT,
	Daten TEXT,
	Ausgaben TEXT,
	Schnittstellen TEXT,
	Nutzeranford TEXT,
	Systemanford TEXT,
	Weitereanford TEXT,
	LastId INTEGER
);

CREATE TABLE IF NOT EXISTS istZustand(
	IstId INTEGER PRIMARY KEY,
	Unetz TEXT,
	Uhard TEXT,
	Ubs TEXT,
	Uanwend TEXT,
	Uschnitt TEXT,
	Uunterschied TEXT,
	Ugp TEXT,
	Uprozessd TEXT,
	LastId INTEGER
);

CREATE TABLE IF NOT EXISTS ziel(
	ZielId INTEGER PRIMARY KEY,
	Anlass TEXT,
	Beschreibung TEXT,
	Ziele TEXT,
	Start DATE,
	Ende DATE,
	Sonstiges TEXT,
	LastId INTEGER
);

CREATE TABLE IF NOT EXISTS unternehmen (
	UId INTEGER AUTO_INCREMENT PRIMARY KEY,
	Uname TEXT,
	Uadresse TEXT,
	TochterU TEXT,
	Utelefonnr TEXT,
	Ufax TEXT,
	Uweb TEXT,
	Ugroesse TEXT,
	Uprognose TEXT,
	Ugruendung TEXT,
	Ubranche TEXT,
	UprodBeschr TEXT,
	Uanlage BLOB,
	Usonstiges TEXT,
	LastId INTEGER
);

CREATE TABLE IF NOT EXISTS lastenheft (
	LastId INTEGER AUTO_INCREMENT PRIMARY KEY,
	LastName TEXT,
	UId INTEGER,
	ZielId INTEGER,
	IstId Integer,
	sysAnfordId INTEGER,
	EinfAnfId INTEGER,
	FOREIGN KEY(UId) REFERENCES unternehmen(UId),
	FOREIGN KEY(UId) REFERENCES ziel(ZielId),
	FOREIGN KEY(UId) REFERENCES istZustand(IstId),
	FOREIGN KEY(UId) REFERENCES sysAnforderung(sysAnfordId),
        FOREIGN KEY(UId) REFERENCES einfAnforderung(EinfAnfId)
);

INSERT INTO unternehmen VALUES(NULL, 2, 'hello', 'j', 'j', 'k', 'j', 'k', 'm', 'j', 'kd', '2', 'j', 'j', 'j');


SELECT * FROM unternehmen;

  --       |            |        |        |Utelefon-|     |     |        |         |Ugruendu-|        |            |       |Usonstig-|      
  -- UId   |    Uname   |Uadresse|TochterU|    nr   | Ufax| Uweb|Ugroesse|Uprognose|    ng   |Ubranche| UprodBeschr|Uanlage|    es   |LastId
  -- ------|------------|--------|--------|---------|-----|-----|--------|---------|---------|--------|------------|-------|---------|------
  -- NULL  |      2     |hello   |    j   |    j    |  k  |  j  |    k   |    m    |    j    |   kd   |      2     |   j   |    j    |   j  



  -- ------------------------------------------------------------------------------------------------

  -- 2021-12-02 19:36:24.084  |  [Success]    Script Success.

Also, I don't know if your programming language cares about it, but you have dangling commas at the end of your call and not enough values to match the insert prototype. You have to check that very well.

Perhaps "try" the SQL outside of the code first making sure it works and then try to adapt it to code. The other way madness lies.

Let us know if there are any other questions.

16:33 Reply: Linking tables (artifact: afaea8cb7d user: cuz)

This looks all fine in principle.

Your table definition with the FK definitions needs to be the last statement called though; you cannot have it succeed when the referenced tables do not exist yet.

As to the problem of the original question: now that we know the schema, can you show an example of a set of values you would insert, and what you would expect to be in the different tables after that insert vs. what you actually see there? (Just two or three inserts will do.)

16:03 Reply: In perspective of disk usage, is the sizeof 'a' < INT ? (artifact: ca8474171b user: cuz)

It is always worth using foreign keys for columns where the values are not unique. It almost always saves space if used with an integer foreign key, but there is more to foreign keys.

  • A Foreign Key is a constraint; it has functionality and can facilitate auto-updating, auto-deletion and the like.
  • A Foreign Key provides a way to dereference a value, and if that value needs to change (such as wanting to use 'Video' instead of 'video'), then you need to change only 1 item in 1 table.
  • A Foreign Key provides data integrity, avoiding such things as misspelled entries or finger errors (since only listed values are accepted, or if not constrained, you can still easily see and rectify the wrong ones).

Having the smaller space footprint is just the cherry on top.

15:55 Reply: Linking tables (artifact: 1743aa7248 user: cuz)

All we can tell from your description above is: Yep, you probably have a problem.

If you want more help than that, please show the exact schema SQL for making the tables. That should already make the problem clear, but just in case it is more obscure, posting the data (or some of it) for each table in the form of SQL INSERT statements will also help.

12:48 Edit reply: Howto group most similar values (artifact: f454a5993d user: cuz)

Nope, that works well - other than adding the DISTINCT and not showing the initial ID and Value, which would prevent the lines with id 4 and id 7 (which are essentially duplicates) from both being shown.

12:47 Reply: Howto group most similar values (artifact: 216fe84a94 user: cuz)

Nope, that works well - other than adding the DISTINCT and not showing the initial ID, which would prevent the lines with id 4 and id 7 (which are essentially duplicates) from both being shown.

11:06 Reply: Howto group most similar values (artifact: 3ec5cc868e user: cuz)

If you need to internalize this and the CTE is in your way, here it is without the CTE and the extra fields, using a sub-query:

SELECT DISTINCT values_in_group, values_average
   FROM (
   	SELECT COUNT(*) AS values_in_group, AVG(B.Value) AS values_average
	  FROM Sampled AS A
	  JOIN Sampled AS B ON (ABS(B.Value-A.Value)/NULLIF(A.Value,0)) < 0.05
	 GROUP BY A.ID
	) AS avgs
  ORDER BY values_in_group DESC
;


  -- values_- |                    
  -- in_group |      values_average
  -- ---------|--------------------
  --     3    |    20.2333333333333
  --     1    |                25.1
  --     1    |               100.2

10:52 Edit reply: Howto group most similar values (artifact: 32190ab06f user: cuz)

I think Gunter's point was more about questioning the mathematical correctness of your request than your solution. To make his point clear, see my solution again here, but with a lot more "similar" values - is this really what you expect to see?

  -- SQLite version 3.35.4  [ Release: 2021-04-02 ]  on  SQLitespeed version 2.1.3.11.
  -- ================================================================================================

CREATE TABLE Sampled(ID INTEGER PRIMARY KEY AUTOINCREMENT, Value REAL);

INSERT INTO Sampled(ID, Value) VALUES
 ( 1,  20.1)
,( 2,  20.4)
,( 3,  25.1)
,( 4, 100.2)
,( 5,  20.2)
,( 6,   0.0)
,( 7,  20.0)
,( 8,  20.1)
,( 9,  20.2)
,(10,  20.3)
,(11,  20.4)
,(12,  20.5)
,(13,  20.6)
,(14,  20.7)
,(15,  20.8)
,(16,  20.9)
,(17,  21.0)
,(18,  21.1)
,(19,  21.2)
,(20,  21.3)
,(21,  21.4)
,(22,  21.5)
,(23,  21.6)
;

WITH avgs(values_in_group, values_average, values_min, values_max) AS (
	SELECT COUNT(*), AVG(B.Value), MIN(B.Value), MAX(B.Value)
	  FROM Sampled AS A
	  JOIN Sampled AS B ON (ABS(B.Value-A.Value)/NULLIF(A.Value,0)) < 0.05
	 GROUP BY A.ID
 )
 SELECT DISTINCT values_in_group, values_average, values_min, values_max
   FROM avgs
  ORDER BY values_in_group DESC
;


  -- values_- |                    |values_- |values_- 
  -- in_group |      values_average|      min|      max
  -- ---------|--------------------|---------|---------
  --     20   |              20.715|     20.0|     21.6
  --     19   |    20.6684210526316|     20.0|     21.5
  --     19   |    20.7526315789474|     20.1|     21.6
  --     18   |    20.6222222222222|     20.0|     21.4
  --     17   |    20.5764705882353|     20.0|     21.3
  --     17   |    20.8294117647059|     20.2|     21.6
  --     16   |            20.53125|     20.0|     21.2
  --     15   |    20.4866666666667|     20.0|     21.1
  --     15   |    20.9133333333333|     20.3|     21.6
  --     14   |    20.9571428571429|     20.4|     21.6
  --     13   |                20.4|     20.0|     20.9
  --     12   |               21.05|     20.5|     21.6
  --     11   |                21.1|     20.6|     21.6
  --     1    |                25.1|     25.1|     25.1
  --     1    |               100.2|    100.2|    100.2

Some values are similar (within 5%) to values above them and below them, but not to values more than 5% away, so a lot of small groupings then form, containing values from multiple sets of "close" values.

A better way to do this is to predefine the categories (say 1, 1.5, 2, 2.5, etc.) and then see how many values are "close" to each predefined category. If you want that, say so and we can show how to do that too.
