SQLite Forum

50 forum posts by user jicman occurring on or before 2021-03-16 18:30:15.

2021-03-16
18:30 Post: Missing records from the IN list (artifact: d1d89ca85f user: jicman)

Greetings.

The following query is missing 3 records. I know it's probably because the LEFT JOIN does not have those records, but I added a COALESCE to account for them. Any ideas on how to get this to work?

SELECT a.ProjID, COALESCE(sum(b.AnnualDossier), 0.00) AS Dossier FROM Project_List AS a 
LEFT JOIN ABT_Budget AS b ON a.ProjID = b.ProjID
AND
(
   b.InsertDate = 
   (
       SELECT MAX(InsertDate) FROM ABT_Budget WHERE b.ProjID = ProjID
   )
   OR
       b.InsertDate IS Null
)
WHERE a.ProjID IN 
(
   SELECT ProjID FROM Project_List WHERE 1=1 
  AND
  (
     a.ProjID IN ('PR0000019149','PR0000018256','PR0000018262','PR0000019185','PR0000019187')
  )  
  AND InsertDate = 
  ( 
     SELECT MAX(InsertDate) FROM Project_List WHERE 1=1 
     AND 
     (
        a.ProjID IN ('PR0000019149','PR0000018256','PR0000018262','PR0000019185','PR0000019187')
     )   
  ) 
)
AND a.InsertDate = 
      (SELECT MAX(InsertDate) FROM Project_List WHERE ProjID = a.ProjID) 
  GROUP BY a.ProjID;

PR0000018256|0.0
PR0000018262|0.0
sqlite>

Thanks for your support.

josé
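
A minimal sketch of one possible rewrite, assuming the intent is "the latest Project_List row per ProjID in the list, plus the sum of its latest ABT_Budget rows, defaulting to 0.00". The nested MAX(InsertDate) inside the IN subquery above is not correlated to any particular ProjID (its only filter, a.ProjID IN (...), refers to the outer row), so it returns the single latest InsertDate in the whole table and silently drops any project whose most recent row is older than that. The sketch below also drops the b.InsertDate IS NULL branch, which may or may not match the original intent:

SELECT a.ProjID, COALESCE(SUM(b.AnnualDossier), 0.00) AS Dossier
  FROM Project_List AS a
  LEFT JOIN ABT_Budget AS b
         ON a.ProjID = b.ProjID
        AND b.InsertDate = (SELECT MAX(InsertDate) FROM ABT_Budget WHERE ProjID = a.ProjID)
 WHERE a.ProjID IN ('PR0000019149','PR0000018256','PR0000018262','PR0000019185','PR0000019187')
   AND a.InsertDate = (SELECT MAX(InsertDate) FROM Project_List WHERE ProjID = a.ProjID)
 GROUP BY a.ProjID;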

2021-02-19
20:50 Reply: Help understanding LEFT JOIN (artifact: c9a32c9837 user: jicman)

Keith, in my country we use the phrase "tú eres un toro!", which translates to "you are a bull!". In every sense of the statement it is an exaltation of the person it refers to, in this case of your knowledge. So, in other words, you're amazing. :-) Thanks.

19:27 Post: Help understanding LEFT JOIN (artifact: 518e55ac85 user: jicman)

Apologies for the long post, but the scenario is a bit complicated (to me :-)). I had to create these tables to show the problem I am having in a system I created.

What I am trying to do is to get a series of LEFT JOINs to bring me the correct response. Please imagine the following:

create table zMList (id, pid, a, yyyy, c, d, idate);
insert into zMList values (1,'p001', 10, 2019, 'n', 4, '2019-02-11');
insert into zMList values (2,'p002', 25, 2019, 'n', 4, '2019-02-11');
insert into zMList values (3,'', 32, 2019, 'n', 4, '2019-02-11');
insert into zMList values (4,'p004', 64, 2019, 'y', 4, '2019-02-11');
insert into zMList values (5,'p005', 35, 2019, 'y', 4, '2019-02-11');
insert into zMList values (1,'p001', 10, 2020, 'n', 4, '2019-02-12');
insert into zMList values (2,'p002', 2, 2019, 'n', 4, '2019-02-12');
insert into zMList values (3,'', 13, 2019, 'y', 4, '2019-02-12');
insert into zMList values (4,'p004', 44, 2019, 'y', 4, '2019-02-12');
insert into zMList values (1,'p001', 10, 2020, 'n', 4, '2019-02-13');
insert into zMList values (2,'p002', 82, 2019, 'n', 4, '2019-02-13');
insert into zMList values (3,'', 93, 2020, 'y', 4, '2019-02-13');
insert into zMList values (4,'p004', 45, 2020, 'n', 4, '2019-02-13');
insert into zMList values (5,'p005',75, 2020, 'y', 8, '2019-02-13');

create table zProjs (id, pid, g, h, i, j, idate);
insert into zProjs values (1,'p001', 1, 4, 'n', 4, '2019-02-11');
insert into zProjs values (2,'p002', 2, 3, 'n', 4, '2019-02-11');
insert into zProjs values (4,'p004', 4, 5, 'y', 4, '2019-02-11');
insert into zProjs values (5,'p005', 5, 3, 'y', 4, '2019-02-11');

create table zImport (id, nn, yyyy, c, d, idate);
insert into zImport values (1, 1, 2019, 'n', 4, '2019-02-11');
insert into zImport values (2, 7, 2019, 'n', 4, '2019-02-11');
insert into zImport values (4, 4, 2019, 'y', 4, '2019-02-11');
insert into zImport values (5, 5, 2019, 'y', 4, '2019-02-11');
insert into zImport values (1, 10, 2020, 'n', 4, '2019-02-12');
insert into zImport values (2, 2, 2019, 'n', 4, '2019-02-12');
insert into zImport values (4, 4, 2019, 'y', 4, '2019-02-12');
insert into zImport values (1, 10, 2020, 'n', 4, '2019-02-13');
insert into zImport values (2, 6, 2019, 'n', 4, '2019-02-13');
insert into zImport values (4, 9, 2020, 'n', 4, '2019-02-13');
insert into zImport values (5, 8, 2020, 'y', 8, '2019-02-13');

create table zdocs (id, dn, link, idate);
insert into zdocs values (1,'p001.xls', 'http://xls.com/p001.xls', '2019-02-11');
insert into zdocs values (1,'p001-a.xls', 'http://xls.com/p001a.xls', '2019-02-12');
insert into zdocs values (1,'p001-b.xls', 'http://xls.com/p001b.xls', '2019-02-13');
insert into zdocs values (4,'p004a.xls', 'http://xls.com/p003a.xls', '2019-02-22');
insert into zdocs values (4,'p004b.xls', 'http://xls.com/p003b.xls', '2019-02-23');
insert into zdocs values (5,'p005.xls', 'http://xls.com/p005.xls', '2019-02-11');

create table t2 (pid, WYear, co, amt, indate);
insert into t2 values ('p001', 2019,'aa', 100.0, '2019-02-13');
insert into t2 values ('p001', 2019,'ab', 100.0, '2019-02-13');
insert into t2 values ('p001', 2019,'ac', 100.0, '2019-02-13');
insert into t2 values ('p004', 2019,'d', 100.0, '2019-02-13');
insert into t2 values ('p002', 2020,'c', 100.0, '2019-02-13');
insert into t2 values ('p005', 2020,'a', 100.0, '2019-02-13');
insert into t2 values ('p005', 2020,'a', 100.0, '2019-02-13');
insert into t2 values ('p001', 2020,'aa', 100.0, '2019-02-14');
insert into t2 values ('p001', 2020,'ab', 100.0, '2019-02-14');
insert into t2 values ('p001', 2020,'ac', 100.0, '2019-02-14');

When I run this SELECT,

SELECT a.id, a.pid, a.yyyy, b.i, c.nn, d.dn, sum(e.amt)
  FROM zMList a
  LEFT JOIN zProjs b ON a.id = b.id
  LEFT JOIN zImport c ON a.id = c.id
  LEFT JOIN zdocs d ON a.id = d.id
  LEFT JOIN t2 e ON a.pid = e.pid
 WHERE a.yyyy = 2020
   AND a.idate = (SELECT MAX(idate) from zMList where id = a.id)
   AND c.idate = (SELECT MAX(idate) from zImport where id = c.id)
   AND d.idate = (SELECT MAX(idate) from zdocs where id = d.id)
   AND e.indate = (SELECT MAX(indate) from t2 where pid = e.pid)
 GROUP BY a.pid;

I get,

1|p001|2020|n|10|p001-b.xls|300.0
4|p004|2020|y|9|p004b.xls|100.0
5|p005|2020|y|8|p005.xls|200.0

and I am expecting,

1|p001|2020|n|10|p001-b.xls|300.0
3||2020||||
4|p004|2020|y|9|p004b.xls|100.0
5|p005|2020|y|8|p005.xls|200.0

I have tried a bunch of other SELECTs to get it to show, but only the first list with 3 records comes out. I thought that if the first table listed in the FROM clause contained all the records I want to display, and I LEFT JOIN'ed it with the other tables, all the records from the first table would be displayed, with empty values for the columns from the other tables. Any thoughts? Thanks.

josé
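
For the record, a sketch of why row 3 disappears (standard LEFT JOIN behaviour, not specific to this schema): conditions like c.idate = (...), d.idate = (...), and e.indate = (...) sit in the WHERE clause, which runs after the joins, and they can never be true when the joined row came back as all NULLs, so the NULL-extended zMList rows are thrown away. Keeping those filters inside the ON clauses preserves them:

SELECT a.id, a.pid, a.yyyy, b.i, c.nn, d.dn, SUM(e.amt)
  FROM zMList a
  LEFT JOIN zProjs  b ON a.id = b.id
  LEFT JOIN zImport c ON a.id = c.id
                     AND c.idate = (SELECT MAX(idate) FROM zImport WHERE id = c.id)
  LEFT JOIN zdocs   d ON a.id = d.id
                     AND d.idate = (SELECT MAX(idate) FROM zdocs WHERE id = d.id)
  LEFT JOIN t2      e ON a.pid = e.pid
                     AND e.indate = (SELECT MAX(indate) FROM t2 WHERE pid = e.pid)
 WHERE a.yyyy = 2020
   AND a.idate = (SELECT MAX(idate) FROM zMList WHERE id = a.id)
 GROUP BY a.pid;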

2021-01-05
16:06 Reply: Query's first run, slow... (artifact: 90ae78e1f5 user: jicman)

Thanks.

14:32 Post: Query's first run, slow... (artifact: fc1209eb59 user: jicman)

Greetings!

Most times, the first time I run a query it runs slowly, but after running the same query again it is really fast. This has probably been discussed here before, and I may even have read it, but I can't remember why. For example,

SELECT ProjID, CID, Project_Name, Start_date,
       Finish_Date, Target_Go_Live_Date, 
       BL_Start,BL_Finish, Ann_CapexP, Tot_CapexP, 
       Progress, Status, Project_Manager       
       FROM Project_List a WHERE 
  a.PMO_Board_Report != 'No' 
AND
(
   (
     a.Target_Go_Live_Date > substr(date('now'),1,4) || '-01-15'
   )
   OR
   (
     a.Target_Go_Live_Date = '' AND a.Finish_Date >  substr(date('now'),1,4) || '-01-15'
   )
)
AND
  a.InsertDate = 
  (
    SELECT MAX(InsertDate) FROM Project_List WHERE ProjID = a.ProjID
  )
group BY a.ProjID
;

timer result is:
Run Time: real 4.195 user 0.125000 sys 0.750000

If I run this right back, this is the timer result:
Run Time: real 0.834 user 0.093750 sys 0.218750

and again,
Run Time: real 0.764 user 0.140625 sys 0.156250

once more,
Run Time: real 0.721 user 0.125000 sys 0.203125

Why is that again? Thanks.

2020-12-18
15:26 Reply: 'ON DELETE CASCADE' seems to not be working (artifact: 659ed3100d user: jicman)

is a doubly redundant redundancy...

Man, that is funny. A classic Keith.

2020-12-17
15:03 Reply: Thanks for SQLite version 3.34.0 (artifact: 5f8bc8b10a user: jicman)

I don't understand why 3.34.0 is so much faster for you. There were improvements, of course, but by our measurements, 3.34.0 is only a small percentage faster than 3.33.0. Nevertheless, I'm glad it is working out well for you.

So, you're right. The same weekend that I upgraded my DLL from 3.33.0 to 3.34.0, there was an upgrade on the server where the shared SQLite DB lives, and somehow that made the difference in speed. I tried each version (3.33.0 and 3.34.0) 10 times, and the timings were all pretty close. So, although I am very happy with the new version, the speed-up I noticed was caused by a change on the server. :-)

2020-12-15
19:37 Reply: Thanks for SQLite version 3.34.0 (artifact: 30e6b733ab user: jicman)

I am just as surprised. Let me research this a bit and come back to you. Thanks.

18:59 Reply: Thanks for SQLite version 3.34.0 (artifact: 4d38f309b9 user: jicman)

Loading records is also so much faster. I don't have a number because I was not paying attention to the importing piece, but I just noticed it is about 4 times faster. Thanks, and thanks.

josé

2020-12-14
21:29 Post: Thanks for SQLite version 3.34.0 (artifact: 7ef3cd5fe6 user: jicman)

I wanted to say thanks to the SQLite team for releasing v3.34.0. This version has sped up the opening of my application from 11+ seconds to less than 5 seconds, and that is on a shared drive. I appreciate your work.

josé

2020-11-30
14:51 Reply: Use of `.excel` under Cygwin (artifact: 63c93cdf1a user: jicman)

Perhaps use \\ instead of a single \?

>>Unable to start 'C:cygwin64tmpetilqs_VRZheZvmh6MvCTU.csv':...

>>Failed: [xdg-open C:\cygwin64\tmp\etilqs_VRZheZvmh6MvCTU.csv]

try 

C:\\cygwin64\\tmp\\etilqs_VRZheZvmh6MvCTU.csv

or better yet, use the cygdrive option

/cygdrive/c/cygwin64/tmp/etilqs_VRZheZvmh6MvCTU.csv

2020-11-24
20:03 Post: Sqlite site link suggestion: "Coming soon!" (artifact: aadd190cec user: jicman)

Greetings!

Dr. Hipp, I find myself always looking ahead at what is coming in the next version of SQLite. I end up typing /draft after the site URL, and then having to take it back to the "real" site by deleting "/draft/index.html". Perhaps there should be a link to the right of "Purchase" called "Coming soon", which would simply change the link to https://www.sqlite.org/draft/index.html, and then another one called "Back to reality" that would take it back to www.sqlite.org. I know typing /draft takes nothing more than 2 seconds, but just having those links there makes so much sense to me. :-) Flame away. :-) Thanks.

josé

PS: The names of the links are just suggestions. I am sure there are better terms for them.

2020-09-16
20:01 Reply: Error: no such column: x (artifact: bd14e3aab1 user: jicman)

Thanks, Keith.

18:25 Reply: Error: no such column: x (artifact: ec21423792 user: jicman)

Thanks, David. Long story short, my data keeps growing, and some of the queries that were fast at the beginning of the year are becoming slower and slower. I am trying to speed them up with the novice SQL talent I possess. So, this query,

SELECT a.*,sum(b.AnnualDossier) as Dossier FROM Project_List AS a
LEFT JOIN ABT_Budget AS b ON a.ProjID = b.ProjID
AND b.InsertDate = (SELECT MAX(InsertDate) FROM ABT_Budget WHERE b.ProjID = ProjID)
WHERE a.ProjID IN
(
  SELECT ProjID FROM Project_List WHERE 1=1
  AND PMO_Board_Report != 'No'
  AND
  (
    ( Target_Go_Live_Date > substr(date('now'),1,4) || '-01-15' )
    OR
    ( Target_Go_Live_Date = '' AND Finish_Date > substr(date('now'),1,4) || '-01-15' )
  )
  AND InsertDate =
  (
    SELECT MAX(InsertDate) FROM Project_List WHERE 1=1
    AND PMO_Board_Report != 'No'
    AND
    (
      ( Target_Go_Live_Date > substr(date('now'),1,4) || '-01-15' )
      OR
      ( Target_Go_Live_Date = '' AND Finish_Date > substr(date('now'),1,4) || '-01-15' )
    )
  )
)
AND a.InsertDate = ( SELECT MAX(InsertDate) FROM Project_List WHERE a.ProjID = ProjID )
group BY a.ProjID;

takes about 12-20 seconds, depending:

Run Time: real 12.539 user 10.765625 sys 0.468750

I am trying to speed it up by following Keith's suggestion, which is to get the ProjID and both InsertDates ahead of time and use their values in the query. Using one of Keith's examples, I was able to reduce the response time of another query from 30 seconds to less than 2 seconds. So, I was trying to follow the same idea to fix the query above, and I failed miserably. I will go back to the drawing board, re-do this one, and see if I can get the same speed with the same output.

16:11 Reply: Error: no such column: x (artifact: bedc95ae84 user: jicman)

Thanks, Keith. I am trying this one that you provided with another select,

sqlite> with maxes (ProjID, pl_insert, ab_insert)
   ...> as (
   ...>   select ProjID,
   ...>          (select max(InsertDate) from Project_List where ProjID == o.ProjID) as pl_insert,
   ...>          (select max(InsertDate) from ABT_Budget where ProjID == o.ProjID) as ab_insert
   ...>     from (
   ...>           select distinct ProjID
   ...>             from Project_List
   ...>          ) as o
   ...> )
   ...> SELECT a.*,sum(b.AnnualDossier) as Dossier FROM Project_List AS a
   ...> LEFT JOIN ABT_Budget AS b ON a.ProjID = b.ProjID
   ...> AND
   ...> b.InsertDate = o.ab_insert
   ...> AND
   ...> a.PMO_Board_Report != 'No'
   ...> AND
   ...> (
   ...>   (
   ...>     a.Target_Go_Live_Date > substr(date('now'),1,4) || '-01-15'
   ...>   )
   ...>   OR
   ...>   (
   ...>     a.Target_Go_Live_Date = '' AND a.Finish_Date > substr(date('now'),1,4) || '-01-15'
   ...>   )
   ...> )
   ...> AND
   ...> a.InsertDate = o.pl_insert
   ...> WHERE a.ProjID = b.ProjID
   ...> group BY a.ProjID
   ...> ;
Run Time: real 0.031 user 0.000000 sys 0.000000
Error: no such column: o.ab_insert
sqlite>

Why do I get no such column? Thanks.

josé

01:20 Post: Error: no such column: x (artifact: 753ccc246d user: jicman)

Greetings!

Please take a look at the following query:

WITH maxes(ProjID, pl_insert, ab_insert) as
(
  SELECT ProjID,
         MAX(InsertDate),
         (select max(InsertDate) from ABT_Budget where ProjID = e.ProjID)
    FROM Project_List e
   group by ProjID
)
SELECT a.*,sum(b.AnnualDossier) as Dossier FROM Project_List AS a
LEFT JOIN ABT_Budget AS b ON a.ProjID = b.ProjID
AND b.InsertDate = e.ab_insert
AND a.PMO_Board_Report != 'No'
AND
(
  ( a.Target_Go_Live_Date > substr(date('now'),1,4) || '-01-15' )
  OR
  ( a.Target_Go_Live_Date = '' AND a.Finish_Date > substr(date('now'),1,4) || '-01-15' )
)
AND a.InsertDate = e.pl_insert
group BY a.ProjID;

And if I execute just the CTE part of it, it works:

sqlite> WITH maxes(ProjID, pl_insert, ab_insert)
   ...> as (
   ...> SELECT
   ...>   ProjID,
   ...>   MAX(InsertDate),
   ...>   (select max(InsertDate) from ABT_Budget where ProjID = e.ProjID)
   ...> FROM Project_List e
   ...> group by ProjID
   ...> )
   ...> SELECT * from maxes;
ProjID|pl_insert|ab_insert
PR0000003811|2019-02-12_12-12-12|
...clip...

But the full query above fails with the "no such column" error. Why do I get no such column? Any help would be greatly appreciated. Thanks.

josé
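
A hedged sketch of one reading of the error: the alias e only exists inside the CTE body, so the outer SELECT cannot reference e.pl_insert or e.ab_insert. The outer query has to join maxes itself and take the two dates from that join, roughly like this (a sketch only, untested against the real schema):

WITH maxes(ProjID, pl_insert, ab_insert) AS (
    SELECT ProjID,
           MAX(InsertDate),
           (SELECT MAX(InsertDate) FROM ABT_Budget WHERE ProjID = e.ProjID)
      FROM Project_List e
     GROUP BY ProjID
)
SELECT a.*, sum(b.AnnualDossier) AS Dossier
  FROM Project_List AS a
  JOIN maxes AS m ON m.ProjID = a.ProjID
  LEFT JOIN ABT_Budget AS b
         ON a.ProjID = b.ProjID
        AND b.InsertDate = m.ab_insert
 WHERE a.InsertDate = m.pl_insert
   AND a.PMO_Board_Report != 'No'
   AND (   a.Target_Go_Live_Date > substr(date('now'),1,4) || '-01-15'
        OR (a.Target_Go_Live_Date = '' AND a.Finish_Date > substr(date('now'),1,4) || '-01-15'))
 GROUP BY a.ProjID;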

2020-09-11
02:51 Reply: Help speeding up this query (artifact: de5fd6cd83 user: jicman)

I am happy with the 2 seconds or less result. Thanks.

josé

02:49 Reply: Help speeding up this query (artifact: dc3ee7e419 user: jicman)

Just curious, how long does the query take now?

The original query took,

Run Time: real 15.901 user 0.390625 sys 2.125000

or longer, and the fixed query, where Keith's fix was instrumental, took,

Run Time: real 1.275 user 0.062500 sys 0.312500

Thanks.

2020-09-10
19:41 Reply: Help speeding up this query (artifact: 7186030e88 user: jicman)

You are amazing, Keith.

Run Time: real 1.275 user 0.062500 sys 0.312500
versus the old query:
Run Time: real 15.901 user 0.390625 sys 2.125000
It will take me years to understand what you did. Well, I actually understand it; coming up with it myself is another matter. I am going to study it because I have other queries that need the same fix. Thanks.

josé

16:48 Reply: Help speeding up this query (artifact: be649968c3 user: jicman)

Thanks, Gunter. I can fix at least 3 of your suggestions because I understand them. :-) I may come back for more later. :-) Gracias.

josé

13:28 Post: Help speeding up this query (artifact: 834237d27a user: jicman)

Greetings!

Any thoughts on how I can make this query faster? I would really appreciate it. Thanks.

SELECT a.*,b.*,c.* FROM Project_List AS a
LEFT JOIN Project_Dashboard AS b ON a.ProjID = b.ProjID
LEFT JOIN Project_Extras AS c ON a.ProjID = c.ProjID
AND a.PMO_Board_Report != 'No'
AND a.Status = 'Acknowledged'
AND Project_Type != 'Outside IT perimeter'
AND a.Start_date < date('now')
AND
(
  (a.Target_Go_Live_Date > substr(date('now'),1,4) || '-01-15')
  OR
  (a.Target_Go_Live_Date = '' AND a.Finish_Date > substr(date('now'),1,4) || '-01-15')
)
AND ( b.TimeIndicator = '0' OR b.CostIndicator = '0' )
AND a.InsertDate = (SELECT MAX(InsertDate) FROM Project_List WHERE a.ProjID = ProjID)
AND b.InsertDate = (SELECT MAX(InsertDate) FROM Project_Dashboard WHERE b.ProjID = ProjID)
AND c.InsertDate = (SELECT MAX(InsertDate) FROM Project_Extras WHERE c.ProjID = ProjID)
WHERE a.ProjID = b.ProjID AND a.ProjID = c.ProjID
group BY a.Manager, a.ProjID
ORDER BY a.Manager, a.ProjID
;

This is what I get from the .expert function:

(no new indexes)

MULTI-INDEX OR
INDEX 1
SEARCH TABLE Project_Dashboard AS b USING INDEX PDash_TimeIndicator (TimeIndicator=?)
INDEX 2
SEARCH TABLE Project_Dashboard AS b USING INDEX PDash_CostIndicator (CostIndicator=?)
CORRELATED SCALAR SUBQUERY 2
SEARCH TABLE Project_Dashboard USING COVERING INDEX PDash_ProjID_InsertDate (ProjID=?)
SEARCH TABLE Project_List AS a USING INDEX PL_ProjID_InsertDate (ProjID=?)
CORRELATED SCALAR SUBQUERY 1
SEARCH TABLE Project_List USING COVERING INDEX PL_ProjID_InsertDate (ProjID=?)
SEARCH TABLE Project_Extras AS c USING INDEX sqlite_autoindex_Project_Extras_1 (ProjID=?)
CORRELATED SCALAR SUBQUERY 3
SEARCH TABLE Project_Extras USING COVERING INDEX sqlite_autoindex_Project_Extras_1 (ProjID=?)
USE TEMP B-TREE FOR GROUP BY

josé
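
Not the fix from the thread (that fix isn't quoted here), but a rough sketch of the precomputed-maxima approach mentioned elsewhere in these posts, assuming the real schema matches the column names above. Since the WHERE clause already forces a.ProjID = b.ProjID and a.ProjID = c.ProjID, the LEFT JOINs behave as inner joins, so the sketch uses plain JOINs and computes each table's latest InsertDate once per ProjID:

WITH maxes AS (
    SELECT ProjID,
           (SELECT MAX(InsertDate) FROM Project_List      WHERE ProjID = o.ProjID) AS pl_max,
           (SELECT MAX(InsertDate) FROM Project_Dashboard WHERE ProjID = o.ProjID) AS pd_max,
           (SELECT MAX(InsertDate) FROM Project_Extras    WHERE ProjID = o.ProjID) AS pe_max
      FROM (SELECT DISTINCT ProjID FROM Project_List) AS o
)
SELECT a.*, b.*, c.*
  FROM Project_List AS a
  JOIN maxes AS m ON m.ProjID = a.ProjID AND a.InsertDate = m.pl_max
  JOIN Project_Dashboard AS b ON b.ProjID = a.ProjID AND b.InsertDate = m.pd_max
  JOIN Project_Extras    AS c ON c.ProjID = a.ProjID AND c.InsertDate = m.pe_max
 WHERE a.PMO_Board_Report != 'No'
   AND a.Status = 'Acknowledged'
   AND Project_Type != 'Outside IT perimeter'
   AND a.Start_date < date('now')
   AND (   a.Target_Go_Live_Date > substr(date('now'),1,4) || '-01-15'
        OR (a.Target_Go_Live_Date = '' AND a.Finish_Date > substr(date('now'),1,4) || '-01-15'))
   AND (b.TimeIndicator = '0' OR b.CostIndicator = '0')
 GROUP BY a.Manager, a.ProjID
 ORDER BY a.Manager, a.ProjID;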

2020-09-09
11:55 Reply: Switching data between two columns (artifact: a94c7d9dc8 user: jicman)

Thanks, Gunter. Another fine response.

josé

2020-09-08
19:32 Reply: Switching data between two columns (artifact: 527a3f603f user: jicman)

Thanks, Keith. Perfectly explained.

josé

19:09 Reply: Switching data between two columns (artifact: da7133b9f5 user: jicman)

This works perfectly. But I was thinking, and I am asking just to understand the engine behind the process, this statement,

   UPDATE t SET c3=c4, c4=c3;
uses temporary storage, correct? Because if the SQL goes from left to right, c3 will be over-written before it passes the old value to c4. So, something like this is happening in the background:
some-internal-temp = c3;
c3 = c4;
c4 = some-internal-temp;
Just a newbie lack of knowledge question. Thanks.

josé
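
A tiny demonstration, for anyone curious: no user-visible temporary column is involved because, in an UPDATE, every right-hand-side expression is evaluated against the row's values as they were before the update, so both assignments see the old c3 and c4 (the table name below is just a throwaway for illustration):

create table swapdemo (c3, c4);
insert into swapdemo values (1, 2);
UPDATE swapdemo SET c3 = c4, c4 = c3;
select c3, c4 from swapdemo;   -- 2|1: both right-hand sides used the pre-update values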

17:15 Reply: Switching data between two columns (artifact: b7c37b1219 user: jicman)

Man, and I was going to do a whole python script to do this. You guys are the best. Thanks Dr. Hipp and David.

josé

16:55 Post: Switching data between two columns (artifact: 9ef0677c76 user: jicman)

Greetings!

I messed up! Nothing new in my life. :-) I inserted the wrong data into two columns. I know how to fix the problem with a small program, but I was wondering if there is a quick SQL call I could use instead. The real problem is that I have a table called t, and I inserted the data that should have gone into column c3 into c4, and the data that should have gone into c4 into c3. The goal is to swap the data: c3 -> c4 and c4 -> c3. There are two key columns: ProjID and InsertDate. The table definition is:

CREATE TABLE t
    (
      ProjID,
      c1,
      c2,
      c3,
      c4,
      c5,
      InsertDate,
      PRIMARY KEY (ProjID,InsertDate)
    );
Thoughts? Thanks.

josé

2020-08-15
01:17 Reply: v3.33.0 x64 DLL pre-compile binary for Windows (artifact: ef7550ff2f user: jicman)

Thanks, Dr. Hipp.

2020-08-14
15:09 Post: v3.33.0 x64 DLL pre-compile binary for Windows (artifact: 65730e87b9 user: jicman)

Greetings!

The download site for v3.33.0 has v3.32.3 for the x64 DLL precompiled binaries for Windows instead of v3.33.0. Any reason why there is no v3.33.0 DLL for x64? Thanks.

josé

2020-06-11
14:35 Edit reply: Request: stored functions/procedures (artifact: 375f58f1a2 user: jicman)

Who's getting up for work and going, "My goodness I can't wait to write some PL/SQL today!"

Fair point.... But then, which language do you get up in the morning for, with that song in your heart?

This is funny. :-) Thanks.

14:32 Reply: Request: stored functions/procedures (artifact: 1fab151e71 user: jicman)

This is funny. :-) Thanks.

2020-06-03
17:16 Reply: SQLite turns 20 (artifact: 3ebe6b187f user: jicman)

Apologies for being late to the party. Thanks and congrats to Dr. Hipp and the team.

17:12 Reply: SQLite turns 20 (artifact: c3d6195f5b user: jicman)

Classic Keith! :-)

2020-05-12
19:20 Reply: Forum suggestion (artifact: cefc9828aa user: jicman)

From my side, I want to see them right away. But, I can adjust and be patient. ;-)

18:50 Reply: Thoughts on why this is not working? (artifact: b0a331ad9c user: jicman)

Is the "empty" methodology NULL or a blank string? If NULL, then the comparison <> should be IS NOT ...

Hmmm, that is the problem. Believe it or not, it's both. I get the data from a system that, apparently, uses '' at some specific times and NULL at others. For example, this same SELECT works for some projects, but for others it did not, so I was killing myself trying to figure out why, and I asked. But then I figured that it must have been the data coming in with different values for nothingness: NULL or ''. So, coalesce is now working correctly. Thanks.

josé
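
A minimal illustration of the difference, since this is the heart of the issue: a <> comparison against NULL yields NULL, which a WHERE clause treats as not-true, while '' compares like any other string; coalesce() turns the NULL into something comparable:

SELECT NULL <> 'Waterfall';                    -- NULL (the row would be filtered out)
SELECT '' <> 'Waterfall';                      -- 1
SELECT coalesce(NULL, 'null') <> 'Waterfall';  -- 1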

18:26 Reply: Thoughts on why this is not working? (artifact: 40e55a4ab0 user: jicman)

Never mind. Apologies. For those of you playing at home, I had to use coalesce to return a non-NULL value. This now returns something:

sqlite> SELECT ProjID,
   ...>        Updated_By,
   ...>        InsertDate,
   ...>        var,
   ...>        oldv,
   ...>        newv
   ...>   FROM
   ...>   (
   ...>     SELECT ProjID,
   ...>            Updated_By,
   ...>            InsertDate,
   ...>            'Methodology' as var,
   ...>            (
   ...>              SELECT coalesce(Methodology,'null') FROM Project_List WHERE
   ...>              ProjID = o.ProjID
   ...>              AND InsertDate < o.InsertDate
   ...>              ORDER BY InsertDate DESC
   ...>              LIMIT 1
   ...>            ) AS oldv,
   ...>            coalesce(Methodology,'null') as newv
   ...>       FROM Project_List as o
   ...>   )
   ...>   WHERE oldv <> newv
   ...>   AND ProjID = 'PR0000014752'
   ...>   ORDER BY InsertDate ASC;
PR0000014752|Guzman Alcanfor, Pedro|2020-03-13_06-29-19|Methodology|null|Waterfall
sqlite>

Thanks for your good thoughts.

josé

18:01 Post: Thoughts on why this is not working? (artifact: fa0bb7d523 user: jicman)

Greetings!

This command brings,

sqlite> select ProjID, Methodology, InsertDate from Project_List where ProjID='PR0000014752' ORDER BY InsertDate;
PR0000014752||2018-07-30_05-43-20
PR0000014752||2018-08-01_04-41-32
[clip]
PR0000014752||2020-03-06_06-51-32
PR0000014752||2020-03-09_08-51-14
PR0000014752||2020-03-10_06-56-06
PR0000014752|Waterfall|2020-03-13_06-29-19
PR0000014752|Waterfall|2020-03-18_12-06-38
PR0000014752|Waterfall|2020-03-19_08-14-33
sqlite>

Then, this command should bring back an entry,

sqlite> SELECT ProjID,
   ...>        Updated_By,
   ...>        InsertDate,
   ...>        var,
   ...>        oldv,
   ...>        newv
   ...>   FROM
   ...>   (
   ...>     SELECT ProjID,
   ...>            Updated_By,
   ...>            InsertDate,
   ...>            'Methodology' as var,
   ...>            (
   ...>              SELECT Methodology FROM Project_List WHERE
   ...>              ProjID = o.ProjID
   ...>              AND InsertDate < o.InsertDate
   ...>              ORDER BY InsertDate DESC
   ...>              LIMIT 1
   ...>            ) AS oldv,
   ...>            Methodology as newv
   ...>       FROM Project_List as o
   ...>   )
   ...>   WHERE oldv <> newv
   ...>   AND ProjID = 'PR0000014752'
   ...>   ORDER BY InsertDate ASC;
sqlite>

But it does not. Thoughts? Complaints? Jokes? :-) Thanks.

josé

2020-05-08
21:04 Reply: Help: Setting a column after finding first desired entry (artifact: 2bb7e4c1a1 user: jicman)

Do you mean something like this:

You are so good. Thanks. Yep, exactly that.

josé

18:36 Post: Help: Setting a column after finding first desired entry (artifact: 3192191686 user: jicman)

The title does not really tell the whole story: two versions of a piece of software caused some discombobulation in the data, and I am trying to fix the problem. Imagine this scenario:

create table t0 (a, b, idate, PRIMARY KEY (a,idate));
insert into t0 (a, b, idate) values ('p001', '', '2019-02-11');
insert into t0 (a, b, idate) values ('p002', '', '2019-02-11');
insert into t0 (a, b, idate) values ('p003', '', '2019-02-11');
insert into t0 (a, b, idate) values ('p004', '', '2019-02-11');
insert into t0 (a, b, idate) values ('p005', '', '2019-02-11');
insert into t0 (a, b, idate) values ('p001', 'n', '2019-02-12');
insert into t0 (a, b, idate) values ('p002', 'n', '2019-02-12');
insert into t0 (a, b, idate) values ('p003', 'y', '2019-02-12');
insert into t0 (a, b, idate) values ('p004', 'n', '2019-02-12');
insert into t0 (a, b, idate) values ('p005', 'y', '2019-02-12');
insert into t0 (a, b, idate) values ('p001', '', '2019-02-13');
insert into t0 (a, b, idate) values ('p002', '', '2019-02-13');
insert into t0 (a, b, idate) values ('p003', '', '2019-02-13');
insert into t0 (a, b, idate) values ('p004', '', '2019-02-13');
insert into t0 (a, b, idate) values ('p005', '', '2019-02-13');
insert into t0 (a, b, idate) values ('p001', '', '2019-03-14');
insert into t0 (a, b, idate) values ('p002', '', '2019-03-14');
insert into t0 (a, b, idate) values ('p003', '', '2019-03-14');
insert into t0 (a, b, idate) values ('p004', '', '2019-03-14');
insert into t0 (a, b, idate) values ('p005', '', '2019-03-14');

What I am trying to do is update b with the first non-'' b value for all the records that come after that first non-'' value; that is, for every record whose idate is greater than the idate where the first non-'' value was found. So, I thought that this would work:

UPDATE t0 SET b =
(
  select b from t0 where b != '' GROUP BY a
)
AND a =
(
  select a from t0 where b != '' GROUP BY a
)
AND idate >
(
  select idate from t0 WHERE b != '' GROUP BY a
);

But this does not work. It sets all values to 0:

sqlite> select a, b, idate from t0 where b != '' group by a;
p001|0|2019-02-11
p002|0|2019-02-11
p003|0|2019-02-11
p004|0|2019-02-11
p005|0|2019-02-11

Any thoughts, help would be greatly appreciated.
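
For what it's worth, a possible explanation plus a hedged sketch: everything after SET b = is parsed as a single expression, so the two AND ... comparisons are combined with the first subquery into one boolean, and that boolean (0 here) is what gets assigned to b on every row. The conditions need to live in a WHERE clause, and the subqueries have to be correlated to the row being updated. A sketch, assuming the goal is to propagate each a's first non-'' b value to all of its later rows:

UPDATE t0
   SET b = (SELECT s.b
              FROM t0 AS s
             WHERE s.a = t0.a AND s.b != ''
             ORDER BY s.idate
             LIMIT 1)
 WHERE b = ''
   AND idate > (SELECT MIN(s.idate)
                  FROM t0 AS s
                 WHERE s.a = t0.a AND s.b != '');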

2020-05-02
00:32 Reply: Week numbers in queries (artifact: ab84737d84 user: jicman)

Hi.

Take a look at this:

8:20:46.21>sqlite3
SQLite version 3.30.1 2019-10-10 20:19:45
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite> SELECT strftime('%W','2019-01-01');
00
sqlite> SELECT strftime('%W','2019-01-02');
00
sqlite> SELECT strftime('%W','2019-01-03');
00
sqlite> SELECT strftime('%W','2019-01-04');
00
sqlite> SELECT strftime('%W','2019-01-05');
00
sqlite> SELECT strftime('%W','2019-01-06');
00
sqlite> SELECT strftime('%W','2019-01-07');
01
sqlite> SELECT strftime('%W','2020-01-01');
00
sqlite> SELECT strftime('%W','2020-01-02');
00
sqlite> SELECT strftime('%W','2020-01-03');
00
sqlite> SELECT strftime('%W','2020-01-04');
00
sqlite> SELECT strftime('%W','2020-01-05');
00
sqlite> SELECT strftime('%W','2020-01-06');
01

It may be that what you think week 17 is may not be calculated as week 17. It depends a lot on how you want to define a 'week' and what you are going to use it for. I had this problem once, and some of the folks here kindly guided me. Play around with the dates and you'll see some holes. I hope this helps.

josé
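
If the definition wanted is the ISO-8601 week (weeks start on Monday, and week 1 is the week containing the first Thursday of the year), one common recipe, shown here only as a sketch, is to find the Thursday of the date's week and take its day-of-year:

-- ISO week number and ISO year for a given date
SELECT (strftime('%j', date('2019-01-06', '-3 days', 'weekday 4')) - 1) / 7 + 1 AS iso_week,  -- 1
       strftime('%Y', date('2019-01-06', '-3 days', 'weekday 4'))               AS iso_year;  -- 2019
SELECT (strftime('%j', date('2021-01-01', '-3 days', 'weekday 4')) - 1) / 7 + 1;              -- 53 (ISO week 53 of 2020)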

2020-04-28
19:07 Reply: WAL mode speed vs normal comparison (artifact: 765f7bace2 user: jicman)

... you are just calling in a plastic surgeon to do a "little nip and tuck here in the belly" and "let it hang out more over there in the ass".

I am rolling on the floor laughing. Thanks, Keith. This made my day. Ha ha ha ha! Wait, let me laugh in Spanish. It's more fun: Ja ja ja ja ja...

18:58 Reply: WAL mode speed vs normal comparison (artifact: 94c6df78d7 user: jicman)

And the table would very likely be mis-representative (or downright wrong) for every combination of OS, filesystem, physical storage type, sqlite3 version, and data workloads other than the precise ones it was tested on.

I guess I am not explaining the idea well. We're not defining deep storage, OS, FS, versions, etc., here. If it is presented right, a nice vanilla table should suffice. Of course, in the end, there should be a closing statement with that table explaining the exercise and what it is trying to show: a difference in how WAL speed is better in some scenarios than others, with a note along the lines of "please test your own scenario before jumping to conclusions", etc. It does not have to go deep into any specific area, just plain vanilla SQLite WAL processing. However, it would help point #1 carry weight.

18:34 Reply: WAL mode speed vs normal comparison (artifact: 8d9d6b5b52 user: jicman)

If you have a just one thread of one process of one program accessing the database, and a ton of memory and filespace free, WAL is significantly faster

That's actually something I'd like to see proven somehow.

This is the reason why I asked. Since the first point of the explanation on the WAL page states that it is faster in most scenarios, it would be nice to have a little side-by-side comparison of WAL vs. normal scenarios, where the scenarios might be:

  • 1 writer, 1 reader, 100K records,
  • 1 writer, 5 readers, 100K records
  • 2 writers, 10 readers, 100K records

then,

  • simple write, search, and replace
  • semi-complex write, search, and replace
  • complex write, search and replace

with their respective response times. A nice table like that would make people love it or super love it. :-) Just thinking out loud...

josé

17:21 Reply: WAL mode speed vs normal comparison (artifact: 65c6ba6786 user: jicman)

How would you know it's faster?

Good question. The first point of the overview is "WAL is significantly faster in most scenarios." So I asked. Is that point wrong?

16:47 Post: WAL mode speed vs normal comparison (artifact: 117c91891c user: jicman)

Greetings!

I have been reading about WAL mode, and I did a search on the forum for 'wal mode speed vs normal' and got nothing. Has anyone done a speed comparison of the same DB in WAL mode and in normal rollback-journal mode? I know WAL is faster, but how much faster are we talking? Thanks.

josé
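
In case a rough local number helps, here is an informal sketch (the file names are made up, and this is in no way a rigorous benchmark): run the same bulk insert once with the default rollback journal and once in WAL mode, with .timer on, and compare the two timings.

.timer on

.open rollback_test.db
PRAGMA journal_mode=DELETE;
CREATE TABLE t(x);
WITH RECURSIVE c(i) AS (SELECT 1 UNION ALL SELECT i+1 FROM c WHERE i < 100000)
INSERT INTO t SELECT i FROM c;

.open wal_test.db
PRAGMA journal_mode=WAL;
CREATE TABLE t(x);
WITH RECURSIVE c(i) AS (SELECT 1 UNION ALL SELECT i+1 FROM c WHERE i < 100000)
INSERT INTO t SELECT i FROM c;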

2020-04-22
17:31 Reply: RENAME COLUMN incomplete documentation (artifact: e62a31b12d user: jicman)

Keith, I am so glad you're here...! Thanks.

josé

2020-04-16
20:58 Reply: Help speeding up a query (artifact: 27ff27d078 user: jicman)

Thanks, Keith. I am going to take some time to dig a bit deeper into both so I understand how to make queries slim, compact, and fast. Thanks for your college course. It is really appreciated.

josé

19:43 Reply: Help speeding up a query (artifact: adf2a69978 user: jicman)

Yes, indeed. Here is the actual code:

sqlite> SELECT *, max(InsertDate)
   ...>     FROM Project_List
   ...>    WHERE ProjID IN (SELECT DISTINCT ProjID FROM Project_List WHERE Project_Name LIKE '%agr-2020%')
   ...> GROUP BY ProjID;
PR0000015542|6687299|..
..
..LICATIONS|e608313|2020-04-15_08-47-09|Waterfall|2020-04-15_08-47-09
Run Time: real 0.671 user 0.109375 sys 0.359375
sqlite>

This is perfect, Keith. Once again. Thanks.

josé

PS: I am glad you're here... Thanks.

18:13 Post: Help speeding up a query (artifact: d25a6aee9d user: jicman)

Greetings!

Pardon the long message. First, I hope you and all yours are well, and as soon as you read this post, please go and wash your hands. :-)

Long story short, I created a tool that reports on projects. They have an AccessDB that is slow, but I created an SQLite tool to make my own reports, etc. All is working great, but I have a few queries that are getting slower and slower. I run ANALYZE every Sunday night, and it's still just as slow.

A little history...
We export data in Excel from a global projects tool, and we import only the projects that belong to our department into SQLite, which we use to report on various criteria. Since things were running well, they requested more reports, and now there are daily exports and imports, to the point that we are importing daily and sometimes two or three times a day. Imagine this scenario...

create table t0 (a, b, c, d, e, idate, PRIMARY KEY (a, idate));
insert into t0 (a, b, c, d, e, idate) values ('p001', 'Project I', 1000.0, 'a', 3, '2018-02-11');
insert into t0 (a, b, c, d, e, idate) values ('p002', 'Project II', 2039., 'a', 3, '2018-02-11');
insert into t0 (a, b, c, d, e, idate) values ('p003', 'Project III', 30023.0, 'b', 4, '2018-02-11');
insert into t0 (a, b, c, d, e, idate) values ('p001', 'Project I', 1010.0, 'b', 4, '2018-03-01');
insert into t0 (a, b, c, d, e, idate) values ('p002', 'Project II', 2039.0, 'c', 5, '2018-03-01');
insert into t0 (a, b, c, d, e, idate) values ('p003', 'Project III', 30023.0, 'd', 5, '2018-03-01');
...
...
insert into t0 (a, b, c, d, e, idate) values ('p001', 'Project I', 900.0, 'e', 5, '2020-03-20');
insert into t0 (a, b, c, d, e, idate) values ('p002', 'Project II', 3423.0, 'd', 6, '2020-03-20');
insert into t0 (a, b, c, d, e, idate) values ('p003', 'Project III', 40000.0, 'a', 6, '2020-03-20');
insert into t0 (a, b, c, d, e, idate) values ('p001', 'Project I', 900.0, 'f', 7, '2020-03-21');
insert into t0 (a, b, c, d, e, idate) values ('p002', 'Project II', 3500.0, 'g', 6, '2020-03-21');
insert into t0 (a, b, c, d, e, idate) values ('p003', 'Project III', 40000, 'h', 6, '2020-03-21');

As you can see, these imported values change a few times over the life of a project, but not daily. There are also about 56 fields, and I have thought of optimizing the data, but that would take lots of code changes and extra effort. That may need to be done in the future, but for right now I just want to minimize the query response time. I have also set an INDEX on a and idate:

CREATE INDEX IF NOT EXISTS a_idate_idx on t0 (a,idate);
The queries are related to searches on values other than a or idate. For example,
SELECT * FROM t0 as rec
WHERE rec.a IN 
(
  SELECT a FROM t0 WHERE 1=1 
  AND lower(rec.b) LIKE '%ioc-net%' 
)
AND
  rec.idate = 
    (SELECT MAX(idate) FROM t0 WHERE a = rec.a)
GROUP BY rec.a;

When I run this query on the real data, this is the result of .timer on:

Run Time: real 243.291 user 240.125000 sys 0.765625

I have tried to use .expert, but it tells me that no new indexes are needed. Right now the total number of records in t0 is 56984, and we do about 140 projects per year, so there is a lot of duplicated data per a and idate. Any thoughts on how to make this scenario faster? Thanks for your help.

josé
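
The eventual fix for the real tables is quoted in the reply further up (the SELECT *, max(InsertDate) ... WHERE ProjID IN (SELECT DISTINCT ...) form). Translated to the toy schema, and only as a sketch: the inner filter lower(rec.b) refers to the outer row, so the IN subquery never restricts its own rows and is re-evaluated against all of t0 for every row; filtering on the subquery's own b column avoids that.

SELECT *, MAX(idate)
  FROM t0
 WHERE a IN (SELECT DISTINCT a FROM t0 WHERE lower(b) LIKE '%ioc-net%')
 GROUP BY a;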

2020-04-15
22:30 Reply: can not subscribe (artifact: b40f65b780 user: jicman)

Also weird. There are two addresses you would have to mark as trusted (or put in your address book): the sender address, sqlite-forum@sqlite.org, and the envelope-from address, root@sqlite.org.

These were added, and I even sent the content of 3 different emails to the last support person I exchanged emails with. So, there was definitely a problem. I had rules that he asked me to create, and those didn't work either. Long story, and lots of wasted effort with MS.

josé

14:19 Edit reply: can not subscribe (artifact: f44c64ea82 user: jicman)

I have been trying to tell you folks that Outlook has the sqlite.org site blocked. I have an email address with @outlook.com, and none of these messages get to my inbox; they go to the spam folder. I have written rules, I have added the from address to safe senders, nothing works. I had to go back to Yahoo to get your emails.

I also had 3 different support requests with Microsoft, and they all asked me to have Dr. Hipp contact them. I said, "That man has no time to be dealing with you guys. You need to talk to your technical folks and tell them that the SQLite site is being blacklisted." Nothing was done. So, I am now back to Yahoo because Microsoft is blacklisting the SQLite IP.

josé
