SQLite Forum

50 most recent forum posts

2021-02-26
18:48 Reply: Ability to: SELECT * EXCEPT(some,columns) (artifact: 2885cf250c user: larrybr)

Your kind of "stable" is different from Jay's. You mistake repeated results from identical circumstances for Jay's stability, which would have to apply across query-optimizer revisions and across DBMS choices as to the naming of columns lacking an AS qualifier.

18:07 Reply: Ability to: SELECT * EXCEPT(some,columns) (artifact: ad13d323c6 user: anonymous)

I disagree.

Any query, regardless of how it is formed (sub-queries, joins, whatever), has a stable column output when doing SELECT *.

Do you get the columns in different order each time the query is run (without changes)? I don't think so.

Once the query is finalized you know the order of the output and the respective names (even generated ones, e.g., expressions that you can always alias to simpler ones).

agp

17:44 Reply: Ability to: SELECT * EXCEPT(some,columns) (artifact: 3616d9a25f user: jkreibich)

Such a syntax would be very difficult to make stable. In some cases the column order is not well defined, especially with complex JOINs using the "USING" syntax, or in self-JOINs, where the same column name exists in multiple places within the query.

In true Relational Theory form, columns are unordered, just as rows are. While the first clause of a SELECT statement defines the column ordering for the output, there shouldn't be strong assumptions about the column ordering at the end of the FROM processing.
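
For example (hypothetical tables, just to illustrate): the two joins below are logically similar, yet SELECT * yields different column lists, because USING coalesces the shared column:

CREATE TABLE a(id INTEGER, x TEXT);
CREATE TABLE b(id INTEGER, y TEXT);
SELECT * FROM a JOIN b USING(id);       -- columns: id, x, y
SELECT * FROM a JOIN b ON a.id = b.id;  -- columns: id, x, id, y (two "id" columns)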

17:39 Reply: How do I create a "rolling cursor" ? (artifact: 2cfa137263 user: wmertens)

Keysets won't work for sorting in different directions. They're also unnecessary syntactic sugar (unless they can be optimized somehow).

I implemented general cursoring using nested AND/OR conditions; you can see the code at https://github.com/StratoKit/strato-db/blob/master/src/JsonModel/JsonModel.js#L375-L390

The way it works is as follows: given 3 columns a, b, c that you sort on, you keep the values v0, v1, v2 of the last returned row, and use them to find the next rows with

a >= v0 && (a != v0 || (b >= v1 && (b != v1 || (c > v2))))

If b is sorted descending, the condition becomes

a >= v0 && (a != v0 || (b <= v1 && (b != v1 || (c > v2))))

If you sort on an extra column d, you replace the last condition (c > v2) with

c >= v2 && (c != v2 || (d > v3))

This is in one direction. If you want to page backwards, you have to invert the cursor conditions and sorting.

So basically, you have to do a LIMIT query, remember the last values, and page by generating a nested comparison condition while retaining the sorting.
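
As a minimal sketch (table and column names are illustrative only, page size 20), the forward-paging query in SQL would look like:

SELECT a, b, c
  FROM t
 WHERE a >= ?1 AND (a != ?1 OR (b >= ?2 AND (b != ?2 OR c > ?3)))
 ORDER BY a, b, c
 LIMIT 20;

with ?1, ?2, ?3 bound to the a, b, c values of the last returned row.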

Cheers,

Wout.

17:08 Post: BUG Precompiled sqlite shell for windows can't open filenames with unicode chars (artifact: 99788ac1c2 user: zby1234)

Hello,

Thank you for developing SQLite! However, I found that the precompiled sqlite shell for Windows, downloaded from the official site (sqlite-tools-win32-x86-3340100.zip), can't open db files with unicode characters in their file names.

Steps to reproduce:

  1. Create a test database 'test.db'
  2. Rename 'test.db' to '你好世界.db' (or try some strange names such as emojis)
  3. Try to open '你好世界.db' with sqlite shell and do some query.

What happened:

  1. If the unicode characters in the filename can be represented in the current code page:

The shell will silently create a new db file named '�������.db' and operate on that new file.

  2. If the unicode characters in the filename can't be represented in the current code page:

The shell will try to create a new db file named '????.db' and say the filename is illegal.

My thoughts:

It seems like unicode file-name support in the sqlite shell is broken. Although I found that shell.c uses wmain() and is unicode-aware, it seems the precompiled binary continues to use the ANSI versions of the API. Maybe unicode support isn't compiled in?

16:20 Reply: How do I create a "rolling cursor" ? (artifact: 29b4c6a491 user: kmedcalf)

Some products do this "automagically" and call it a "keyset driven cursor".

Basically, for any query, one generates a temporary "keyset" which comprises the primary keys of all the tables involved in the query which one wants to scroll, and then uses this simple keyset to regurgitate the results on request.

The keyset may be stored on the client side or on the server side. The result is stable as of the time of generation of the keyset. New rows created subsequent to the creation of the keyset will not be included in the keyset. Rows deleted from the source tables may leave holes.

The query which generates the "keyset" would look like this (for a server-side keyset):

CREATE TEMPORARY TABLE keyset AS
SELECT <list of primary key columns from each table involved in cursor>
  FROM <list of tables wanted to cursor over>
 WHERE <list of join conditions between tables>
ORDER BY <list of fields you want the keyset ordered by>
;

Then, each time you want a specific "row" or "page of rows" from the keyset driven cursor you use the following query:

SELECT <the columns I want>
  FROM keyset
LEFT JOIN <table> ON <primary key for table>
LEFT JOIN <table> ON <primary key for table>
... for as many tables as participated in the original keyset generation ...
 WHERE keyset.rowid >= :startrow
ORDER BY keyset.rowid
 LIMIT :numrows
;

When you are done with the keyset, you drop the temporary table.

So, for example, say the query you want to paginate/scroll is:

SELECT a.doofus, w, x, y, z
  FROM a, b
 WHERE a.doofus == b.dingbat
ORDER BY w, x, y, z;

then you generate the keyset as follows:

CREATE TEMPORARY TABLE keyset AS
SELECT a.rowid as "a.rowid",
       b.rowid as "b.rowid"
  FROM a, b
 WHERE a.doofus == b.dingbat
ORDER BY w, x, y, z;

and you retrieve your paginated results as:

SELECT a.doofus, w, x, y, z
  FROM keyset
LEFT JOIN a ON a.rowid == "a.rowid"
LEFT JOIN b ON b.rowid == "b.rowid"
 WHERE keyset.rowid >= :startat
ORDER BY keyset.rowid
 LIMIT :numrows
;

When you are done you can delete the keyset. The data actually displayed in the paginated results reflects the "current view" of the data although the cursorination reflects the point-in-time at which the keyset was created.

If you wish the keyset result to be stable, you have to hold open a transaction during the whole process (not recommended).

15:23 Reply: How do I create a "rolling cursor" ? (artifact: e105286906 user: curmudgeon)
SELECT t.*
  FROM ids
  JOIN t ON t.rowid = ids.id
 WHERE ids.rowid >= ?1 
 ORDER BY ids.rowid
 LIMIT n
;

Agreed Ryan, that is better and maybe less confusing than what I suggested.
15:12 Reply: How do I create a "rolling cursor" ? (artifact: 17ad7a9b39 user: anonymous)

I might have missed the point, but it seems to me that you want to select records at random from a given table. If not, ignore this; otherwise, try:

SELECT * FROM tblThisTable ORDER BY RANDOM() LIMIT 2;

The LIMIT argument specifies the number of records; you can randomize this also.

15:04 Reply: How do I create a "rolling cursor" ? (artifact: 34fd6cfa8b user: cuz)

This is by far the best/easiest solution to do what you want to do.
I'm adding just a pointer or two upon curmudgeon's suggestion:

I wish to point out that in that CREATE query (for the ids) you are free to sort, filter, limit in any way imaginable to satisfy all your listed requirements. An example might be:

CREATE TEMP TABLE ids AS
SELECT rowid as id
  FROM t
 WHERE t.somecolumn IS NOT NULL AND t.someothercolumn != 'John'
 ORDER BY t.yetanothercolumn DESC;

For reading the page/section of rows needed, this was suggested:

select * from t where rowid in
(select id from ids where rowid between ?1 and ?1+n order by rowid);

but I think the next example might be better (not depending on the IN optimization, more legible, and at least in my testing proven to be the fastest of what I've tried - if someone can show a faster method, I would like to know please!):

SELECT t.*
  FROM ids
  JOIN t ON t.rowid = ids.id
 WHERE ids.rowid >= ?1 
 ORDER BY ids.rowid
 LIMIT n
;

I'd also like to add that the stated CONs are very much true, but unlikely to be factors in a program/app where this kind of view query would typically be used.
If they are factors for your intended project, you need to revisit the entire idea of doing this kind of pagination thing.

13:38 Reply: Ability to: SELECT * EXCEPT(some,columns) (artifact: 284ac5e2ce user: anonymous)

+1 on the idea.

(This is also useful when trying to get all but the generated (virtual) columns.)

But even this method has problems if you want both to include many columns and to exclude many others. You still have to explicitly type a lot (either the ones to include or the ones to exclude).

I would propose a much simpler and more versatile syntax that would use ranges, something like:

SELECT COLA TO COLJ, COLP, COLS TO COLZ

where all columns between the two specified are included in the order they appear in the table definition.

agp

13:12 Reply: How do I create a "rolling cursor" ? (artifact: 338042fee4 user: anonymous)
    select textcol, numcol, rowid from tablename
        where (coalesce(textcol, ''), coalesce(numcol, 0), rowid) > (coalesce(?1, ''), coalesce(?2, 0), ?3)
        order by coalesce(textcol, ''), coalesce(numcol, 0), rowid
        limit ?4;

For WITHOUT ROWID tables, every column of the primary key is guaranteed not to be NULL, so append all primary-key columns that the ORDER BY clause doesn't already mention.
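
For example, a sketch for a hypothetical WITHOUT ROWID table with PRIMARY KEY (pk1, pk2), again sorted on textcol:

    select textcol, pk1, pk2 from tablename2
        where (coalesce(textcol, ''), pk1, pk2) > (coalesce(?1, ''), ?2, ?3)
        order by coalesce(textcol, ''), pk1, pk2
        limit ?4;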

13:09 Reply: How do I create a "rolling cursor" ? (artifact: c951d18cea user: curmudgeon)
Store the rowids of the table in a temp table and use them to retrieve sections of the table. e.g. suppose you want to view a table t sorted on col s.

create temp table ids as select rowid as id from t order by s;

If you want to view records i..i+n of the sorted table use this

select * from t where rowid in
(select id from ids where rowid between ?1 and ?1+n order by rowid);

Use bind to set ?1 to i.

If you'd rather store the ids (faster) in an array (a) or vector (v), you can do the same thing using the carray extension

https://sqlite.org/carray.html#:~:text=The%20carray%20()%20function%20is%20not%20compiled%20into,first%20added%20to%20SQLite%20in%20version%203.14%20(2016-08-08)

and execute

select * from t where rowid in carray(?1,n);

binding ?1 to a+i (array case) or v.data()+i (vector case).

Cons:
  1. You're taking a snapshot of t, so if others are using separate connections to insert or delete records from t, there will be id values missing from, or no longer valid in, the ids table.
  2. For large tables (as in tens of millions of records), storing the rowids can sometimes be sluggish.
12:49 Reply: How to get the extension path? (artifact: a0913880ee user: anonymous)
On a Linux box I have stored the shared libraries as follows:
/usr/lib/sqlite3/ {all shared libraries in use}

To load the shared libraries automatically, modify the file (or create one) called:
.sqliterc

I use the following shared libraries and added these lines:
.load /usr/lib/sqlite3/pcre
.load /usr/lib/sqlite3/libuic

If you write a program that uses the SQLite API, you have to load these libraries yourself using the load_extension API.
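
For example, once extension loading has been enabled on the connection (e.g. with sqlite3_enable_load_extension or sqlite3_db_config), a sketch like this works:

SELECT load_extension('/usr/lib/sqlite3/pcre');
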
12:29 Reply: sqlite3_prepare_v2 does not seem to work. (artifact: f4b26d5266 user: anonymous)
All is well now!

The second path as mentioned before is correct and expected because the program and the database are in the same directory.

To be able to get it to work, I have to use: "sqlite3_open_v2(path, &db, SQLITE_OPEN_READWRITE, NULL)", to open the database.

Thanks all. Up to the next hurdle on this rather bumpy road.
12:09 Reply: sqlite3_prepare_v2 does not seem to work. (artifact: f65b89a0f3 user: anonymous)
@ Larry Brasfield,
This remark: "Is the current directory for your program's process the same as the directory in which "the_insert.db" resides? And is that the very same database that you are sure has a table named "someTable"?" got me thinking, and looking in the directory where Xcode puts the executable.

Guess what!

1. Database and path created by Xcode:
/Users/janhkila/Library/Developer/Xcode/DerivedData/SQLite_Multiple_Insert_00-dksaovkjvqrrrzdodwcmnprrbxey/Build/Products/Debug/the_insert.db

2. Database and path created by me:
/Users/janhkila/SQL-Lite/the_insert.db

So, there are two identical databases, "the_insert.db", both with a table "someTable", only in different directories.
Thank you very much, Larry! And of course Mr. Apple/Xcode. Well, not really.
12:03 Reply: Building a DIY libsqlite3.so (artifact: 55ec2bce51 user: anonymous)
Success!!! After a small victory dance I sat down to check the version of the freshly compiled lib. You've probably guessed it — dramatic pause — v3.33.0.

It's a déjà vu… (and a Freudian race condition of several anxieties.)
11:29 Reply: Update Query (artifact: 54a2577694 user: adrian762)

Ryan

Thank you for that. Your answer was much more sympathetic. As stated, I thought it was just a case of learning how to update multiple items. I will put the effort in, and sincerely thank you for a more cogent (and, dare I say, non-pompous) answer.

You have explained that it is a really difficult problem which, at the time, not knowing better, I thought was easy.

I will read and understand.

Once again, thanks for your time and understanding

adrian

10:45 Reply: Building a DIY libsqlite3.so (artifact: 22fe703d6a user: wyoung)

Add -lm to the end of the build command.

10:30 Reply: Building a DIY libsqlite3.so (artifact: 8fb7c115ea user: anonymous)
Thank you. I wasn't sure if that was the correct library because I ended up with this:

/usr/bin/ld: /tmp/ccYRFinB.o: in function `fts5Bm25Function':
sqlite3.c:(.text+0x3486f): undefined reference to `log'
collect2: error: ld returned 1 exit status

I am sorry that I am doing this step by step, but I want to be sure I didn't miss something that is important.
10:17 Post: How to get the extension path? (artifact: 7ffc71acf0 user: anonymous)

Hi,

I’m building a SQLite loadable extension, and this extension will read some other txt files. I will put the extension and txt file in the same folder, but how should I get the extension’s abs path?

I searched the web; different platforms have different solutions. I'm wondering if there is a unified way in SQLite? Or what's the recommended way to do this?

Thank you so much!

10:00 Reply: Building a DIY libsqlite3.so (artifact: 835cc1e024 user: larrybr)

When you #define HAVE_READLINE, you should have the readline library. On Ubuntu, installing the package libreadline-dev suffices.

09:43 Post: How do I create a "rolling cursor" ? (artifact: 3cdcf846ae user: RWieser)

Maybe my last question was too specific. So, I'm asking it a bit more broadly:

How do I write a "rolling cursor" query with which I can browse(1) a random sqlite3 table?

(1) Meaning that I, while browsing, can change the direction at will. A few page-downs, then a page-up or two, and then page-downs again. You know, like flipping through a book.

A simple question, yes? No, not really. :-|

.

I know that I can "just" provide all fields of the current record and then use a WHERE to get the next or previous set of records, but ...

It has to keep working with columns containing NULL entries.

It has to keep working even when the stored datatype doesn't match that of the column.

I need to be able to sort on at least one column - even when that column contains screenfuls of rows containing the same data. (Using the rowid as the last WHERE column is an easy solution - but that doesn't work for a 'without rowid' table.)

Also, I have no intention of storing up to a meg's worth of (possible) data for each displayed field just so I can send that data back in the next query. IOW, I will store/display a limited amount of data for each field, and the search query needs to reflect that.

08:04 Reply: Building a DIY libsqlite3.so (artifact: 6d0a8014f7 user: anonymous)

This line: "Add -DHAVE_READLINE and the -lreadline and -lncurses libraries to get command-line editing support." That's exactly what I want. So I used this command:

gcc -Os -I. -DSQLITE_THREADSAFE=0 -DSQLITE_ENABLE_FTS4 -DSQLITE_ENABLE_FTS5 -DSQLITE_ENABLE_JSON1 -DSQLITE_ENABLE_RTREE -DSQLITE_ENABLE_EXPLAIN_COMMENTS -DHAVE_USLEEP -DHAVE_READLINE shell.c sqlite3.c -ldl -lreadline -lncurses -o sqlite3

Seems to me that I checked all the boxes.

07:52 Reply: Building a DIY libsqlite3.so (artifact: b1779cb9c0 user: wyoung)

The end of section 2 of that doc covers readline.

07:43 Reply: Building a DIY libsqlite3.so (artifact: 26273ec99f user: anonymous)
That is the one I found. Unfortunately, I ran into a snag.

shell.c:122:11: fatal error: readline/readline.h: No such file or directory

The directory is not in the snapshot. Oopsie?
07:10 Edit reply: Building a DIY libsqlite3.so (artifact: e0825a6e50 user: wyoung)

The docs?

(First result from a web search for "build sqlite", by the way.)

07:09 Reply: Building a DIY libsqlite3.so (artifact: a454bd6c44 user: wyoung)
07:03 Post: Building a DIY libsqlite3.so (artifact: 4ffbd23def user: anonymous)
I've got the source code (snapshot 202101022356) and built SQLite3 v3.35.
Nice!

Now I want to build a new shared library (with and without debug information). Is there a procedure I should follow?
05:37 Edit reply: Tracing why SQLITE is making an index decision (artifact: 0c26a9d9ea user: wyoung)
  1. You're probably using different versions of SQLite on these two machines. A given SQLite version should be deterministic in its behavior.

  2. In a recent version of the shell, give the ".eqp on" command and issue your query. Post the results.

  3. Ditto, but give the ".expert" command first. It will give a simplified version of the prior command's output, more aimed at the end-user finding their own solutions than having others give them answers.

05:36 Reply: Tracing why SQLITE is making an index decision (artifact: 16ea7c5744 user: wyoung)
  1. You're probably using different versions of SQLite on these two machines. A given SQLite version should be deterministic in its behavior.

  2. In a recent version of the shell, give the ".eqp on" command and issue your query. Post the results.

  3. Ditto, but give the ".expert" command first. It will give a simplified version of the prior command, more aimed at the end-user finding their own solutions than having others give them answers.

05:28 Edit reply: Convert SQLite to JSON (artifact: 13b5a9f707 user: wyoung)

Given SQLite 3.33.0 or higher:

  C:\> sqlite3 -json file.db "select * from MyTableName" > file.json
05:28 Reply: Convert SQLite to JSON (artifact: 353d837aff user: wyoung)

Given SQLite 3.33.0 or higher:

  C:\> sqlite3 -json file.db "select * from MyTableName"
05:04 Reply: sqlite3_prepare_v2 does not seem to work. (artifact: 02c7e692ee user: anonymous)
Well, I activated the code that uses the SQLite prepare_v2 API function. All works fine except that this introduced two new memory leaks.

To avoid a discussion, here is the destructor:
    ~DBase()
    {
        sqlite3_free(prepared_SQL_stmt);
        sqlite3_free(ErrMessage);
        sqlite3_close(DB);
    }

This may provoke the ire of the SQLite team:
Executing sqlite3_free() calls without a sqlite3_malloc() is weird.
But I will do (almost) everything to deal with the memory leaks. All the leaks are related to the libsqlite3.so.0.8.6 (SQLite v3.30) and the address 0x483C7F3 (this is a dynamic address of this shared library) that executes a malloc().
03:15 Post: Tracing why SQLITE is making an index decision (artifact: e5930ce14f user: deonb)

I have two almost identical machines, each running almost exactly the same SQLite3 database. Each of the machines has the same 2 indexes. On one machine it's choosing one index, on the other it's choosing the other index to satisfy the same query. And it eludes me as to why.

The query is a simple:

select Id from Table where DateStart > xxx and DateEnd < yyy and Flag = 1

The index that I want has a 'WHERE' clause:

DateStart, DateEnd WHERE Flag = 1
The other index (that's chosen on one device but not the other) just has DateStart and a bunch of unrelated columns.

Both of the databases have had ANALYZE run on them. The sqlite_stat1 entries for the two indexes are identical.

Is there perhaps a way I can list the set of index choices SQLite has for a table and see why it's preferring the one index over the other? (Even a debugger with a breakpoint would help.)
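
For reference, the preferred index and the check look something like this (names are simplified and illustrative, not my real schema):

CREATE INDEX Idx_DateFlag ON "Table"(DateStart, DateEnd) WHERE Flag = 1;

EXPLAIN QUERY PLAN
SELECT Id FROM "Table" WHERE DateStart > :xxx AND DateEnd < :yyy AND Flag = 1;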

00:47 Reply: Update Query (artifact: f229f464d2 user: cuz)

Adding to what Larry rightly said:

Please do not be dismayed, we honestly want to help, it's our favorite thing to do.

The problem you are trying to solve (in this thread) is a difficult problem. It requires an in-depth understanding of some specific SQL principles. So asking us for the answer, but at the same time saying "please be gentle, I'm new to SQL" leaves us with not much we can do.

It's like if you asked us to help you set up an experiment to study the quantum tunneling effect in transistors under 7nm, but then added: "please keep in mind I'm new to electronics and physics, simple solutions only pls.". What can we possibly tell you to help? Almost any solution we propose requires years of study to just understand the words we would use.

My solution was to point you to literature where your problem was already solved, exactly in the easiest way possible (which is also what Larry tried to point out). Saying that you do not understand the literature is understandable, but not acceptable. You need to redouble the effort to read it and understand it, and if needed, perhaps read up on other things mentioned in that literature so that you form a better understanding of the concepts they use. There is no simple, quick or easy way. Put another way: We can point out and help with problems in your understanding of the subject, but we cannot understand it for you.

If you really want to learn your own way, I suggest putting down the DB, and try coding your own method in your own programming language of choice for relating binary trees (such as human ancestry) into a structure/linked list/dictionary/whatever. Once you understand how hard that is (or rather, once you see the light on what would be the easy way to do it) you will probably suddenly understand the article I linked to and also then realize why that is by far the easiest way, even if it seemed complicated at the start.

Once you understand all that, you will come back here and ask about the best way to do that in SQLite, and you and us will be on the same page and we will be able to help you, for then you will understand SQL graphs in principle and we can simply help with the implementation detail.

As an aside...

For someone who is a beginner, you sure chose a hard first hill to climb. We would in no way discourage you though, we say please do it, but do it right, put in the effort to learn, it will pay great dividends in the long run. Lastly, at the risk of sounding like my Dad: Once you wrestle through learning the problem and it finally clicks, the confidence you will have gained from that to tackle the next difficult problem with, is of immeasurable life value.

We all assume (perhaps unjustly) that the above is understood by people who post here, so out of respect for readers, we try to keep our answers short and to the point (unlike this one!). Please know that our answers are rarely intended as put-downs[1]; we simply wish to keep it short, technical and unfluffy.

[1] Unless a putdown offers comedic value or more friendly replies went unheard. :)

00:38 Reply: Convert SQLite to JSON (artifact: 2f1dd2dacb user: d3x0r)

Depends on what you're familiar with; Node.JS with sack.vfs can do it pretty simply... just open a database const db = sack.Sqlite('db.db') and select something... JSON.stringify( db.do( 'select something from a_table'))

https://www.npmjs.com/package/sack.vfs

https://github.com/d3x0r/sack.vfs/blob/master/tutorial/tutorial-sqlite.md

2021-02-25
23:48 Reply: Update query update (artifact: 5cf39b4c1d user: cuz)

We'd like to help, but we are busy people. If I have to go search the forum for your previous post, I'm quite likely to just skip it.

Also, remember that we do not know you, we do not know your project or your code; we have extremely little to go on. We very much depend on you giving us an extremely clear view of your problem, else reading your question leaves us with many more questions than obvious solutions - and typically we do not want to play 20 questions to tease out of you exactly what is going on.

What we do however want to do, sincerely so, is help solve your problem. Why else would we bother reading this forum? Your post however doesn't provide us with a problem to solve, but rather a list of things we have to do/ask to get full info, so it's much easier to just move on to the next post.

Please feel free to edit the above post (or add a new reply to it) to restate the problem more clearly. Show your schema (your DB's CREATE statements) and any SQL you've tried that doesn't work, or that works but is not doing exactly what you want.

That way we understand what you are trying to achieve, what you have tried, and can hopefully assist with better ideas or SQL statements that work.

23:06 Post: Convert SQLite to JSON (artifact: a087daa48e user: anonymous)

Hello! I'm trying to convert SQLite tables into JSON files from the Windows command line (but completely from the command line; I need to create a batch file later) without success. Can anyone explain which would be the correct command to achieve this? Thanks!

22:05 Post: Ability to: SELECT * EXCEPT(some,columns) (artifact: 4eda933733 user: anonymous)

Is there any plan to add the feature of selecting all columns except certain ones?

Google's BigQuery has this feature and it's supremely handy at times when you want to grab dozens or hundreds of columns from a large table while excluding only a handful. Otherwise you'd have to explicitly write out all the columns.

Example:

-- Select all columns except "COLUMN_ONE"
SELECT * EXCEPT(COLUMN_ONE)
FROM SOME_TABLE
20:49 Reply: sqlite3_prepare_v2 does not seem to work. (artifact: dea113dc15 user: larrybr)

You should pass SQLITE_OPEN_READWRITE as the 3rd argument to sqlite3_open_v2(). The problem is that the older sqlite3_open() API auto-creates a database, but it begins life as an empty vessel.

But this is no help at all because I can assure you that this table is in place.

Doubtful. However, make your case that the said table truly is in the database opened or auto-created with your sqlite3_open() call. I can assure you that there are one or more holes in that argument.

Is the current directory for your program's process the same as the directory in which "the_insert.db" resides? And is that the very same database that you are sure has a table named "someTable"?

20:12 Reply: sqlite3_prepare_v2 does not seem to work. (artifact: b1de49f3c4 user: anonymous)
The message returned is: "SQL logic error: no such table: someTable".
I had that message before when I used only: sqlite3_errmsg(db).
But this is no help at all because I can assure you that this table is in place.
Trouble is, I can't put a picture here to prove it.
There can of course be a "SQL logic error", but if you see it, tell me please.
20:01 Reply: Update Query (artifact: 5af49e7ea7 user: larrybr)

I'm going to overlook your feelings as an issue for you alone to deal with. I will only say that my focus has been on the facts here without regard to what you might wish they were or what regard you might wish to have.

The article to which Ryan referred you is well written and covers the same problem you seek to sort-of-solve. If you focus on the problem first, then on the solution details and mechanics, you may develop a more useful conceptualization of what you are trying to do at a high level. (How to set a value within a row is the least challenging of the issues.) Your own problem will become easier to deal with if you can get your mind around the graph concept and its representation(s) in relational databases.

You gave no sign that you tried "Ryan's answer". If you had done that, either you would be well on your way toward a solution or you would have some implementation questions.

You have been given plenty of slack. There is not an easy answer. There are only more-or-less harder answers. The approach you insist upon is like digging a ditch with a teaspoon. If you go to a ditch-digger's forum asking, "How can I best fling dirt with my teaspoon?", you are unlikely to get genuine tips on that.

I'll leave you with a rhetorical question: How is your representation going to deal with my 2nd uncle who is also my 2nd cousin? (Don't answer for me; answer for yourself.)

19:47 Reply: DBStat returns wrong values for compressed databases (artifact: d0785d4276 user: dan)

pgoffset and pgsize describe the compressed page as stored on disk. The other fields describe the uncompressed page in memory.

Dan.

19:06 Reply: Update Query (artifact: ae9a2b354e user: adrian762)

Larry

As a beginner I found the answer Ryan gave difficult to understand. I am sorry to have caused work for people - it was not my intention. I was aware of, and did try, Ryan's answer, but maybe didn't understand it, and there was no unwillingness there.

As a beginner I had hoped I might be given some slack and support, and that there was an easy answer. To me, not knowing, it seemed like an easy answer.

I did feel your reply was a putdown and not the tone of support I had hoped for. Thank you for taking the time to reply.

adrian

18:54 Reply: Reporting performance issue (artifact: 065b5b63dc user: kmedcalf)

Naive changes to support the two optimizations above fix most of these examples. But there are a couple of tricks. For example, query 1753 features the equivalent of:

CREATE TABLE t1(a, b);
CREATE TABLE t2(x, y NOT NULL);
SELECT * FROM t1 LEFT JOIN t2 WHERE t1.a=t2.x AND y IS NULL;

In the query, the LEFT JOIN can be transformed to a regular join (because t1.a=t2.x implies that t2.x cannot be NULL) which allows the database to determine that "y IS NULL" will always be false. But currently, by the time SQLite realizes it can transform the join it's tricky to tell exactly which LEFT JOIN (there may be many in the query) column "y" came from.

I would say that this case (and the one following) is even simpler. There is no ON condition attached to the RHS table of the left join, which means that the join cannot be an outer join and must be an inner join, and this can be determined even before examination of the WHERE clause.
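
A sketch of the resulting equivalence (my own illustration, not planner output), given the tables quoted above:

SELECT * FROM t1 JOIN t2 ON t1.a=t2.x WHERE y IS NULL;
-- Once the LEFT JOIN is strengthened to an inner join, and given that t2.y is
-- declared NOT NULL, the "y IS NULL" term can never be true, so the statement
-- can be shown to return no rows.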

18:43 Reply: Update Query (artifact: a8f4177c22 user: larrybr)

Thanks for answering.

Perhaps a little more explanation would help.

Ryan gave you the best advice you are likely to get, yet you go on to explain how you are trying to solve a low-level problem that vanishes when the graph concept is understood and used appropriately.

To be blunt: You are trying to solve the higher level, "real" problem with a schema and approach that is unsuitable to the task. If I had hours to spare, not better used more productively, I could explain why that is true. You may see it yourself if you look carefully at the link Ryan provided and think about the problem and how the approach and schema described there map to the real-world data you intend to model, manipulate, and query.

It's not an easy problem, so going about it the hard way is not going to draw much interest here, particularly when you demonstrate an unwillingness to consider the ways others have solved similar problems.

BTW, if you post any more SQL (or other code), then please learn to use the <code> and </code> tags. (Click the source button to see an example. Or see the markdown rules.)

18:28 Reply: Reporting performance issue (artifact: a09916531a user: jinhojun)

We appreciate your feedback on our report.

From our evaluation, we have other queries that also showed a slow-down. Based on your feedback, we will apply a filter rule to remove all queries that have the same reason described above, and then we will report new cases if they are not from the same root cause.

Thanks, Jinho Jung

18:18 Reply: Update query update (artifact: 960bc150b7 user: larrybr)

When you have something meaningful to add to an existing thread, you do others and yourself a favor by keeping the additions within that thread. Context is easier to see that way. "See previous post" is extra effort.

Generally, in "Help Available" fora, it is considered rude to create new threads just to boost visibility. That's just one effect; another is to motivate other participants to tune out your posts.

Upon reading your posts, I think you and others willing to help you would be aided by studying "How do I ask a good question?"(https://stackoverflow.com/help/how-to-ask), or other write-ups on the topic.

17:40 Post: DBStat returns wrong values for compressed databases (artifact: 75309c110a user: anonymous)

When using dbstat on a database which has been compressed (tested with ZV-zstd and ZV-zlib), dbstat returns wrong values (e.g. unused is bigger than pgsize).

name           path  pageno  pagetype  ncell  payload  unused  mx_payload  pgoffset  pgsize
sqlite_master  /     1       leaf      4      1041     64368   617         4946      487
table1         /     3       leaf      25     291      65137   22          1010      382
table2         /     4       leaf      1      2        65519   2           2001      36
table3         /     5       leaf      4      442      65074   111         4723      217
sqlite_stat1   /     6       leaf      3      92       65424   37          5482      115
17:27 Post: Update query update (artifact: b47f95010f user: adrian762)
Hi all

I posted about an update query a few days ago, and as it has dropped down the list I thought I would post again in a slightly different format.

My problem in a nutshell: I can query two tables and I want to update, say, just eight of the table's values. They are in the result of the query. When I try to update, it says it can only update the first one in the list.

If I could do a loop in SQLite I could do it.

Please can someone help so I can update, say, these 8 items. The eight that are selected depend on the query. I can update everything in the table, which is not what I want to do.

As a beginner I have spent days on this and I am sure it can be done.

For more details, please see the previous post.

thanks in advance for helping

Adrian