SQLite Forum

Timeline

50 events by user casaderobison occurring around 2021-10-26 21:30:05.

2021-12-16
17:51 Edit reply: Can I assume that the TRIGGER runs from within my transaction? artifact: cdb81b8c60 user: casaderobison

I'm not above chain yanking on occasion, I was just trying to interpret the words I was reading. I kind of thought the same when you said "yes" to my reformulation of the updated question, since I posted an either / or scenario. My interpretation of that was "yes, one of those two things happens, and I'm not committing to one or the other, which means it is undefined as to which path is taken". It is not unlike how I'll answer questions from my children at times. "Can we do X or do we have to do NOT X?" they might ask, and I'll reply "Yes."

Note that I've rarely used triggers, and didn't look up documentation, was just trying to be helpful by clarifying what I thought the question really was and what your answer really meant.

17:50 Reply: Can I assume that the TRIGGER runs from within my transaction? artifact: ac7c56de91 user: casaderobison

No worries! If this is the worst thing that happens to me today, it'll have been a really good day. :)

08:52 Reply: Can I assume that the TRIGGER runs from within my transaction? artifact: d9507d24f8 user: casaderobison

So to summarize: If one wants to ensure that a SQL query and a related trigger are run within the same transaction, put them in an explicit transaction. Anything else is presumably an implementation detail which could in theory change in the future and is not guaranteed, much as row order from a query is not guaranteed unless one explicitly requests a given order.
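
A minimal sketch of that summary, using Python's sqlite3 module (the tables t and audit and the trigger are hypothetical): with an explicit transaction, the statement and the trigger it fires are committed or rolled back as one unit.

```python
import sqlite3

# isolation_level=None: autocommit mode, so transactions are managed explicitly.
con = sqlite3.connect(":memory:", isolation_level=None)
con.executescript("""
    CREATE TABLE t(x);
    CREATE TABLE audit(x);
    CREATE TRIGGER t_ins AFTER INSERT ON t
    BEGIN
        INSERT INTO audit VALUES (NEW.x);
    END;
""")

con.execute("BEGIN")                        # explicit transaction
con.execute("INSERT INTO t VALUES (1)")     # fires the trigger inside that transaction
con.execute("ROLLBACK")                     # undoes the row AND the trigger's audit entry

print(con.execute("SELECT count(*) FROM audit").fetchone()[0])  # 0
```

The rollback removes the audit row as well, which is the guarantee the explicit transaction buys you.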

08:36 Reply: Can I assume that the TRIGGER runs from within my transaction? artifact: fbaa14f807 user: casaderobison

All true, but I think the gist of the question was: "In the case of automatic transactions, will my SQL statement run in the same transaction as the trigger, or will the SQL statement be run in one transaction which is committed before the trigger runs in a separate transaction?"

2021-12-12
15:28 Reply: SQLite3 select (with table scan) not saturating NVMe throughput artifact: dac45863a4 user: casaderobison

Only the fields aaa and ccc are set. ccc contains about 1.2 kb of data per row.

I believe this probably best explains your speed numbers. Someone will correct me if I'm wrong, but the way SQLite encodes data tries to keep all the row data together, only spilling over to other pages if necessary. A row of data in subdoc is going to be about 1.3k including overhead for empty columns. I believe the default page size is currently 4k, so you're only going to be fitting about 3 rows per page. With 64k pages, that will be improved to about 49 rows per page. More rows in each page means fewer pages and thus fewer page reads.
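
The rows-per-page arithmetic can be checked empirically. A sketch using Python's sqlite3 module, where the subdoc mock-up and the ~1.2 KB payload are assumptions based on the OP's description:

```python
import sqlite3

def pages_used(page_size, nrows=200, payload=1200):
    # Mock-up of the reported table: only aaa and ccc are populated,
    # with roughly 1.2 KB of data in ccc per row.
    con = sqlite3.connect(":memory:")
    con.execute(f"PRAGMA page_size = {page_size}")  # must be set before the first write
    con.execute("CREATE TABLE subdoc(aaa, bbb, ccc)")
    con.executemany("INSERT INTO subdoc(aaa, ccc) VALUES (?, ?)",
                    [(i, "x" * payload) for i in range(nrows)])
    return con.execute("PRAGMA page_count").fetchone()[0]

small, large = pages_used(4096), pages_used(65536)
print(small, large)  # far fewer (and larger) pages with 64 KiB pages
```

Fewer pages means fewer page reads for the same table scan, which is the effect described above.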

00:35 Reply: SQLite3 select (with table scan) not saturating NVMe throughput artifact: 097971a28e user: casaderobison

Given the numbers you report for various tests, I think we can safely say that SQLite is in line with what you are seeing. SQLite will be doing random reads, even if the pages are contiguous in the file. The fact that it is outperforming 4kb 1 queue random reads demonstrates that it is relatively efficient.

What size pages are you using? I don't remember reading that previously.

It would be interesting to know if vacuuming the database would improve performance.

2021-12-11
22:47 Reply: SQLite3 select (with table scan) not saturating NVMe throughput artifact: 6ca41ebe33 user: casaderobison

This sounds like a hardware issue (you were sold a "bill of goods" because you did not ask the right questions).

The OP writes that the stated performance has been measured by the OP, so it seems unlikely that manufacturer marketing is responsible for this situation. Possible, but unlikely. I think there just isn't enough information surrounding the use case, schema, and query to make any determination.

22:37 Edit reply: SQLite3 select (with table scan) not saturating NVMe throughput artifact: 79dd2dfbb9 user: casaderobison

It would be useful to see the schema and the query to have any hope of offering advice on how to increase performance. I suspect you just need a better index so the query planner can process data more effectively.

Edit: Oh, I see now your concern has more to do with the amount of bandwidth consumed, not the speed of the query. I still think more information will be useful, but I doubt any program that processes data will ever be able to max out the throughput of such a device. It takes time to read one or more pages to find the data, unpacking the rows along the way to do something useful with them.
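
As an illustration of what a better index does to a plan (the table t and its columns are invented for the example), EXPLAIN QUERY PLAN shows a full scan turning into an indexed search once a suitable index exists:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(a, b)")  # hypothetical table and column names
con.executemany("INSERT INTO t VALUES (?, ?)", [(i, i) for i in range(1000)])

def plan(sql):
    # Concatenate the "detail" column of the EXPLAIN QUERY PLAN rows.
    return " ".join(row[-1] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

before = plan("SELECT b FROM t WHERE a = 42")   # expect a full-table SCAN
con.execute("CREATE INDEX t_a ON t(a)")
after = plan("SELECT b FROM t WHERE a = 42")    # expect SEARCH ... USING INDEX
print(before)
print(after)
```

The exact wording of the plan text varies by SQLite version, but the SCAN-to-SEARCH change is the thing to look for.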

2021-11-25
16:20 Edit reply: Sqlite high IO load makes usage impossable artifact: 0c592dd114 user: casaderobison

So much this. A few years ago I had to refactor some code that was doing this very thing. Every query for a single row was built from scratch, then prepared, stepped, and finalized. A database update could take over 24 hours (update in this case meaning re-writing the database to change the schema between version 1 and version 2 of the application).

When I rewrote it to reuse queries across multiple rows, our migration time went down to 5 to 15 minutes for a 5 or so GB database.

There is a ton of overhead preparing queries. It is to your advantage to create queries that can be prepared once and used over and over again.
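
In C you would call sqlite3_prepare_v2 once and rebind per row; in Python's sqlite3 wrapper the closest analogue is a single parameterized statement driven by executemany inside one transaction. A sketch (the migration table m is hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE m(old_id, name)")

rows = [(i, f"name-{i}") for i in range(10_000)]

# Anti-pattern: building a brand-new SQL string per row defeats statement
# reuse (and, with literals baked in, invites injection bugs as well):
#   for old_id, name in rows:
#       con.execute(f"INSERT INTO m VALUES ({old_id}, '{name}')")

# Pattern: one parameterized statement, prepared once, rebound per row,
# all inside a single transaction.
with con:
    con.executemany("INSERT INTO m VALUES (?, ?)", rows)

print(con.execute("SELECT count(*) FROM m").fetchone()[0])  # 10000
```

The single transaction matters as much as the statement reuse: without it, each insert pays for its own commit.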

2021-11-21
00:07 Edit reply: IS NULL optimization on NOT NULL constraints breaks on corrupted databases artifact: afe0cc3b66 user: casaderobison

It seems to me that multiple people in this thread have fixated on the reproduction example he gave rather than the spirit of the report. The original reporter did encounter a legitimately corrupted database that did not involve the undocumented and unsupported trick of rewriting the schema; the writable schema was only used afterward to provide a simple test case. We always appreciate reproduction steps.

As a result, there was a dependence on NOT NULL on a column that by definition could not be NULL, but due to the bug was now NULL.

This results in a paradox. The schema says "this column can't be NULL" yet "something is NULL". So what is the query planner to do? The QP can do anything at all at that point because by some definition, anything it does will be wrong.

I think what SQLite is doing is perfectly acceptable and the real problem is the "legitimately corrupted database" they were dealing with. I can see an argument for "engine should always sanity check data" but then the query planner can never optimize anything because it can no longer make assumptions about validity. "NOT NULL" seems like a simple case, but one can imagine very complex CHECK constraints that would be even harder to verify at SELECT time for every query.

The most sensible thing is "INSERT is responsible for ensuring constraints are enforced" and that the rest of the engine can safely assume the data meets that criteria. If something is deliberately or accidentally corrupted, I don't think it should be SELECT/UPDATE/DELETE's job to enforce that.

But the question expressing surprise is not unreasonable, and the writable schema was a red herring.

2021-11-20
11:41 Reply: IS NULL optimization on NOT NULL constraints breaks on corrupted databases artifact: 07d271b96c user: casaderobison

To add to what drh wrote, from the documentation for sqlite3_get_table() (documentation he probably also wrote):

NULL values result in NULL pointers. All other values are in their UTF-8 zero-terminated string representation as returned by sqlite3_column_text().

So the documented interface says it is possible for the table to include NULL values and it does not allow wiggle room for "unless the schema" exceptions.

2021-11-19
22:38 Reply: Hot backup database in WAL mode by coping artifact: 8dfd8df07e user: casaderobison

Based on what you're describing, a "better" solution (for some fuzzy value of better) sounds like running a .dump to a pipe that can feed into a compression program. Do you have SQLite shell access or does this have to be internal to an application? If it must be internal, you could replicate what .dump does.
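
If it must be internal, the same effect as piping `.dump` into a compressor can be approximated with Connection.iterdump plus gzip in Python. A sketch, not a drop-in replacement for the shell's `.dump`:

```python
import gzip, io, sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(x)")
con.execute("INSERT INTO t VALUES ('hello')")
con.commit()

# Equivalent in spirit to:  sqlite3 live.db .dump | gzip > backup.sql.gz
buf = io.BytesIO()
with gzip.GzipFile(fileobj=buf, mode="wb") as gz:
    for line in con.iterdump():          # SQL text of the whole database
        gz.write((line + "\n").encode("utf-8"))

dump_sql = gzip.decompress(buf.getvalue()).decode("utf-8")
print("INSERT INTO" in dump_sql)  # True
```

The compressed SQL text is a logical backup: restoring means replaying the statements into a fresh database.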

17:58 Reply: Hot backup database in WAL mode by coping artifact: 61cabcc8dd user: casaderobison

If you have a file X that is 1 MiB in size, and you make a copy of it, you now have 2 x 1 MiB files.

Why does the method of copy matter? Whether you use copy or cp or the backup API or VACUUM, I am not sure I understand the difference. Is it just that, because of the "live DB + journal" overhead, you briefly have twice as much data on disk as you want?

2021-11-17
18:33 Reply: Hot backup database in WAL mode by coping artifact: d19a1b562e user: casaderobison

Have you considered VACUUM INTO 'filename'?
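
For reference, VACUUM INTO (SQLite 3.27+) writes a compact, consistent snapshot of a live database to a brand-new file; the destination must not already exist. A minimal Python sketch:

```python
import os, sqlite3, tempfile

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(x)")
con.execute("INSERT INTO t VALUES (1)")
con.commit()

# The INTO target is an SQL expression, so a bound parameter works.
dest = os.path.join(tempfile.mkdtemp(), "backup.db")
con.execute("VACUUM INTO ?", (dest,))

copy = sqlite3.connect(dest)
print(copy.execute("SELECT x FROM t").fetchone())  # (1,)
```

Unlike a raw file copy, the snapshot is taken under SQLite's own locking, so it is consistent even while the source is in use, and it comes out vacuumed.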

2021-11-13
00:33 Reply: Making the SQL standard public artifact: 244a668e02 user: casaderobison

I'm not really opposed to charging for documents, but it should be a fairly minimal charge. PDF versions of documents should be competitive with what you would pay for a printed book of comparable size.

2021-11-03
21:00 Edit reply: Get SQLITE_READONLY when using UPDATE command artifact: c4f57ce7d6 user: casaderobison

Forward slash was supported by the DOS API starting with version 2 of DOS, as previous versions of DOS did not support hierarchical directories (as Keith noted in a subsequent post).

IBM's desire to have the forward slash serve as the command line switch character starting with DOS 1.whatever meant that you could type things like:

A>DIR/W

And it would be recognized as a DIR command followed, with no intervening whitespace, by a command argument.

As the customer, IBM did not want DOS 2 to break the convention that many people had come to expect, and allowing the forward slash as a path separator would have added ambiguity to command line parsing: does DIR/W mean show a wide directory listing, or does it mean run the W program in the subdirectory named DIR?

So yes, the engineers at MS knew of the convention that forward slash was a path separator, but they were enhancing a system based on CP/M conventions and IBM imposed requirements. It's always been an issue from the command line, but never from the API from the time DOS began supporting hierarchical directories.

2021-11-02
07:33 Reply: NFS 4 that is guaranteed to work with Sqlite artifact: 60f5237a7c user: casaderobison

I don't know what the team's answer will be, but I think the answer is "they cannot offer any guarantee for any system they do not control". NFS 4 may have a standard that requires compliance with correctly implemented lock implementations, but there is nothing stopping some team from writing an implementation that cuts corners to get ahead of the competition.

The POSIX standard requires that malloc return NULL when a block of the requested size cannot be allocated. Linux by default ignores that requirement (its memory overcommit lets allocations succeed anyway), leaving programs to believe they have all the resources they need.

If an NFS 4 implementation violates the requirement, or subsequently breaks the requirement, there isn't really anything the SQLite team can do to help provide a guarantee.

2021-10-26
21:32 Edit reply: Configuring SQLite to wait indefinitely artifact: fbe2f75316 user: casaderobison

Based on the phrasing, it seems what is wanted is a "wait forever" that semantically says what is being sought vs "wait for a really long but finite time". Something that in code would be immediately obvious as the difference between "wait until something is available" and "wait until something is available or the timer elapses".

To that end, defining a custom busy handler would be the way to do that as described.

2021-10-20
18:26 Edit reply: INDEX usage on inequality and IN artifact: dd71bd32c5 user: casaderobison

Others have already mentioned analyze so I won't repeat that here.

One other complication though for using an index when using inequality matching is "where is the inequality in the set". Using your example, let's say type has four possible values, 1, 2, 3, and 4.

SELECT * FROM table WHERE type != 1
SELECT * FROM table WHERE type != 2

The first of these example queries will return all rows where type is 2, 3, or 4, which would be a contiguous set of index entries.

The second of the examples will return all rows where type is 1, 3, or 4, which would be two sets of index entries (get all rows that match type 1, skip all rows that match type 2, then get all rows that match type 3 & 4). Certainly this can be done, but it complicates the query planner based on a number of variables, especially if using prepared statements.

One way to rewrite the query:

SELECT * FROM table WHERE type < 2 OR type > 2

That might better indicate to the query planner that it can divide it up into two subqueries. Or perhaps even better:

SELECT * FROM table WHERE type < 2
UNION ALL
SELECT * FROM table WHERE type > 2

There are other possibilities that have been recommended in other responses.

In the end, you can't assume that the query plan is legitimate unless you are running it against the actual data. It is not enough to simply have a schema and explain a query plan, as the SQLite query planner takes a lot more information into account.
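
A quick sanity check that the three formulations above return the same rows (the table is named t here because "table" is a reserved word, and the data is made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(type)")
# Five rows each of type 1, 2, 3, 4 -> 20 rows total.
con.executemany("INSERT INTO t VALUES (?)",
                [(v,) for v in (1, 2, 3, 4) for _ in range(5)])

q1 = con.execute("SELECT count(*) FROM t WHERE type != 2").fetchone()[0]
q2 = con.execute("SELECT count(*) FROM t WHERE type < 2 OR type > 2").fetchone()[0]
q3 = con.execute("""SELECT count(*) FROM (
         SELECT * FROM t WHERE type < 2
         UNION ALL
         SELECT * FROM t WHERE type > 2)""").fetchone()[0]
print(q1, q2, q3)  # all 15
```

Equivalent results do not imply equivalent plans, which is the point: only EXPLAIN QUERY PLAN against real, analyzed data tells you which formulation the planner handles best.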

2021-10-19
15:37 Edit reply: INDEX usage on inequality and IN artifact: 49c57f4368 user: casaderobison

Imagine you have a table with 1000000 rows. The id column is guaranteed to be unique. So when you search for a specific id value, SQLite knows that it can perform a search in logarithmic time by using the index to find the one true record.

When you search for all rows that don't match a single id value, SQLite knows that you will match either all rows or all but one row. Thus the advantage to using the index does not exist when your query will return N or N-1 rows.

The same holds true for the type column without the uniqueness guarantee. The query plan might be different based on statistics, but at this point in time that is the best information that SQLite has available.

2021-10-18
01:01 Reply: Column names and aliases in expressions artifact: cf3e43fb16 user: casaderobison

I know not all people object to "colorful metaphors" and language that is considered by some to be out of place in polite society, but you are a brilliant person, Keith. I personally would applaud you applying your writing skill to choosing more universally acceptable words when interacting in the community. I know I've seen such posts edited in the past, so I know I am not alone in this.

2021-09-22
18:46 Reply: cannot start a transaction within a transaction using sqlite shell artifact: aebedf4f85 user: casaderobison

Probably not since the queries are just being fed into the SQLite shell.

15:53 Reply: sqlite3 CLI .expert mode returns “Error: not an error”, if suggested index name is created, but does not match suggestion artifact: 0d66575cd4 user: casaderobison

I didn't think to try expert mode, but recently had to troubleshoot a query issue at work where they wanted a query to run every 1/10 of a second on a largeish dataset. Unfortunately, the query was taking about 1.1s to run, so just a little outside the 0.1s desired window. I was able to use explain query plan to figure out to add an index to each of two tables that got the query from 1.1s to 0.0001s. I should go back now and see what expert mode suggested.

2021-09-18
05:56 Edit reply: about "strict" mode artifact: 1f20993829 user: casaderobison

Keith, I don't know if you do this because you like to argue or because you aren't reading all the provided information. The original statement might have been imprecise, but pedantry doesn't help solve problems. The genesis of this thread branch would cause any reasonable person to infer that a table lacking an INTEGER PRIMARY KEY column lacks stable ROWID values across a VACUUM. This is a true statement: they aren't stable, whether the row was inserted with an explicit value or a SQLite generated value:

sqlite> create table a(b);
sqlite> insert into a (rowid,b) values (42,'foo');
sqlite> insert into a (b) values ('bar');
sqlite> select rowid,* from a;
42|foo
43|bar
sqlite> vacuum;
sqlite> select rowid,* from a;
1|foo
2|bar

Both ROWID values changed because there is no INTEGER PRIMARY KEY in the table, but stable ROWID values are required for some use cases.

I know you've said as much, but the way you "debate" the issue is confusing as you appear to contradict yourself. If the session extension didn't require a stable ROWID (as I understand an earlier reply) then this would be moot. But someone wants to use it with a different PRIMARY KEY and cannot. All the arguing in the world about how people should do things or insulting their lack of aptitude with how they use the tools provided doesn't solve the problem in this case. Sure, they could write their own equivalent to the session extension and circumvent this problem. But then they could also write their own SQL engine, and their own C standard library, text editor, operating system, and everything else in the software stack.

Presumably they are here to leverage the excellent resources available so they don't have to reinvent the wheel, and what (to them, anyway) is a minor tweak to improve utility is not an unreasonable suggestion. That doesn't mean it must be implemented, but everything else presented since then is a distraction from the original suggestion and is increasing the noise and depressing the signal.

01:24 Reply: about "strict" mode artifact: 24ba2e0241 user: casaderobison

From https://www.sqlite.org/lang_vacuum.html:

"The VACUUM command may change the ROWIDs of entries in any tables that do not have an explicit INTEGER PRIMARY KEY."

Where one might define stable as "firmly established and unchangeable".

2021-09-17
00:28 Reply: Javascript enforcement artifact: 33814009b0 user: casaderobison

+1

2021-09-13
07:35 Reply: Wishing CLI could be usefully embedded artifact: bcbeda858f user: casaderobison

And anyone is free to refactor it into a useful library. It is in the public domain, after all. Promoting anything into the core API brings a long term commitment to maintain it.

As for it being an "ugly kludge", that depends on what they're looking for. It is used to good effect to provide the sqlite interface for fossil, so that maintenance tasks that might not be first class features in fossil are still possible. And so it is for SQLite: the shell is a place for "second class" features that provide various kinds of utility but either don't belong in the core API or are not yet ready for it. I don't know if the latter has ever come to pass (something introduced in the shell and later promoted to first class status).

02:09 Reply: Wishing CLI could be usefully embedded artifact: 8cd7a7a857 user: casaderobison

There are multiple dot commands that do not exist in the SQLite library but do exist in the shell. The ability to embed it in another application would expose that functionality without needing to pipe or otherwise redirect IO between processes. Some people prefer not to reinvent the wheel, even if it means taking a wheel off another car and using it on their own. Not unlike what fossil does to provide a sqlite command in its bag of tricks.

2021-09-10
22:59 Reply: 3.33.0 column mode breaks -noheader in CLI artifact: 8d0a0b3ada user: casaderobison

As for "prickly sometimes": you may already know this, but it isn't always clear to people just who is officially a member of the team speaking for the project, and who is just an opinionated person with a varying quality of bedside manner.

For that reason, it might be good for those who really do speak for the project to have some identification of that status. Then when I fly off the handle and insult someone's parentage*, they can safely ascribe my opinions to myself and not the project.

I do not mean to imply that non-team personnel shouldn't chime in and try to help! I have found that multiple sources of information can be helpful, because if I do not understand the first, I might understand the next. Just that it is useful to be able to filter out opinions of people that do not speak for the project from those who do. If I tell you "feature suggestion X is stupid and should never be implemented" it carries approximately no weight. If drh loves it and implements it, who cares what I think?

*I have no plans to insult anyone's parentage.

2021-09-09
19:43 Edit reply: SQLite3.exe fails to handle huge multi-line-comment artifact: 4b1b0e60c5 user: casaderobison

Absolutely. I wasn't being critical, just suggesting another reason why it would be a less than optimum idea to modify the shell to parse the text of SQL statements.

I understand it was a surprising situation, and you didn't "bother me". This is a forum where people share ideas and exchange information. If I were of a mind to, I could say "sorry for having bothered you by trying to share a rationale you might not have considered". But that would be a rude thing for me to do, so I won't.

It is sentences like your last that lead people to not want to be helpful, because it becomes simply easier to ignore than to engage. For the second time in recent history, I'm going to assume that was not your intent, but a phrase like "sorry to have bothered you" comes off as sarcasm, as though there is no real "sorry", and "my exceptional finding" reads as though you deserve high praise for reporting a finding that came from feeding pathological text into a library.

It is good to know this so that people can be aware of the issue in the future (though the fact that it has never come up before this report suggests there aren't many people trying to use it this way; nevertheless, it is suboptimal that it behaves this way with unanticipated text). It is a reasonable idea that in almost every other possible circumstance it would be preferable to improve the performance of the shell program when such behavior is found.

I thought another perspective might have been helpful. There is no more or less to my comment.

19:39 Reply: SQLite3.exe fails to handle huge multi-line-comment artifact: bba6337482 user: casaderobison

Absolutely. I wasn't being critical, just suggesting another reason why it would be a less than optimum idea to modify the shell to parse the text of SQL statements.

I understand it was a surprising situation, and you didn't "bother me". This is a forum where people share ideas and exchange information. If I were of a mind to, I could say "sorry for having bothered you by trying to share a rationale you might not have considered". But that would be a rude thing for me to do, so I won't.

It is sentences like your last that lead people to not want to be helpful, because it becomes simply easier to ignore than to engage. For the second time in recent history, I'm going to assume that was not your intent, but phrases like "sorry to have bothered you" come off as sarcasm, as though there is no actual "sorry", and "my exceptional finding" reads as though you deserve high praise for reporting a finding that came from feeding pathological text into a library.

It is good to know this so that people can be aware of the issue in the future (though the fact that it has never come up before this report suggests there aren't many people trying to use it this way; nevertheless, it is suboptimal that it behaves this way with unanticipated text). It is a reasonable idea that in almost every possible circumstance it would be preferable to improve the performance of the shell program when such behavior is found.

I thought another perspective might have been helpful. There is no more or less to my comment.

16:20 Reply: How to insert duplicate rows? artifact: 46a1265b14 user: casaderobison

Assuming C or C++, it has no effect. Those backslashes are superfluous as given, as the languages mandate that adjacent string literals are merged to create a single longer string literal.

UNLESS that is a copy and paste from a macro definition, in which case the backslashes would be important.

So ... as others have observed, too much missing context.

16:16 Reply: SQLite3.exe fails to handle huge multi-line-comment artifact: 8dd4bb93e5 user: casaderobison

To add on to what DRH said, the point of the shell is not to be a tool in its own right (though it is a valuable tool in its own right), but as a way to feed input into the SQLite library for processing. As such, the ability to unconditionally preprocess commands before they get to SQLite would diminish the ability to feed arbitrary input to the shell to pass on to SQLite for testing and processing. Some of that is unavoidable in as much as the shell already processes certain commands itself (the dot commands). But the rest is intended for SQLite to process and report outcomes.
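As a rough illustration of that split, assuming the standard sqlite3 shell is on the PATH: the multi-line comment below is parsed by the SQLite library itself, not stripped by the shell beforehand.

```shell
# Dot-commands (e.g. .tables) are handled by the shell program; all
# other input, comments included, is handed to the SQLite library.
printf '/* a multi-line\n   comment */ SELECT 1+1;\n' | sqlite3 :memory:
# prints: 2
```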

I agree it would be nice to have the functionality you suggest if it were not at odds with the purpose of the project. The fact that it is error prone and difficult to get just right is of course another very good reason to not re-create code to strip comments.

01:19 Reply: SQLite3.exe fails to handle huge multi-line-comment artifact: 12f1bec3ed user: casaderobison

33% slower? What is the basis for that claim? I must be missing something beyond a pathological case.

2021-09-06
15:21 Reply: The characters to be escaped in Sqlite3 to prevent Sql Injection artifact: 49b9aa4d48 user: casaderobison

It is possible that someone is experimenting with Markov chains...

2021-09-04
23:25 Reply: The characters to be escaped in Sqlite3 to prevent Sql Injection artifact: 83f791e7a7 user: casaderobison

Excellently put.

13:10 Reply: A suggestion: temp0000, temp0001.. artifact: 761f3f6361 user: casaderobison

I'm sorry, I wasn't trying to be rude. I was really just trying to understand the impetus of the original post. Without more context, I thought I might be missing something.

12:04 Reply: A suggestion: temp0000, temp0001.. artifact: cb461ec111 user: casaderobison

These are perfectly reasonable solutions for certain classes of problems. They work until they don't. In any case, the issue as I interpreted the original post was a suggestion that SQLite use this technique as a general purpose temp file generation mechanism, and I think that would be a mistake.

Is there some context I'm missing? Did someone ask about creating temp files in an application specific context? That is very different than a general post in a SQLite support forum which made me think you were calling for a change in SQLite.

2021-09-03
21:31 Reply: A suggestion: temp0000, temp0001.. artifact: 7f7a9b418a user: casaderobison

I'm working on only one hour of sleep in the last 36 hours, so I probably shouldn't even be posting. But here I am...

Catalog? What is this catalog that works across all supported operating systems and is able to provide the next number as an O(1) operation?

I'm sure I'll feel stupid for asking the question later after I get some sleep, but I've written a few file system drivers in my time and even more programs that traverse directory structures, and I'm not aware of a generally applicable, always up-to-date way to issue a query that returns the next number in a sequence in O(1) time.

Without a catalog, the cost is completely dependent on the underlying file system's storage hierarchy. Some file systems allow you to find a file (or identify its absence) in O(log N) time, but some file systems are just linked lists of directory entries. And if you are trying to find the next number in a sequence, you wind up making one of those O(log N) calls for each file that exists before you have a candidate you can try, and then you hope you win the race to open it before a peer process attempts and succeeds at the same thing.

2021-09-02
23:12 Reply: What happens if I define LONGDOUBLE_TYPE as double? artifact: 3528c4a846 user: casaderobison

All agreed. My point is that if there is no IEEE requirement for an 80 bit / 10 byte type, one can't be faulted for not upholding a requirement that doesn't exist.

I mean, there are plenty of things to complain about when it comes to Microsoft, up to and including "I don't like the way they don't provide a separate long double type with increased precision beyond double in their C family of compilers". My only point was that extra precision for long double, and their ancient historical use of "long double" for 80 bit extended precision values, are not requirements, even if they would be advantageous.

19:33 Reply: What happens if I define LONGDOUBLE_TYPE as double? artifact: 2d661e8c8a user: casaderobison

Unless it has changed in recent years (as IEEE-754 evolves with time), 80 bit extended precision has never been an "official" IEEE standard. Yes, it is compatible with IEEE guidelines, but it isn't a blessed rigidly standardized format like 32 bit single precision floating point and 64 bit double precision floating point. And "long double" is only required to be as precise as "double" ...

It would certainly be nice if "long double" supported more precision than "double" but none of the standards require that to be true.
