SQLite Forum

50 most recent forum posts

2021-06-18
21:22 Reply: Division by decimals wrongly returning a rounded int result (works fine when casting to real) (artifact: 53e0481de3 user: bellini)

Yeah, exactly that.

But I can actually do some "advanced stuff" like that. For example, I can do the multiplication that you described like this:

  from decimal import Decimal
  from django.db.models import F, Value

  T.objects.annotate(
    foo=(Value(Decimal("1.0")) * F("a")) / F("b"),
  )

This translates exactly to your suggestion, but since Django abstracts the SQL away I would not expect to need to cast anything to decimal, given that my columns are already decimal (which I now understand maps to NUMERIC and not REAL). If I were using postgresql/mysql/etc. (I use postgresql in production), I would not need that workaround.

Note that I use DecimalField (https://docs.djangoproject.com/en/3.2/ref/models/fields/#decimalfield) instead of FloatField in Django because it uses Decimal in Python and decimal types in postgresql, making sure I don't run into floating-point rounding issues.

Also, the Django documentation seems wrong, since it says that DecimalField uses REAL for sqlite (https://docs.djangoproject.com/en/3.2/ref/databases/#sqlite-decimal-handling). So either that doc is wrong or the implementation is wrong, something I'll also note on the issue I opened there.

20:19 Reply: Division by decimals wrongly returning a rounded int result (works fine when casting to real) (artifact: 19569c70bb user: casaderobison)

I think the issue is that Django abstracts away database access completely so that it is not a concern of the programmer; you don't really have the option of telling it what to do with this or that database engine.

19:42 Reply: Looking for Disk IO performance suggestions (artifact: d4bb4d13fb user: anonymous)

If your record size is much smaller than the page size, then maybe check whether you can group those inserts into transactions, potentially saving on partial-page waste.
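
For example, something along these lines (just a rough sketch; the table name and values are placeholders):

BEGIN;
INSERT INTO readings VALUES (1, 'aaa');
INSERT INTO readings VALUES (2, 'bbb');
-- ... many more small inserts ...
COMMIT;  -- modified pages are written and synced once per transaction, not once per statement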

Also note that what the WAL journaling mode saves during the insertion process is paid later during the checkpointing process.

19:28 Post: Looking for Disk IO performance suggestions (artifact: 14231e60b5 user: tg2298)
Hi 

I am new to SQLite3. Our products run on an embedded system that needs to write to the database 24/7, so disk I/O (write) throughput is critical for our eMMC devices' lifetime. Monitoring with iotop, we found that the system's I/O writes to disk are much higher than the actual data file size (20-40 times higher, depending on the journal mode), and that "WAL" mode has the smallest I/O write volume of all the journal modes; I had thought it would be the "OFF" mode. Also, it is odd that when using journal mode "off" or "memory", SQLite still seems to try to write the xxx.db-journal file.
See the following strace result:

......
15:40:41 open("/home/jie/Desktop/AXM-WEB2/sqlite3_test/testDB.db-journal", O_RDWR|O_CREAT|O_CLOEXEC, 0644) = 4
15:40:41 fstat(4, {st_mode=S_IFREG|0644, st_size=0, ...}) = 0
15:40:41 geteuid()                      = 1000
15:40:41 fstat(4, {st_mode=S_IFREG|0644, st_size=0, ...}) = 0
15:40:41 lseek(4, 0, SEEK_SET)          = 0
15:40:41 write(4, "\0\0\0\0\0\0\0\0\0\0\0\0\212\315\325\23\0\0\0\36\0\0\2\0\0\0\4\0\0\0\0\0"..., 512) = 512
15:40:41 lseek(4, 512, SEEK_SET)        = 512
15:40:41 write(4, "\0\0\0\10", 4)       = 4
15:40:41 lseek(4, 516, SEEK_SET)        = 516
15:40:41 write(4, "\n\0\0\0\t\3\224\0\3\364\3\350\3\334\3\320\3\304\3\270\3\254\3\240\3\224\0\0\0\0\0\0"..., 1024) = 1024
15:40:41 lseek(4, 1540, SEEK_SET)       = 1540
15:40:41 write(4, "\212\315\325\26", 4) = 4
15:40:41 lseek(4, 1544, SEEK_SET)       = 1544
15:40:41 write(4, "\0\0\0\36", 4)       = 4
15:40:41 lseek(4, 1548, SEEK_SET)       = 1548
15:40:41 write(4, "\r\0\0\0\1\3\37\0\3\37\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 1024) = 1024
15:40:41 lseek(4, 2572, SEEK_SET)       = 2572
15:40:41 write(4, "\212\315\325\32", 4) = 4
......

I wonder whether there are any other operations involved, or whether there are any suggestions for reducing the I/O write volume?

Thanks for your help.
18:02 Reply: Division by decimals wrongly returning a rounded int result (works fine when casting to real) (artifact: c383472562 user: cuz)

Seriously?

Django won't let you state a query like this:

SELECT (a * 1.0) / b FROM t;

?

If so, invest your time in another framework asap.

17:50 Reply: Division by decimals wrongly returning a rounded int result (works fine when casting to real) (artifact: 36975d9b5e user: bellini)

Hey Ryan,

Thanks for the reply!

Yeah, I noticed that some casts or even some multiplications helped in SQL directly, but I can't even do some of those casts in Django itself. I ended up opening an issue there so they can maybe work around this.

The issue here: https://code.djangoproject.com/ticket/32861#ticket

17:09 Reply: Division by decimals wrongly returning a rounded int result (works fine when casting to real) (artifact: a0db2ca60a user: cuz)

Hi Thiago, there have been good and accurate replies to this post, showing what is correct and why.

However, on the off chance you were just interested in a solution, consider the following sql, where both of table t's columns are INTeger (not even Decimal or Numeric), and see what works and what does not when forcing the calculation types:


CREATE TABLE t(a INT, b INT);

INSERT INTO t VALUES
 (10,1)
,(10,2)
,(10,3)
,(10,4)
,(10,5)
,(10,6)
;


SELECT a / b FROM t;   -- Integer division only

  -- a / b
  -- -----
  --   10 
  --   5  
  --   3  
  --   2  
  --   2  
  --   1  


SELECT CAST(a / b AS REAL) FROM t;  -- Not useful

  -- CAST(a / b AS REAL)   
  -- ---------
  --    10.0  
  --    5.0   
  --    3.0   
  --    2.0   
  --    2.0   
  --    1.0   


SELECT (CAST(a AS REAL) / b) FROM t;  -- Casting one or both values

  -- (CAST(a AS REAL) / b)
  -- --------------------
  --                 10.0
  --                  5.0
  --     3.33333333333333
  --                  2.5
  --                  2.0
  --     1.66666666666667


SELECT ((a * 1.0) / b) FROM t;    -- Using another calculation to force float

  --      ((a * 1.0) / b)
  -- --------------------
  --                 10.0
  --                  5.0
  --     3.33333333333333
  --                  2.5
  --                  2.0
  --     1.66666666666667


SELECT (a / (b * 1.0)) FROM t;    -- etc.

  --      (a / (b * 1.0))
  -- --------------------
  --                 10.0
  --                  5.0
  --     3.33333333333333
  --                  2.5
  --                  2.0
  --     1.66666666666667


Perhaps an adjusted query is all you need, if other fixes are off the table / slow.

16:46 Reply: SQLITE_OS_WIN definition location (artifact: 45c09120da user: larrybr)

I agree that this ordering is a little odd. As it only affects debug builds and a release is imminent, don't expect to see this fixed soon. I believe the Makefile.msc (for Windows targets) sets this explicitly, so that has helped disguise this issue for a long time.

16:43 Reply: website: Wapp Application Error (artifact: e48a8e4fb4 user: kyle)

Looks good to me...

15:17 Reply: Division by decimals wrongly returning a rounded int result (works fine when casting to real) (artifact: 389ad35386 user: bellini)

Yeah, I get that this would be a breaking API change now...

Thank you both (drh and larrybr) for helping me figure out this.

I'll open an issue on Django regarding this, probably their Decimal field should be translating to numeric then.

15:11 Reply: Division by decimals wrongly returning a rounded int result (works fine when casting to real) (artifact: 76d6fc2424 user: larrybr)

Hi, Thiago.

I think the idea behind that set of rules is to map the numerous typenames used in various SQL dialects to something reasonable. Of course, as your post shows, opinions on what that should mean vary.

Due to backward compatibility considerations, the present mapping is highly unlikely to change. So today's problem is how to adapt your generated SQL to SQLite's dynamic typing scheme.

While you do that, you will probably want to consider the decimal extension. I expect you will need some SQL transformation specialized for SQLite, and that extension may provide a useful part of such a solution.
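
A rough sketch of what I have in mind (from memory; check ext/misc/decimal.c for the exact function names and behavior, as I may be misremembering them):

-- with the decimal extension loaded into the connection
SELECT decimal_add('0.1', '0.2');     -- exact text-based addition, no binary rounding
SELECT decimal_mul('39.90', '1.15');  -- exact text-based multiplication

As far as I recall it does not provide division, so it would be only part of a solution here.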

Good luck.

15:07 Reply: Division by decimals wrongly returning a rounded int result (works fine when casting to real) (artifact: 3763455998 user: drh)

You can perhaps argue that DECIMAL should translate into floating-point instead of numeric. But the time to have made that argument was 2004. That's now all water under the bridge. To convert DECIMAL to floating point now, 17 years later, would be a breaking API change that could potentially cause problems for countless legacy applications.

15:02 Edit reply: Division by decimals wrongly returning a rounded int result (works fine when casting to real) (artifact: 342f88cbac user: larrybr)

Your expectations need some adjustment to incorporate the fact that there is no "decimal" type in SQLite. I suggest a study of Affinity of expressions, at least. Better would be the whole page.

(Later addition in light of Richard's reply:)

When you write "decimal" (or "DECIMAL" or other casings thereof) as the column type, it resolves to NUMERIC per the 5th rule in section 2 of the doc we linked. There is no decimal type; I suspect Richard meant to say that values are stored into a NUMERIC column using an integer representation when they can be.
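
For a quick illustration of the integer storage (minimal sketch; the table name is made up):

CREATE TABLE demo(v DECIMAL);
INSERT INTO demo VALUES (20), (20.5);
SELECT v, typeof(v) FROM demo;
  -- 20|integer
  -- 20.5|real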

15:00 Reply: Division by decimals wrongly returning a rounded int result (works fine when casting to real) (artifact: beced93ad1 user: bellini)

Hi Larry.

Yes, I know that there's no decimal type on SQLite and that some types are actually aliases to others.

But looking at the link you gave me I'm surprised to learn that DECIMAL is stored as NUMERIC instead of REAL. Is there any motivation for that? I would expect decimal to be something like real/float/double/etc.

Note that the only reason my column is decimal and not numeric is that it was generated by Django. In my django app I use sqlite for dev and postgresql for production, and I declare monetary fields as DecimalField, which translates to decimal here in sqlite. AFAIK there's no RealField, and using FloatField (which would translate to REAL) is not an option because I would lose precision on postgresql.

14:51 Reply: Division by decimals wrongly returning a rounded int result (works fine when casting to real) (artifact: 8a1ecfb31e user: drh)

And as you can see on my schema, they are indeed decimals and not integers.

See https://www.sqlite.org/datatype3.html#affinity_name_examples. SQLite stores integers in a DECIMAL column when it can. To force floating-point values, you want to use REAL.

Also be mindful of the limitations of floating point numbers. Depending on your requirements, you might want to store values as an integer number of cents, then convert to dollars (or whatever currency you are using) in the application code.
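
A minimal sketch of the integer-cents idea (table and column names are just for illustration):

CREATE TABLE sale(amount_cents INTEGER NOT NULL);
INSERT INTO sale VALUES (1999), (2000);
SELECT amount_cents / 100.0 AS amount_dollars FROM sale;
  -- 19.99
  -- 20.0

Or keep the database strictly in integers and do the /100 conversion in the application code.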

14:50 Reply: Division by decimals wrongly returning a rounded int result (works fine when casting to real) (artifact: a809acc180 user: larrybr)

Your expectations need some adjustment to incorporate the fact that there is no "decimal" type in SQLite. I suggest a study of Affinity of expressions, at least. Better would be the whole page.

14:38 Reply: group_concat with distinct and custom seperator (artifact: b40eb1ac50 user: elgiad007)

Really not sure how efficient this is, but you could use the REPLACE function:

select replace(group_concat(DISTINCT column1), ',', ' ') from example_table
14:37 Post: Division by decimals wrongly returning a rounded int result (works fine when casting to real) (artifact: 60050b3f5e user: bellini)

Hey guys,

I just ran into an issue where a division of two decimals rounds some numbers if they don't have a fractional part in the stored value. Let me demonstrate the issue:

Here is the schema for my table:

sqlite> .schema product_product
CREATE TABLE IF NOT EXISTS "product_product" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "created_at" datetime NOT NULL, "updated_at" datetime NOT NULL, "name" varchar(255) NOT NULL, "price" decimal NOT NULL, "quantity" decimal NOT NULL, "depth" decimal NULL, "height" decimal NULL, "weight" decimal NULL, "width" decimal NULL, "freight_help" decimal NOT NULL, "ref" char(32) NOT NULL UNIQUE, "erp_id" varchar(255) NULL UNIQUE, "erp_synced_at" datetime NULL, "unit" varchar(2) NOT NULL, "is_active" bool NOT NULL, "kind" varchar(9) NOT NULL, "original_price" decimal NULL, "coffee_kind" varchar(7) NULL, "producer_id" integer NULL REFERENCES "product_producer" ("id") DEFERRABLE INITIALLY DEFERRED, "region_id" integer NULL REFERENCES "product_region" ("id") DEFERRABLE INITIALLY DEFERRED, "variety_id" integer NULL REFERENCES "product_variety" ("id") DEFERRABLE INITIALLY DEFERRED, "altitude" integer NULL, "characteristics" varchar(255) NULL, "score" integer NULL, "olfactory" varchar(255) NULL, "taste" varchar(255) NULL, "visual" varchar(255) NULL, "is_fresh" bool NOT NULL, "is_featured" bool NOT NULL, "description" text NOT NULL, "sca_report" varchar(2000) NULL, "stock" integer NULL, "slug" varchar(50) NOT NULL);
CREATE INDEX "product_product_is_active_79c22e04" ON "product_product" ("is_active");
CREATE INDEX "product_product_producer_id_48defcb2" ON "product_product" ("producer_id");
CREATE INDEX "product_product_region_id_8af442d6" ON "product_product" ("region_id");
CREATE INDEX "product_product_variety_id_899ca5d0" ON "product_product" ("variety_id");
CREATE INDEX "product_product_is_fresh_8e90cb7e" ON "product_product" ("is_fresh");
CREATE INDEX "product_product_is_featured_27b90897" ON "product_product" ("is_featured");
CREATE INDEX "product_product_slug_76cde0ae" ON "product_product" ("slug");

I want to select the price, original price and price / original_price (both columns of decimal type). Here is what I get:

sqlite> SELECT price, original_price, price / coalesce(original_price, price) FROM product_product;
18.9||1.0
24.9||1.0
24.9||1.0
39.9|50|0.798
20||1
29.9||1.0
34.7||1.0
20||1
20|30|0
15|34|0

Notice that even though 39.9 / 50 gave me 0.798, 20 / 30 and 15 / 34 gave me 0. And as you can see on my schema, they are indeed decimals and not integers.

I did a test by forcing price to be cast to numeric (I tried decimal too):

sqlite> SELECT price, original_price, cast(price as numeric) / coalesce(original_price, price) FROM product_product;
18.9||1.0
24.9||1.0
24.9||1.0
39.9|50|0.798
20||1
29.9||1.0
34.7||1.0
20||1
20|30|0
15|34|0

Same result. The only way I got the real result was when I cast price to real:

sqlite> SELECT price, original_price, cast(price as real) / coalesce(original_price, price) FROM product_product;
18.9||1.0
24.9||1.0
24.9||1.0
39.9|50|0.798
20||1.0
29.9||1.0
34.7||1.0
20||1.0
20|30|0.666666666666667
15|34|0.441176470588235

AFAIK, even the result without a cast should work and give the right result, right?

Btw, I'm using sqlite3 from the current debian unstable build:

❯ sqlite3 --version
3.34.1 2021-01-20 14:10:07 10e20c0b43500cfb9bbc0eaa061c57514f715d87238f4d835880cd846b9ealt1
13:50 Reply: website: Wapp Application Error (artifact: 3fa7fe2bd8 user: drh)

I think the problem is now fixed in Wapp. Please try again.

08:41 Reply: Unhandled exception on server application (artifact: a5c9e94bb9 user: kmedcalf)

That would seem to indicate that each time on the merry-go-round you are consuming some resource and forgetting to release it.

08:35 Reply: Unhandled exception on server application (artifact: 52a0154d68 user: kmedcalf)

So you are saying that the particular line you are attributing this to (the sqlite3_exec) "works for a while" and then it does not?

07:58 Reply: Unhandled exception on server application (artifact: 8f0eb6ba08 user: steevis)
There is no fixed interval between the server starting and the crash.
I run the server and client applications at the same time. The server works fine for a while, and after some time the exception appears.
07:28 Post: www3.sqlite.org out of sync (artifact: 69dc4a288b user: mingodad)
It seems that www3.sqlite.org is out of sync again; see https://www3.sqlite.org/src/timeline?n=100&y=ci
06:50 Post: SQLITE_OS_WIN definition location (artifact: 6e2aec2755 user: markstz)

In the amalgamation file sqlite3.c, when compiled on Windows, SQLITE_OS_WIN is defined on line 16426 (in 3.35.0). But it's referenced earlier on line 13896 and again on 13912.

#if defined(SQLITE_FORCE_OS_TRACE) || defined(SQLITE_TEST) || \
    (defined(SQLITE_DEBUG) && SQLITE_OS_WIN)
  extern int sqlite3OSTrace;
# define OSTRACE(X)          if( sqlite3OSTrace ) sqlite3DebugPrintf X
# define SQLITE_HAVE_OS_TRACE
#else
# define OSTRACE(X)
# undef  SQLITE_HAVE_OS_TRACE
#endif

On Windows, this should enable OSTRACE if SQLITE_DEBUG is defined as a compiler option, but it only works if SQLITE_OS_WIN is explicitly defined too.

I thought SQLITE_OS_WIN is not supposed to be defined as a compiler option, as sqlite3.c does it on Windows if SQLITE_OS_OTHER is not defined.

Is there an error here?

06:40 Reply: Unhandled exception on server application (artifact: cf3a41fba5 user: kmedcalf)

What do you mean "a few hours" afterwards?

06:22 Reply: How to ReadUncommitted for c# .net framework 4.5 (artifact: 8a3150c082 user: gunter_hick)
That is NOT what I suggested, and it obviously does not change what is going on.

To be more explicit: Be sure to COMMIT the CREATE TABLE statement on the first connection BEFORE attempting to open the second connection.
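
In SQL terms, the sequence on the first connection would be (sketch only, using the statements from your test):

BEGIN;
CREATE TABLE Message(Text TEXT);
COMMIT;  -- the table is now committed and visible to other connections
-- only now open the second connection and SELECT from Message
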
05:53 Reply: can we use a for/while loop inside sqlite triggers ? (artifact: fb04c3b2fd user: kmedcalf)

Yes, exactly.

Note that the select ts, sum(size_t) over (order by ts desc) from mqtt6 will output the ts and the sum obtained so far, in descending order by ts (and ascending sum). This is a subquery with a limit 1 and a condition on the sum(size_t), so the summing runs until it hits the first row (going backwards in time) where the size_t sum exceeds 1024000, and it returns the ts for that row so the delete can remove all the rows with ts less than or equal to that ts.

The index on (ts, size_t) serves two purposes: (1) it can be used by the sum ... over process so that only the index is scanned, without needing to access or process the underlying table; (2) it finds all the rows to be deleted without scanning the underlying table (thus rows can be inserted out of order if one wants, and it will make no difference to the running time).

You can add an order by to the sum ... over query for greater certainty, but at the moment it is not required (I suppose this is an implementation detail), that is:

create trigger less_den_ten_mb_delete after insert on mqtt6
begin
  delete from mqtt6
        where ts <= (
                     select ts 
                       from (
                                select ts, 
                                       sum(size_t) over (order by ts desc) as size
                                  from mqtt6
                              order by ts desc
                             ) 
                      where size >= 1024000 
                      limit 1
                    );
end;
05:05 Reply: Unhandled exception on server application (artifact: 19540f02aa user: steevis)

Virtualization is KVM.

04:07 Reply: Unhandled exception on server application (artifact: a646717063 user: steevis)

VDS is virtual dedicated server.

03:47 Reply: can we use a for/while loop inside sqlite triggers ? (artifact: f4185905dc user: kroggen)

Beautiful solution!

Maybe we can remove the WHEN clause from the trigger to make it do a single scan of the table instead of 2.

I suppose that the size_t column is in the index so that the engine uses only the index b-tree for the SUM OVER sub-query, not needing to read the table b-tree. Only the DELETE would touch the original table.

03:26 Reply: Unhandled exception on server application (artifact: 00d78885f9 user: kmedcalf)

What is a VDS? Venereal Disease Symptoms? Depending on the particular symptoms, penicillin may be curative.

You have provided insufficient information for meaningful diagnosis.

Note that many "virtualization emulators" do not actually provide the functional facilities required for SQLite3 to function correctly. Perhaps your VDS is defective as designed ...

03:21 Post: website: Wapp Application Error (artifact: 8fb639b5ad user: kyle)

I accidentally typed a trailing \ in the search bar on sqlite.org, and it resulted in a Wapp Application Error:

https://www.sqlite.org/search?s=d&q=x%5C

03:07 Reply: Unhandled exception on server application (artifact: 24f969bd25 user: steevis)
I've installed MinGW on my remote VDS server and attached a debugger to the server process. The server crashed and GDB showed no error.
The most interesting part is that the unhandled exception was generated by a Visual C++ runtime library, but the server was built with MinGW.
Maybe the error occurs in a Windows library like msvcrt.dll.
There was no error when I used a simple text file as the users db instead of SQLite.
I think somehow SQLite conflicts with the Windows Server 2012 libraries on my VDS server.
Maybe it's all about the virtual machine on the VDS server?
02:58 Reply: can we use a for/while loop inside sqlite triggers ? (artifact: 92767311f0 user: kmedcalf)
SQLite version 3.36.0 2021-06-15 11:18:07
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite>
sqlite> create table mqtt6
   ...> (
   ...>     ts integer not null,
   ...>     size_t integer not null
   ...> );
sqlite> create index mqtt6_ts_size_t on mqtt6 (ts, size_t);
sqlite> insert into mqtt6 select value, RandomV(1024) + 1 from generate_series where start == 1 and stop == 1000000;
sqlite> create trigger less_den_ten_mb_delete after insert on mqtt6 when (select sum(size_t) from mqtt6) > 1024000
   ...> begin
   ...>   delete from mqtt6
   ...>         where ts <= (
   ...>                        select ts
   ...>                          from (
   ...>                                select ts,
   ...>                                       sum(size_t) over (order by ts desc) as size
   ...>                                  from mqtt6
   ...>                               )
   ...>                         where size >= 1024000
   ...>                         limit 1
   ...>                     );
   ...> end;
sqlite> .timer on
sqlite> .eqp on
sqlite>
sqlite> select count(*), sum(size_t), min(ts), max(ts) from mqtt6;
QUERY PLAN
`--SCAN mqtt6 (~1048576 rows)
┌──────────┬─────────────┬─────────┬─────────┐
│ count(*) │ sum(size_t) │ min(ts) │ max(ts) │
├──────────┼─────────────┼─────────┼─────────┤
│ 1000000  │ 512366354   │ 1       │ 1000000 │
└──────────┴─────────────┴─────────┴─────────┘
Run Time: real 0.147 user 0.140625 sys 0.000000
sqlite> insert into mqtt6 values (1000001, RandomV(1024)+1);
Run Time: real 0.504 user 0.500000 sys 0.015625
sqlite> select count(*), sum(size_t), min(ts), max(ts) from mqtt6;
QUERY PLAN
`--SCAN mqtt6 (~1048576 rows)
┌──────────┬─────────────┬─────────┬─────────┐
│ count(*) │ sum(size_t) │ min(ts) │ max(ts) │
├──────────┼─────────────┼─────────┼─────────┤
│ 2016     │ 1023824     │ 997986  │ 1000001 │
└──────────┴─────────────┴─────────┴─────────┘
Run Time: real 0.004 user 0.000000 sys 0.000000
sqlite> insert into mqtt6 values (1000002, RandomV(1024)+1);
Run Time: real 0.002 user 0.000000 sys 0.000000
sqlite> select count(*), sum(size_t), min(ts), max(ts) from mqtt6;
QUERY PLAN
`--SCAN mqtt6 (~1048576 rows)
┌──────────┬─────────────┬─────────┬─────────┐
│ count(*) │ sum(size_t) │ min(ts) │ max(ts) │
├──────────┼─────────────┼─────────┼─────────┤
│ 2016     │ 1023316     │ 997987  │ 1000002 │
└──────────┴─────────────┴─────────┴─────────┘
Run Time: real 0.006 user 0.000000 sys 0.000000
sqlite> insert into mqtt6 values (1000003, RandomV(1024)+1);
Run Time: real 0.002 user 0.000000 sys 0.000000
sqlite> select count(*), sum(size_t), min(ts), max(ts) from mqtt6;
QUERY PLAN
`--SCAN mqtt6 (~1048576 rows)
┌──────────┬─────────────┬─────────┬─────────┐
│ count(*) │ sum(size_t) │ min(ts) │ max(ts) │
├──────────┼─────────────┼─────────┼─────────┤
│ 2015     │ 1023479     │ 997989  │ 1000003 │
└──────────┴─────────────┴─────────┴─────────┘
Run Time: real 0.003 user 0.000000 sys 0.000000
sqlite> insert into mqtt6 values (1000004, RandomV(1024)+1);
Run Time: real 0.000 user 0.000000 sys 0.000000
sqlite> select count(*), sum(size_t), min(ts), max(ts) from mqtt6;
QUERY PLAN
`--SCAN mqtt6 (~1048576 rows)
┌──────────┬─────────────┬─────────┬─────────┐
│ count(*) │ sum(size_t) │ min(ts) │ max(ts) │
├──────────┼─────────────┼─────────┼─────────┤
│ 2016     │ 1023841     │ 997989  │ 1000004 │
└──────────┴─────────────┴─────────┴─────────┘
Run Time: real 0.003 user 0.000000 sys 0.000000
sqlite> insert into mqtt6 values (1000005, RandomV(1024)+1);
Run Time: real 0.000 user 0.000000 sys 0.000000
sqlite> select count(*), sum(size_t), min(ts), max(ts) from mqtt6;
QUERY PLAN
`--SCAN mqtt6 (~1048576 rows)
┌──────────┬─────────────┬─────────┬─────────┐
│ count(*) │ sum(size_t) │ min(ts) │ max(ts) │
├──────────┼─────────────┼─────────┼─────────┤
│ 2017     │ 1023939     │ 997989  │ 1000005 │
└──────────┴─────────────┴─────────┴─────────┘
Run Time: real 0.006 user 0.000000 sys 0.000000
sqlite> insert into mqtt6 values (1000006, RandomV(1024)+1);
Run Time: real 0.002 user 0.015625 sys 0.000000
sqlite> select count(*), sum(size_t), min(ts), max(ts) from mqtt6;
QUERY PLAN
`--SCAN mqtt6 (~1048576 rows)
┌──────────┬─────────────┬─────────┬─────────┐
│ count(*) │ sum(size_t) │ min(ts) │ max(ts) │
├──────────┼─────────────┼─────────┼─────────┤
│ 2017     │ 1023846     │ 997990  │ 1000006 │
└──────────┴─────────────┴─────────┴─────────┘
Run Time: real 0.003 user 0.000000 sys 0.000000
sqlite> insert into mqtt6 values (1000007, RandomV(1024)+1);
Run Time: real 0.002 user 0.000000 sys 0.000000
sqlite> select count(*), sum(size_t), min(ts), max(ts) from mqtt6;
QUERY PLAN
`--SCAN mqtt6 (~1048576 rows)
┌──────────┬─────────────┬─────────┬─────────┐
│ count(*) │ sum(size_t) │ min(ts) │ max(ts) │
├──────────┼─────────────┼─────────┼─────────┤
│ 2017     │ 1023853     │ 997991  │ 1000007 │
└──────────┴─────────────┴─────────┴─────────┘
Run Time: real 0.003 user 0.000000 sys 0.000000
sqlite> insert into mqtt6 values (1000008, RandomV(1024)+1);
Run Time: real 0.001 user 0.015625 sys 0.000000
sqlite> select count(*), sum(size_t), min(ts), max(ts) from mqtt6;
QUERY PLAN
`--SCAN mqtt6 (~1048576 rows)
┌──────────┬─────────────┬─────────┬─────────┐
│ count(*) │ sum(size_t) │ min(ts) │ max(ts) │
├──────────┼─────────────┼─────────┼─────────┤
│ 2016     │ 1023964     │ 997993  │ 1000008 │
└──────────┴─────────────┴─────────┴─────────┘
Run Time: real 0.006 user 0.000000 sys 0.000000
sqlite> insert into mqtt6 values (1000009, RandomV(1024)+1);
Run Time: real 0.002 user 0.015625 sys 0.000000
sqlite> select count(*), sum(size_t), min(ts), max(ts) from mqtt6;
QUERY PLAN
`--SCAN mqtt6 (~1048576 rows)
┌──────────┬─────────────┬─────────┬─────────┐
│ count(*) │ sum(size_t) │ min(ts) │ max(ts) │
├──────────┼─────────────┼─────────┼─────────┤
│ 2016     │ 1023726     │ 997994  │ 1000009 │
└──────────┴─────────────┴─────────┴─────────┘
Run Time: real 0.003 user 0.000000 sys 0.000000
sqlite>
02:28 Edit reply: can we use a for/while loop inside sqlite triggers ? (artifact: 77bf509367 user: kmedcalf)

How about something like:

create trigger less_den_ten_mb_delete after insert on mqtt6 when (select sum(size_t) from mqtt6) > 1024000
begin
  delete from mqtt6
        where ts <= (
                       select ts 
                         from (
                               select ts, 
                                      sum(size_t) over (order by ts desc) as size
                                 from mqtt6
                              ) 
                        where size >= 1024000 
                        limit 1
                    );
end;

and get rid of the delete trigger. You can probably do away with the when clause, since if size >= 1024000 is never met the delete will do nothing, and assuming that you have a covering index on (ts, size_t) it will be very fast.
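
The covering index would be something like (sketch, matching the trigger's table):

create index mqtt6_ts_size_t on mqtt6 (ts, size_t);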

02:22 Reply: can we use a for/while loop inside sqlite triggers ? (artifact: 98bdfc922b user: kmedcalf)

How about something like:

create trigger less_den_ten_mb_delete after insert on mqtt6 when (select sum(size_t) from mqtt6) > 1024000
begin
  delete from mqtt6
        where ts <= (
                       select ts 
                         from (
                               select ts, 
                                      sum(size_t) over (order by ts desc) as size
                                 from mqtt6
                              ) 
                        where size >= 1024000 
                     order by size 
                        limit 1
                    );
end;

and get rid of the delete trigger.

2021-06-17
21:17 Edit reply: can we use a for/while loop inside sqlite triggers ? (artifact: fa49f1dc35 user: kroggen)

Hi Simon,

I was thinking that this could be implemented, but I don't know how.

I imagine creating a dynamic column containing the sum of the "size_t" starting from the older row (reverse order)

But how to create this dynamic column that aggregates data from other rows? Using a virtual table?

I am really curious about this

-- EDIT --

OK, it appears that using window functions (SUM OVER) could help, as described here:

https://stackoverflow.com/a/58339386/4626775
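
For example, a running total in the style of that answer might look like this (untested sketch against the mqtt6 table from this thread):

SELECT ts, size_t,
       SUM(size_t) OVER (ORDER BY ts DESC) AS running_size
  FROM mqtt6;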

21:01 Reply: "Hello, world" CGI with althttpd (artifact: 664897d353 user: anonymous)

Dear All,

First: thanks for sqlite and althttpd. I'm in a similar boat: I wanted to run #!/bin/bash => echo "Hello $RANDOM World" as proof of life, and managed to get things configured via xinetd and serving static content.

Firstly, I had to learn a bit more about xinetd config and its relationship with /etc/services (and port numbers, etc). A helpful command for that was /usr/sbin/xinetd -d -dontfork, along with systemctl start/stop xinetd. The tutorial/documentation for althttpd does not mention /etc/services, and xinetd complains ERROR: http is supposed to be on port 80 instead of 8088 (upon further reading: this is likely 100% on me, as :8080 would exist in /etc/services).

Secondly, when althttpd was running and failing to execute my *.cgi, httpd.log was not getting populated with any error messages, and from what I could tell there were no useful output/error messages coming to the browser, e.g.:

Without --jail

rames@red-slash:~/Public$ md5sum http.log 
1440a68f9dca1ec0eb2441ffa256f126  http.log
rames@red-slash:~/Public$ curl --verbose http://localhost:8088/bar
*   Trying 127.0.0.1:8088...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8088 (#0)
> GET /bar HTTP/1.1
> Host: localhost:8088
> User-Agent: curl/7.68.0
> Accept: */*
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Connection: keep-alive
< Date: Thu, 17 Jun 2021 20:37:39 GMT
< Content-length: 0
< 
* Connection #0 to host localhost left intact
rames@red-slash:~/Public$ md5sum http.log 
1440a68f9dca1ec0eb2441ffa256f126  http.log

With --jail 0

rames@red-slash:~/Public$ md5sum http.log 
1440a68f9dca1ec0eb2441ffa256f126  http.log
rames@red-slash:~/Public$ curl --verbose http://localhost:8088/bar
*   Trying 127.0.0.1:8088...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8088 (#0)
> GET /bar HTTP/1.1
> Host: localhost:8088
> User-Agent: curl/7.68.0
> Accept: */*
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Connection: keep-alive
< Date: Thu, 17 Jun 2021 20:38:42 GMT
< Content-type: text/html
< Content-length: 151
< 
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>Hello World</title>
</head>
<body>
Hello 2541
</body>
</html>
* Connection #0 to host localhost left intact
rames@red-slash:~/Public$ md5sum http.log 
8135df716f6026da36971d1febcf0ffe  http.log

...you can see that in one case, a request was made, but there was no output and no entry in http.log, whereas in the "working" case, an entry was added to http.log.

If I hadn't stumbled upon this discussion (via google query: "althttpd" not executing bash), I really don't think I would have been able to figure out where my problem was.

althttpd --help is not recognized, and searching through althttpd.md for cgi, exec, and jail gives no clear hint that this subtle interaction exists, namely that "you can only use static binaries and scripts of any kind are practically impossible". Anything with a shebang is likely to refer to something outside of the www-root, and on most linux systems any typical shebang target will likely be dynamically (not statically) compiled.

I'll keep forging ahead with it; I'm hopeful that this will help me cobble something together for my home network/scripts (eg: ~/bin/turn-on-the-lights.sh) without a bunch of overhead managing "stuff". The simplicity of althttpd is appealing if I can get it working right.

Would doc-centric patches be welcome? eg: update Security Features => --jail && static binaries, what would a Troubleshooting section look like?

20:58 Reply: can we use a for/while loop inside sqlite triggers ? (artifact: f494347f80 user: kroggen)

Hi Simon,

I was thinking that this could be implemented, but I don't know how.

I imagine creating a dynamic column containing the sum of the "size_t" starting from the older row (reverse order)

But how to create this dynamic column that aggregates data from other rows? Using a virtual table?

I am really curious about this

20:30 Reply: Using WAL mode with multiple processes (artifact: c46e24c7c7 user: anonymous)

Hi

I tried the test where the first connection creates the table (in a transaction) and the second connection reads. The test failed, showing that the second connection could not read because the transaction was not complete.

I tested with the following connection string: var connectionString = $"Data Source =test.db;Version=3; Cache = shared; read_uncommitted = true; journal_mode=WAL";

https://sqlite.org/forum/forumpost/99ffc78cee

Regards

Anand

19:02 Reply: Unhandled exception on server application (artifact: 83a8cf32f2 user: wyoung)

compiling the SQLite code with that compiler option might help resolve it.

Are you certain that works through a C function pointer? I can see how GCC might unwind a regular C function call while processing a thrown C++ exception, but as loosey-goosey as C treats pointers, I wouldn't bet on it happening in this case.

Is writing portable code even a feasible requirement for most software projects?

I understand your point, but once you go wandering down into UB land, you can't expect a SAR party organized on the SQLite forum to do much more than call out, "Please return to the well-lit path!"

Anyway, wxWidgets on a 9-year-old OS that's been out of mainstream support for 3 years now? And using the obsolete VDS API atop that, without any explanation of what that might have to do with the reported problem? The OP shows many signs of doing questionable things, so it's fair to question whether he's taken care of UB-ish things.

18:18 Reply: Unhandled exception on server application (artifact: 96d771edd3 user: anonymous)

While I am not familiar with the OP's compiler options, I presume something equivalent to -fexceptions might be supported. If it's an exception propagation issue through C code, compiling the SQLite code with that compiler option might help resolve it.

I did not see OP mention about writing portable code. Is writing portable code even a feasible requirement for most software projects?

16:43 Reply: Disable File IO (artifact: 479473cb93 user: anonymous)

Thanks for the reply. I don't really mind fixing it.

16:38 Reply: Unhandled exception on server application (artifact: 46aca22249 user: stephan)

gcc 6.3 manual says -fexceptions allows "C code to interoperate with exception handlers written in C++".

That's a compiler-specific extension. Portable code never relies on such things.

16:20 Reply: Disable File IO (artifact: 5ed543308a user: drh)

The -DSQLITE_OMIT_DISKIO option has not been maintained and no longer appears to be working. However, you can probably fix it up and get it to work again.

If you don't want to do that yourself and you have budget, you can perhaps engage the SQLite developers to do necessary fixes for you.

15:55 Reply: Unhandled exception on server application (artifact: bc8c5c40e6 user: anonymous)

ever allow a C++ exception to propagate

gcc 6.3 manual says -fexceptions allows "C code to interoperate with exception handlers written in C++". I suspect it allows an exception from C++ to pass through C code, and be caught in C++ code. Or am I misinterpreting ‘interoperate’ here?

15:43 Post: Disable File IO (artifact: 0995a39c1f user: anonymous)

Hi, Is there a way to disable file IO completely and only use the in-memory capabilities when building for a platform that does not support it?

14:49 Reply: How to ReadUncommitted for c# .net framework 4.5 (artifact: 5827d75bc6 user: anonymous)
Adding the table is not the issue. The insert doesn't have an issue. The case is failing on the read:
  var messageActual = (string)queryCommand.ExecuteScalar();

Here is the updated test case:

        public void Test_Read_Uncommited()
        {
            var connectionString = $"Data Source ={_dbFile}; Cache = Shared; read_uncommitted = true;";
            using (var firstConnection = new SQLiteConnection(connectionString))
            {
                try
                {
                    firstConnection.Open();                            
                    using (var firstTransaction = firstConnection.BeginTransaction())
                    {
                        string message = "Hello World!";
                        using (var createTable = firstConnection.CreateCommand())
                        {
                            createTable.CommandText = "CREATE TABLE Message(Text TEXT) ";
                            createTable.ExecuteNonQuery();
                        }
                        using (var insertRow = firstConnection.CreateCommand())
                        {
                            insertRow.CommandText = $"INSERT INTO Message ( Text ) VALUES ( '{message}' ); ";
                            insertRow.ExecuteNonQuery();
                        }
                        using (var secondConnection = new SQLiteConnection(connectionString))
                        {
                            try
                            {
                                secondConnection.Open();
                                //using (var secondTransaction = secondConnection.BeginTransaction(System.Data.IsolationLevel.ReadUncommitted))
                                {
                                    var queryCommand = secondConnection.CreateCommand();
                                    queryCommand.CommandText = "SELECT Text FROM Message;";
                                    var messageActual = (string)queryCommand.ExecuteScalar();
                                    Assert.IsTrue(string.CompareOrdinal(message, messageActual) == 0);
                                }
                            }
                            finally
                            {
                                secondConnection.Close();
                            }
                            firstTransaction.Rollback();
                        }
                    }    
                }
                finally
                {
                    firstConnection.Close();
                }         
            }
        }
14:29 Reply: How to ReadUncommitted for c# .net framework 4.5 (artifact: fe2090433d user: gunter_hick)
Try creating the table in a separate transaction. I have no idea what the wrapper you are using may be doing transaction-wise if you have multiple statements in one command.
14:03 Reply: Unhandled exception on server application (artifact: bb005e039a user: anonymous)

It might be a good idea to attach a debugger to the server process and let it crash. If you've got crash dumps, load those in a debugger. Make sure to build your program with debugging symbols. Once you do that, you'll know where exactly the crash happens and might even get an idea on the reasons for the crash. It could be a heap corruption. DrMemory sometimes helps in debugging those on Windows.
