SQLite Forum

Most performant way to test for having 0, 1 or more rows in a table
Hi. We have two mutually-exclusive *ON INSERT* triggers on a table,  
and a 3rd *ON DELETE* trigger.

The 1st has a `WHEN (select count(*) from t) = 1`,  
while the 2nd has a `WHEN (select count(*) from t) > 1`,  
and the 3rd a `WHEN (select count(*) from t) = 0`.
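For context, the setup looks roughly like this (a minimal sketch: the trigger names and bodies are placeholders, and I'm assuming AFTER triggers; only the `WHEN` clauses match the real ones):

```
create table t (k integer primary key not null, data blob);

-- Hypothetical trigger names and bodies; only the WHEN clauses
-- reflect the actual setup described above.
create trigger t_ai_first after insert on t
when (select count(*) from t) = 1
begin
  select 1;  -- placeholder body
end;

create trigger t_ai_more after insert on t
when (select count(*) from t) > 1
begin
  select 1;  -- placeholder body
end;

create trigger t_ad_empty after delete on t
when (select count(*) from t) = 0
begin
  select 1;  -- placeholder body
end;
```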

It looks simple and innocuous, but I'm worried about the performance.  
It seems to me this introduces quadratic behavior on inserts and deletes,  
because each `count(*)` FULL-SCANs `t`, so N inserts visit O(N²) rows in total. Is that correct?

I propose to use `(select count(*) from (select 1 from t limit 2))`  
instead of `(select count(*) from t)`, which I think avoids the FULL-SCANs by reading at most two rows.  
But comparing the query plans, I don't understand the 2nd plan, so I'm not 100%  
sure this is the better-performing approach, albeit a more verbose one.

```
sqlite> create table t (k integer primary key not null, data blob);
sqlite> select (select count(*) from (select 1 from t limit 2)) = 1;
QUERY PLAN
|--SCAN CONSTANT ROW
`--SCALAR SUBQUERY 2
   |--CO-ROUTINE 1
   |  `--SCAN TABLE t
   `--SCAN SUBQUERY 1
0
sqlite> select count(*) = 1 from t;
QUERY PLAN
`--SCAN TABLE t
0
```

In both plans, there's a `SCAN TABLE t`. Doesn't the `LIMIT 2` avoid the full scan, as I expect?  
It's not obvious to me from the query-plan output; I'm not well versed in reading those, I'm afraid...
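For reference, here's how the proposed expression would slot into the `WHEN` clauses (same placeholder trigger names and bodies as in the sketch above):

```
-- Same placeholder triggers, with the proposed LIMIT-2 subquery in each
-- WHEN clause so it needs to read at most two rows instead of all of t.
drop trigger if exists t_ai_first;
drop trigger if exists t_ai_more;
drop trigger if exists t_ad_empty;

create trigger t_ai_first after insert on t
when (select count(*) from (select 1 from t limit 2)) = 1
begin
  select 1;  -- placeholder body
end;

create trigger t_ai_more after insert on t
when (select count(*) from (select 1 from t limit 2)) > 1
begin
  select 1;  -- placeholder body
end;

create trigger t_ad_empty after delete on t
when (select count(*) from (select 1 from t limit 2)) = 0
begin
  select 1;  -- placeholder body
end;
```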

If someone more knowledgeable could confirm my suspicion about the performance issue,  
and confirm that my proposed change fixes it, I'd appreciate it. Thanks, --DD