SQLite Forum

How does SQLite compute estimated cost and rows for actual tables?

(1) By Bjoern Hoehrmann (bjoern) on 2023-03-03 02:00:24 [link] [source]

Let's say I have a virtual table that simply proxies an actual table. I would then want my xBestIndex implementation to provide estimatedCost and estimatedRows values that reasonably approximate what SQLite would use if it queried the actual table directly. What would be a suitable formula for these values, given the information SQLite makes available, assuming no especially unusual compile-time options?

The guidance for virtual table implementations, »The estimatedCost field should be set to the estimated number of disk access operations required to execute this query against the virtual table.«, is not quite sufficient, considering that virtual tables end up competing with actual tables when the query planner tries to figure out a suitable join order (and being off by even a small factor might result in consistently worse query plans for the proxy table).

I figured I should be able to use sqlite_stat1 values for row estimates, and might be able to use dbstat information for cost estimates, but playing around with .wheretrace seemingly invalidates most naive formulas I could come up with. For example, looking at sqlite_stat1 I would guess 2 estimated rows, but .wheretrace says 10. And when I equate »disk access operations« with »pages in the btree« (or its depth, or some such), the numbers also do not match in any obvious way. That is not unexpected; I would imagine the numbers are tweaked somewhat to avoid pathological cases and the like.

But what would a reasonable approximation formula look like, for the most common cases (say, table scan, search by rowid, covering index, non-covering, non-unique index)?
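
For concreteness, the rough shape of the xBestIndex I have in mind is sketched below. The numbers are placeholders rather than the formula I am asking about, and ProxyTab with its nRow and nPage fields is hypothetical; the proxy would have filled those in earlier from sqlite_stat1 and dbstat:

#include "sqlite3.h"

typedef struct ProxyTab ProxyTab;
struct ProxyTab {
  sqlite3_vtab base;       /* base class - must come first */
  sqlite3_int64 nRow;      /* cached row count, e.g. from sqlite_stat1 */
  sqlite3_int64 nPage;     /* cached page count, e.g. from dbstat */
};

/* Sketch only: report either a rowid lookup or a full scan, with
** placeholder cost and row numbers derived from the cached statistics. */
static int proxyBestIndex(sqlite3_vtab *pVTab, sqlite3_index_info *pInfo){
  ProxyTab *p = (ProxyTab*)pVTab;
  int i;
  for(i=0; i<pInfo->nConstraint; i++){
    const struct sqlite3_index_constraint *pC = &pInfo->aConstraint[i];
    if( pC->usable && pC->iColumn<0
     && pC->op==SQLITE_INDEX_CONSTRAINT_EQ ){
      /* Equality on rowid: roughly one btree descent, one row. */
      pInfo->aConstraintUsage[i].argvIndex = 1;
      pInfo->aConstraintUsage[i].omit = 1;
      pInfo->idxNum = 1;
      pInfo->estimatedCost = 10.0;   /* placeholder guess */
      pInfo->estimatedRows = 1;
      return SQLITE_OK;
    }
  }
  /* Full scan: cost from the page count, row estimate from sqlite_stat1. */
  pInfo->idxNum = 0;
  pInfo->estimatedCost = p->nPage>0 ? (double)p->nPage : 1000000.0;
  pInfo->estimatedRows = p->nRow>0 ? p->nRow : 1000000;
  return SQLITE_OK;
}

With something along these lines as a baseline, the question is really what to put in place of the placeholder numbers so that the proxy competes fairly with the actual table.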

(2) By ddevienne on 2023-03-03 08:15:30 in reply to 1 [link] [source]

Here's an old email from Richard on the ML which might help.
Although I suspect you are already aware of those. Just in case.
Hopefully the fact it's 5 years old does not make it obsolete...

PS: I hope Richard will answer your post. As I already wrote,
vtable costing is a black art in SQLite, and could use more doc.

PPS: I can't find those messages by using the forum search.
I suspect the forum does not go that far back, which is a pity.
You'd need to search the old ML archive(s) to find things like this.
My 10-year-old stash on GMail finds results the Forum search can't.

On Thu, Mar 22, 2018 at 3:44 PM Richard Hipp <drh@sqlite.org> wrote:
Max,

Since you appear to be writing your own virtual tables, you could
probably benefit from becoming more familiar with the internal
workings of SQLite, and especially the ".wheretrace" and
".selecttrace" commands of the CLI.  To enable those commands, build a
new copy of the CLI that includes -DSQLITE_ENABLE_SELECTTRACE and
-DSQLITE_ENABLE_WHERETRACE.

The command ".selecttrace 0xffff" will print out the parse tree of
your SQL statement at various points as it is transformed by the query
optimizer.  By viewing this parse tree, you might be better able to
understand the transformations that are taking place, which might give
additional insights into what is going astray for you.

The latest pre-release snapshot for 3.23.0 contains a couple of new
optimizations related to LEFT JOIN.  Please also try your code with
the pre-release snapshot to see if it helps or hurts or makes no
difference.  And please report back what you find, regardless of your
findings.

The ".wheretrace 0xfff" command prints out the steps used by the query
planner as it scores various execution strategies looking for the
fastest way to run your query.  It should clearly show the costs
returned by your xBestIndex implementation, how decisions are made
based on those costs, and help you to see how changing those costs
might result in better plans.

The outputs from .selecttrace and .wheretrace are undocumented.  They
are subject to change.  And they do change in incompatible ways from
time to time.  You'll need to look at the source code to fully
understand what the outputs mean.

Note that the cost numbers printed by .wheretrace are logarithmic.
See the description of the LogEst numbers at
https://www.sqlite.org/src/artifact?ln=755-778&name=7e9deb145c110289
for additional information. There is a simple command-line program at
https://www.sqlite.org/src/artifact/11346aa019e2e77a that you can
compile and use to convert values between LogEst and traditional
base-10 decimal numbers.
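
As a rough approximation (the linked program is the authoritative converter), a LogEst is about ten times the base-2 logarithm of the value it encodes, so a few lines of C are enough to convert in both directions:

#include <math.h>
#include <stdio.h>

/* Rough conversions: a LogEst is approximately 10*log2(value). */
static double logEstToValue(int logEst){ return pow(2.0, logEst/10.0); }
static int valueToLogEst(double value){ return (int)(10.0*log2(value) + 0.5); }

int main(void){
  printf("LogEst 10 ~ %.1f\n", logEstToValue(10));            /* about 2 */
  printf("LogEst 33 ~ %.1f\n", logEstToValue(33));            /* about 10 */
  printf("1000000 ~ LogEst %d\n", valueToLogEst(1000000.0));  /* about 199 */
  return 0;
}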

There is older debugging documentation at
https://www.sqlite.org/debugging.html that might give additional
hints.

As you work through this problem, please provide feedback so that we
can improve the documentation for the next person who faces similar
issues.

On 3/22/18, Max Vlasov <max.vlasov@gmail.com> wrote:
> Hi,
>
> I noticed an unexpected optimization at the sqlite side.
> Currently I can not reproduce this with some arbitrary test data (probably
> I will eventually). Anyway the logic behind this (pseudo-code query)
>
> Select .... , (Select count(*) from LookUpTable where
> LookUpTable.Value=TableValue) as StatCount from
> (
>   ... Select TableValue, ... left join  ... left join
>   where <InnerCondition>
> )
>    where StatCount  = ..
>
> The aggregate lookup (Select count()) is relatively expensive to perform
> and involves a virtual table on my side (LookUpTable). So the goal of
> <InnerCondition> is also to narrow the output of the data for this lookup.
> Most of the time (including my synthetic tests) the filter indeed works the
> expected way (Filtering with <InnerCondition> then performing the aggregate
> only for the rows that qualify), but for some queries where there are several
> joins sqlite starts to perform the lookup before applying <InnerCondition>
> so I get my expensive calculations used for all rows of the inner joined
> table and then filtering with <InnerCondition>. I double checked this since
> the LookUpTable is my virtual table so I can set a breakpoint and inspect
> the passed value. Ironically, when I remove the outer condition ( where
> StatCount ..  ) from the query in question, it starts to work much faster.
>
> I suspect this might be related to how I respond to the constraint cost
> requests from sqlite. For this virtual table the possible results might be
> 1 or a very big value. I see that the value 1 is indeed visited for this
> query and probably sqlite might assume some absolute minimum cost for this
> look-up. But when I change it to a bigger value (still lower than "a very
> big value" also used), the query plan will not change.
>
> Here is a summary of the EXPLAIN QUERY PLAN output (rea is my virtual table;
> there are 4 joins in this query, SQLite 3.21.0).
>
> The Query with Outer condition "where StatCount  = .."
> SCAN TABLE
> SEARCH TABLE (2 times)
> EXECUTE CORRELATED SCALAR SUBQUERY 1
> SCAN TABLE Rea VIRTUAL TABLE INDEX 2:
> SEARCH TABLE (2 times)
> EXECUTE CORRELATED SCALAR SUBQUERY 2
> SCAN TABLE Rea VIRTUAL TABLE INDEX 2:
>
> The same query when I just removed the outer "where StatCount  = .."
> SCAN TABLE...
> SEARCH TABLE (4 times)
> EXECUTE CORRELATED SCALAR SUBQUERY 1
> SCAN TABLE Rea VIRTUAL TABLE INDEX 2:
>
>
> Can I manually affect the plan for this query, perhaps by further tweaking
> the virtual table costs?
>
> Thanks

--
D. Richard Hipp
drh@sqlite.org

(3) By Stephan Beal (stephan) on 2023-03-03 08:31:38 in reply to 2 [source]

PPS: I can't find those messages by using the forum search. I suspect the forum does not go that far back

Correct. There was a hard cut from the ML to the forum when it went online.

which is a pity.

Patches/scripts for importing from mail archives would be gleefully received (and we could also use them in the fossil forum), but they would need to be able to strip out the ever-increasing spam which triggered the migration to the forum and, for bonus points, map users to their current accounts where applicable.