Check-in [b442525b]
Overview
Comment: Change the cost estimator in the query planner to take into account the logN rowid lookup cost when going from an index to a table.
SHA1: b442525b0ba642bb8d57b87b7b9e373b6046454a
User & Date: drh 2011-01-24 15:11:23
Context
2011-01-24 17:46
Restructuring and generalizing analyze5.test. The whole script is currently disabled and will need to be reenabled prior to merging with trunk. check-in: 31fcc706 user: drh tags: stat2-enhancement

2011-01-24 15:11
Change the cost estimator in the query planner to take into account the logN rowid lookup cost when going from an index to a table. check-in: b442525b user: drh tags: stat2-enhancement

2011-01-22 00:10
Add the ability to use indices for constraints of the form "x IS NOT NULL" when sqlite_stat2 is available and most entries for column x are NULL. check-in: 5d5bddd2 user: drh tags: stat2-enhancement
Changes

Changes to src/where.c.

Before (old lines 2915-2953):

        whereEqualScanEst(pParse, pProbe, pFirstTerm->pExpr->pRight, &nRow);
      }else if( pFirstTerm->eOperator==WO_IN && bInEst==0 ){
        whereInScanEst(pParse, pProbe, pFirstTerm->pExpr->x.pList, &nRow);
      }
    }
#endif /* SQLITE_ENABLE_STAT2 */

    /* Assume constant cost to access a row and logarithmic cost to
    ** do a binary search.  Hence, the initial cost is the number of output
    ** rows plus log2(table-size) times the number of binary searches.
    */
    cost = nRow + nInMul*estLog(aiRowEst[0]);

    /* Adjust the number of rows and the cost downward to reflect rows
    ** that are excluded by range constraints.
    */
    nRow = (nRow * (double)estBound) / (double)100;
    cost = (cost * (double)estBound) / (double)100;

    /* Add in the estimated cost of sorting the result
    */
    if( bSort ){
      cost += cost*estLog(cost);
    }

    /* If all information can be taken directly from the index, we avoid
    ** doing table lookups.  This reduces the cost by half.  (Not really -
    ** this needs to be fixed.)
    */
    if( pIdx && bLookup==0 ){
      cost /= (double)2;
    }

    /**** Cost of using this index has now been computed ****/

    /* If there are additional constraints on this table that cannot
    ** be used with the current index, but which might lower the number
    ** of output rows, adjust the nRow value accordingly.  This only
    ** matters if the current index is the least costly, so do not bother
    ** with this step if we already know this index will not be chosen.

After (new lines 2915-2951):

        whereEqualScanEst(pParse, pProbe, pFirstTerm->pExpr->pRight, &nRow);
      }else if( pFirstTerm->eOperator==WO_IN && bInEst==0 ){
        whereInScanEst(pParse, pProbe, pFirstTerm->pExpr->x.pList, &nRow);
      }
    }
#endif /* SQLITE_ENABLE_STAT2 */

    /* Adjust the number of rows and the cost downward to reflect rows
    ** that are excluded by range constraints.
    */
    nRow = (nRow * (double)estBound) / (double)100;

    /* Assume constant cost to access a row and logarithmic cost to
    ** do a binary search.  Hence, the initial cost is the number of output
    ** rows plus log2(table-size) times the number of binary searches.
    */
    if( pIdx && bLookup ){
      cost = nRow + (nInMul+nRow)*estLog(aiRowEst[0]);
    }else{
      cost = nRow + nInMul*estLog(aiRowEst[0]);
    }

    /* Add in the estimated cost of sorting the result.  This cost is expanded
    ** by a fudge factor of 3.0 to account for the fact that a sorting step
    ** involves a write and is thus more expensive than a lookup step.
    */
    if( bSort ){
      cost += nRow*estLog(nRow)*(double)3;
    }

    /**** Cost of using this index has now been computed ****/

    /* If there are additional constraints on this table that cannot
    ** be used with the current index, but which might lower the number
    ** of output rows, adjust the nRow value accordingly.  This only
    ** matters if the current index is the least costly, so do not bother
    ** with this step if we already know this index will not be chosen.