That is called "cardinality", which in math terms is the size of a set, and in RDBMS terms the number of distinct values a column contains. For example, a table with 1,000,000 rows but only 4 possible values in column A is said to have a cardinality of 4 for column A. By contrast, if column B were declared UNIQUE and indeed populated in every row, its cardinality would be 1,000,000.

Most SQL engines use exactly this cardinality figure to decide whether using an index is worthwhile. The problem is that they cannot magically "know" the cardinality of the data; at some point they have to actually look through every row in the table and, for each column (or at least each indexed column), compute the cardinality. In SQLite this is done with [ANALYZE table_name](https://sqlite.org/lang_analyze.html). Based on the information stored when ANALYZE runs (which, depending on the compile-time options, may be much more than just cardinality), the query planner is well equipped to make the kind of decision you suggest above, and is more likely to pick a good index.

If you did not ANALYZE the table but have your own ideas about how well the table's content matches the query's filters, you can hint this to the query planner using [likelihood()](https://sqlite.org/lang_corefunc.html#likelihood), which lets you say whether the result of a comparison is more likely, or less likely, to be TRUE.

Simon is correct, though: the reasoning around this, and the query planner's decision to use or not use a specific index, is non-trivial. Even after many years of hard work it can still easily guess wrong, so for anything more than a simple lookup it really pays to provide hints or to ANALYZE your tables.
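A minimal sketch of the above using Python's built-in `sqlite3` module (the table, column, and index names here are hypothetical, chosen to mirror the example: one low-cardinality column and one unique column):

```python
import sqlite3

# Hypothetical table: column a has only 4 distinct values (low cardinality),
# column b is unique (its cardinality equals the row count).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (a INTEGER, b INTEGER)")
con.execute("CREATE INDEX idx_a ON t(a)")
con.execute("CREATE UNIQUE INDEX idx_b ON t(b)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                ((i % 4, i) for i in range(10_000)))

# Scan the table and store per-index statistics for the query planner.
con.execute("ANALYZE")

# The statistics land in the sqlite_stat1 table: the first number in each
# stat string is the row count; the following numbers are the average rows
# per distinct index-prefix value (lower = more selective index).
stats = dict(con.execute("SELECT idx, stat FROM sqlite_stat1 WHERE tbl = 't'"))
print(stats)

# likelihood(expr, p) returns expr unchanged at runtime, but tells the
# planner the probability (0.0 to 1.0) that expr is TRUE, which can steer
# its choice of index even without ANALYZE statistics.
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM t WHERE likelihood(a = 2, 0.25)"
).fetchall()
print(plan)
```

The exact contents of `sqlite_stat1` depend on how your SQLite was compiled (e.g. STAT4 builds store extra detail in a separate table), but the row count and per-index selectivity figures are what the planner leans on.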