Another complication in all this is the interaction between database rows and database pages. Although it's natural to think of "updating rows", the reality (as I understand it) is that only whole _pages_ are ever written. How this plays out depends heavily on row size versus page size, but as an example, suppose you have a schema where roughly 10 rows fit in one page. Even if adding a complicated `WHERE` clause prevented 90% of unnecessary "row updates", then – assuming the remaining 10% of "real" updates are reasonably evenly distributed – on average every page would still contain one updated row and would therefore need to be written: you'd save nothing over a "dumb" update statement.

I have a _feeling_ I've seen it mentioned that SQLite does (or can) optimise at the page level: if a page that has been "touched" by an `UPDATE` hasn't actually changed, that page won't be written to disk. If that IS the case, it still wouldn't make any difference when the 10% of changed rows are evenly distributed as in the above example (because every page would have changed), but it _would_ make a difference when the changed rows are more clustered: whether or not the complicated `WHERE` clause was present, only the pages that have actually changed would be written.
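To make the row-level side of this concrete, here's a minimal sketch using Python's `sqlite3` module. The table, column names, and the 90/10 split are all hypothetical, chosen just to mirror the example above. Note that `Cursor.rowcount` reports rows _changed_ by the statement, not pages written, so this only demonstrates the "row update" half of the argument, not the page-write behaviour:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, flag INTEGER)")

# Hypothetical data: 90 of 100 rows already hold the target value (flag = 1),
# and only every 10th row (flag = 0) genuinely needs updating.
conn.executemany("INSERT INTO t (flag) VALUES (?)",
                 [(0 if i % 10 == 0 else 1,) for i in range(100)])

# Filtered update: only the 10 rows that actually differ count as changed.
filtered = conn.execute("UPDATE t SET flag = 1 WHERE flag <> 1").rowcount
print(filtered)   # 10

# "Dumb" update: every row is rewritten, whether it changed or not.
dumb = conn.execute("UPDATE t SET flag = 1").rowcount
print(dumb)       # 100
```

The point of the paragraph above is that even though the filtered statement touches 10x fewer rows, those 10 rows may still be spread across every page, in which case the number of pages written is the same either way.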