This is a very curious observation, but I thought I would post it here in the hope that I can learn something from it. The final query (the actual SELECT) in the code above is:

<code>
SELECT O, OtoD || D AS Path, D, ShortestDistance
FROM ShortestPath
INNER JOIN OD
  ON ShortestPath.O = OD.origin
 AND ShortestPath.D = OD.destination
ORDER BY ShortestDistance
LIMIT 1;
</code>

I realized that the first join condition is not needed, because all the paths start from the specified origin anyway. So I got rid of it and made my final SELECT this:

<code>
SELECT O, OtoD || D AS Path, D, ShortestDistance
FROM ShortestPath
INNER JOIN OD
  ON ShortestPath.D = OD.destination
ORDER BY ShortestDistance
LIMIT 1;
</code>

Surprise, surprise! The query takes three times as long to complete after this change. I am at a loss to find any reason why removing a useless, redundant join condition would cause a query to take longer, and significantly longer at that. For the tests I increased the limit in the recursive part to 500000, and instead of taking about 9 seconds to run, the query takes 25 seconds without the redundant join condition. I ran the test a dozen times to make sure I was not just seeing the effect of some system lag on my computer or something like that.

If somebody can provide a plausible reason why asking SQLite to do extra, useless work enables it to accomplish a task three times faster, I am all ears! Thank you.
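
In case it helps, one thing I could try is comparing the plans SQLite chooses for the two variants with EXPLAIN QUERY PLAN; a different join order (or an automatic index in one case) would at least show where the extra time goes. A minimal sketch, reusing the ShortestPath CTE and OD table from above:

<code>
-- Compare the query plans for the two-condition and one-condition joins.
EXPLAIN QUERY PLAN
SELECT O, OtoD || D AS Path, D, ShortestDistance
FROM ShortestPath
INNER JOIN OD
  ON ShortestPath.O = OD.origin
 AND ShortestPath.D = OD.destination
ORDER BY ShortestDistance
LIMIT 1;

EXPLAIN QUERY PLAN
SELECT O, OtoD || D AS Path, D, ShortestDistance
FROM ShortestPath
INNER JOIN OD
  ON ShortestPath.D = OD.destination
ORDER BY ShortestDistance
LIMIT 1;
</code>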