I'd call it strong evidence that they aren't doing a full sort. A full external sort of 320 MB of data in a process using only 10 MB of memory would need more than 320 MB of reads plus 240 MB of writes, and there weren't enough writes to even copy the input once (unless individual writes were bigger than the reads, which I suppose is possible).

At the same time, there seemed to be far too many writes to be maintaining a small index, even if they issued a write (or a few writes) every time the top-10 contents changed. For randomly ordered input, each time you double the number of records you've already processed, you'd expect between 5 and 10 new entries in the top-10. log2(20e6) is about 24 doublings, so writing out every change to a top-10 index should take only a few hundred writes. If instead of the top-10 they kept the top page of the index (a few hundred entries), that might explain the number of writes I saw.
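To sanity-check the "few hundred writes" estimate, here's a small sketch (my own illustration, not their code) that scans random values while keeping a running top-10 in a min-heap and counts how often the top-10 changes. For i.i.d. input, the i-th record enters the top-k with probability about k/i, so the expected update count is roughly k·ln(n/k); for k=10 and n=20e6 that's about 10·ln(2e6) ≈ 145, i.e. a few hundred at most.

```python
import heapq
import math
import random

def count_topk_updates(records, k=10):
    """Count how many times a running top-k (min-heap of the k largest
    values seen so far) changes during a single scan of records."""
    heap = []       # min-heap holding the current top-k
    updates = 0
    for x in records:
        if len(heap) < k:
            heapq.heappush(heap, x)
            updates += 1
        elif x > heap[0]:
            # New value displaces the smallest of the current top-k.
            heapq.heapreplace(heap, x)
            updates += 1
    return updates

random.seed(0)
n, k = 200_000, 10   # smaller n than 20e6, just to keep the demo fast
observed = count_topk_updates((random.random() for _ in range(n)), k)
expected = k * math.log(n / k)   # ~ k * (H_n - H_k) for random order
print(observed, round(expected))
```

Scaling the same formula up to 20 million records only adds a factor of ln(20e6/200e3) ≈ 4.6 more expected updates per slot, which is why the count stays in the low hundreds even for the full dataset.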