> > I was thinking there could be some utility, but it would require care.
>
> No, there can be no utility.

I was not so sure of that, which is why I said, "a speed advantage might be obtained relative to what page caching does already." And your case against this possibility is not convincing. More on this below.

> Index -vs- Index + Table access is inherently one-after-each (sequential).
> First you access the index in order to find out what table row you want to
> access, and only after that access to the index is complete do you access
> the table row itself.

I agree as to the access time ordering (and understood it going in).

> This means that no advantage can be gained by separating the channel access
> to the index & data -- the only thing that can be eliminated is the latency
> incurred between access to the index and access to the data -- a problem
> more efficiently and cost effectively satisfied by judicious use of caching
> (which you would need anyway even if the index and data were located on
> separate channels).

Here, I believe you have overlooked an advantage that a whole-index-in-memory scheme would have over "judicious use of caching," or even a well-written caching implementation. The difference is latency. By explicitly declaring that a whole index should reside in "fast" storage, instead of waiting for a caching scheme to pull portions of that index into "fast" storage piecemeal, the total latency hit may be reduced, and it can also be moved in time to before the index is actually in demand. Both would be a win in many circumstances, assuming the memory-for-speed trade pays off. Mind you, I do not claim it would always be a win, but I think it easily could be for spinning iron oxide storage.
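To make the latency argument concrete, here is a toy model of the two strategies. All of the numbers (seek time, sequential page cost, index size, query count) are illustrative assumptions I picked for spinning disks, not measurements; the point is only that demand paging pays a random-seek penalty per cold index page at query time, while preloading pays one seek plus a cheap sequential scan up front:

```python
# Toy model: demand-paged index caching vs. preloading the whole index.
# All figures below are assumed, illustrative values, not measurements.

SEEK_MS = 8.0        # random seek + rotational latency (spinning disk)
SEQ_PAGE_MS = 0.2    # cost per page when reading sequentially
INDEX_PAGES = 1000   # size of the index, in pages
COLD_PAGES = 500     # distinct index pages first touched on the query path

# Demand paging: each first touch of an index page pays a random seek,
# and that cost lands *during* query processing.
demand_latency_during_queries = COLD_PAGES * SEEK_MS

# Preloading: one seek to reach the index, then a sequential scan of
# every page -- paid *before* any query arrives.
preload_cost_up_front = SEEK_MS + INDEX_PAGES * SEQ_PAGE_MS
preload_latency_during_queries = 0.0

print(f"demand paging, paid at query time: {demand_latency_during_queries:.1f} ms")
print(f"preload, paid up front:            {preload_cost_up_front:.1f} ms")
```

Under these assumed numbers the preload reads twice as many pages yet costs far less, and, just as importantly, the cost is incurred before the first query rather than smeared across the query stream. With solid-state storage the seek/sequential gap largely vanishes and the argument weakens accordingly.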