There is no point in moving "small fields" from the "main" record object to a "secondary" record object if the entire record will fit as payload on one page (and it is not part of the data normalization process). A difference only comes into play if the "entire record" will not fit on a page and must consume an overflow page (particularly more than one overflow page), because then access to any field located after the start of the overflow requires retrieving the overflow page, finding the pointer therein to the next overflow page, retrieving that page, and so on and so forth until you have finally found the "column" you are looking for. Similarly, an "entire record" which cannot fit on a single page will require the I/O of the page on which the initial fragment is located plus all of the other pages comprising the overflow chain whenever that record is "re-written" (as in INSERT/REPLACE/UPDATE/DELETE).

The reason this is usually mentioned for BLOBs is that the very name suggests largeness (Binary Large OBject). But it also applies to CLOBs (Character Large OBjects) -- otherwise called TEXT fields. It is just that the common connotation of TEXT is that it is of piddling size, not "Large" size. If there were an MPREAL (MultiPrecision REAL) type that could consume more than one page of space, it would have exactly the same considerations. In other words, the consideration applies to the size of the record, not the type of the data. Putting 1 million REALs in a single row would have exactly the same consequences, because that would require 8 MEGABYTES of page payload to be I/O'd on every INSERT/REPLACE/UPDATE/DELETE, and multiple pages to be read via the overflow chain until the requested data in the record is located.
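A rough sketch of the access cost described above, just to make the arithmetic concrete. This is NOT SQLite's actual record/cell format -- the payload sizes and the function are assumptions chosen as round numbers -- but it shows why a column located after the overflow point costs one page read per overflow page preceding it:

```python
# Illustrative model only (not the real on-disk format): a record's first
# fragment lives on the b-tree page, and the rest spills onto a singly
# linked chain of overflow pages. Reaching a byte offset means reading
# every page up to and including the one holding that offset, because
# each overflow page contains the pointer to the next one.

def pages_read_to_reach(column_offset: int,
                        first_page_payload: int = 4000,
                        overflow_payload: int = 4000) -> int:
    """Count sequential page reads before the byte at column_offset
    is available. Payload sizes are assumed, not SQLite's actual values."""
    if column_offset < first_page_payload:
        return 1  # column sits in the initial fragment: one page read
    remaining = column_offset - first_page_payload
    # 1 read for the initial page, 1 for the overflow page holding the
    # byte, plus one per intervening overflow page walked to get there
    return 2 + remaining // overflow_payload

# An early "small field" costs a single page read:
print(pages_read_to_reach(10))          # -> 1

# A column 8 MB into the record (e.g. the last of 1 million REALs)
# forces the entire overflow chain to be walked first:
print(pages_read_to_reach(8_000_000))   # -> 2001
```

This is also why the standard advice is to declare large BLOB/TEXT columns last in the table: every column stored before the overflow point stays reachable in one page read.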