/ Check-in [799d31d9]


Overview
Comment: Move RowHashBlock.nUsed to RowHash.nUsed. Fix a typo in a comment in test_async.c. (CVS 6533)
Downloads: Tarball | ZIP archive | SQL archive
Timelines: family | ancestors | descendants | both | trunk
Files: files | file ages | folders
SHA1: 799d31d99fd18a6f99862433384e37d6747ee5b3
User & Date: danielk1977 2009-04-21 18:20:45
Context
2009-04-22
00:47
Extend the Rowset object to contain all the capabilities of Rowhash in addition to its legacy capabilities. Use Rowset to replace Rowhash. In addition to requiring less code, this removes the 2^32 result row limitation, uses less memory, and gives better bounds on worst-case performance. The Rowhash implementation has yet to be removed. (CVS 6534) check-in: b101cf70 user: drh tags: trunk
2009-04-21
18:20
Move RowHashBlock.nUsed to RowHash.nUsed. Fix a typo in a comment in test_async.c. (CVS 6533) check-in: 799d31d9 user: danielk1977 tags: trunk
17:23
Fix a segfault that followed a malloc failure introduced by (6527). (CVS 6532) check-in: 08e71b11 user: danielk1977 tags: trunk
Changes

Changes to src/rowhash.c.

    27     27   ** The insert batch number is a parameter to the TEST primitive.  The
    28     28   ** hash table is rebuilt whenever the batch number increases.  TEST
    29     29   ** operations only look for INSERTs that occurred in prior batches.
    30     30   **
    31     31   ** The caller is responsible for insuring that there are no duplicate
    32     32   ** INSERTs.
    33     33   **
    34         -** $Id: rowhash.c,v 1.3 2009/04/21 16:15:15 drh Exp $
           34  +** $Id: rowhash.c,v 1.4 2009/04/21 18:20:45 danielk1977 Exp $
    35     35   */
    36     36   #include "sqliteInt.h"
    37     37   
    38     38   /*
    39     39   ** An upper bound on the size of heap allocations made by this module.
    40     40   ** Limiting the size of allocations helps to avoid memory fragmentation.
    41     41   */
................................................................................
   121    121   ** The linked list of RowHashBlock objects also provides a way to sequentially
   122    122   ** scan all elements in the RowHash.  This sequential scan is used when
   123    123   ** rebuilding the hash table.  The hash table is rebuilt after every 
   124    124   ** batch of inserts.
   125    125   */
   126    126   struct RowHashBlock {
   127    127     struct RowHashBlockData {
   128         -    int nUsed;                /* Num of aElem[] currently used in this block */
   129    128       RowHashBlock *pNext;      /* Next RowHashBlock object in list of them all */
   130    129     } data;
   131    130     RowHashElem aElem[ROWHASH_ELEM_PER_BLOCK]; /* Available RowHashElem objects */
   132    131   };
   133    132   
   134    133   /*
   135    134   ** RowHash structure. References to a structure of this type are passed
   136    135   ** around and used as opaque handles by code in other modules.
   137    136   */
   138    137   struct RowHash {
          138  +  int nUsed;              /* Number of used entries in first RowHashBlock */
   139    139     int nEntry;             /* Number of used entries over all RowHashBlocks */
   140    140     int iBatch;             /* The current insert batch number */
   141    141     u8 nHeight;             /* Height of tree of hash pages */
   142    142     u8 nLinearLimit;        /* Linear search limit (used if pHash==0) */
   143    143     int nBucket;            /* Number of buckets in hash table */
   144    144     RowHashPage *pHash;     /* Pointer to root of hash table tree */
   145    145     RowHashBlock *pBlock;   /* Linked list of RowHashBlocks */
................................................................................
   266    266     /* Allocate the hash-table. */
   267    267     if( allocHashTable(&p->pHash, p->nHeight, &nLeaf) ){
   268    268       return SQLITE_NOMEM;
   269    269     }
   270    270   
   271    271     /* Insert all values into the hash-table. */
   272    272     for(pBlock=p->pBlock; pBlock; pBlock=pBlock->data.pNext){
   273         -    RowHashElem * const pEnd = &pBlock->aElem[pBlock->data.nUsed];
          273  +    RowHashElem * const pEnd = &pBlock->aElem[
          274  +      pBlock==p->pBlock?p->nUsed:ROWHASH_ELEM_PER_BLOCK
          275  +    ];
   274    276       RowHashElem *pIter;
   275    277       for(pIter=pBlock->aElem; pIter<pEnd; pIter++){
   276    278         RowHashElem **ppElem = findHashBucket(p, pIter->iVal);
   277    279         pIter->pNext = *ppElem;
   278    280         *ppElem = pIter;
   279    281       }
   280    282     }
................................................................................
   350    352       }
   351    353       p->db = db;
   352    354       *pp = p;
   353    355     }
   354    356   
   355    357     /* If the current RowHashBlock is full, or if the first RowHashBlock has
   356    358     ** not yet been allocated, allocate one now. */ 
   357         -  if( !p->pBlock || p->pBlock->data.nUsed==ROWHASH_ELEM_PER_BLOCK ){
          359  +  if( !p->pBlock || p->nUsed==ROWHASH_ELEM_PER_BLOCK ){
   358    360       RowHashBlock *pBlock = (RowHashBlock*)sqlite3Malloc(sizeof(RowHashBlock));
   359    361       if( !pBlock ){
   360    362         return SQLITE_NOMEM;
   361    363       }
   362         -    pBlock->data.nUsed = 0;
   363    364       pBlock->data.pNext = p->pBlock;
   364    365       p->pBlock = pBlock;
          366  +    p->nUsed = 0;
   365    367     }
          368  +  assert( p->nUsed==(p->nEntry % ROWHASH_ELEM_PER_BLOCK) );
   366    369   
   367    370     /* Add iVal to the current RowHashBlock. */
   368         -  p->pBlock->aElem[p->pBlock->data.nUsed].iVal = iVal;
   369         -  p->pBlock->data.nUsed++;
          371  +  p->pBlock->aElem[p->nUsed].iVal = iVal;
          372  +  p->nUsed++;
   370    373     p->nEntry++;
   371    374     return SQLITE_OK;
   372    375   }
   373    376   
   374    377   /*
   375    378   ** Destroy the RowHash object passed as the first argument.
   376    379   */

Changes to src/test_async.c.

     6      6   **
     7      7   **    May you do good and not evil.
     8      8   **    May you find forgiveness for yourself and forgive others.
     9      9   **    May you share freely, never taking more than you give.
    10     10   **
    11     11   *************************************************************************
    12     12   **
    13         -** $Id: test_async.c,v 1.57 2009/04/07 11:21:29 danielk1977 Exp $
           13  +** $Id: test_async.c,v 1.58 2009/04/21 18:20:45 danielk1977 Exp $
    14     14   **
    15     15   ** This file contains an example implementation of an asynchronous IO 
    16     16   ** backend for SQLite.
    17     17   **
    18     18   ** WHAT IS ASYNCHRONOUS I/O?
    19     19   **
    20     20   ** With asynchronous I/O, write requests are handled by a separate thread
................................................................................
    69     69   ** Multiple connections from within a single process that use this
    70     70   ** implementation of asynchronous IO may access a single database
    71     71   ** file concurrently. From the point of view of the user, if all
    72     72   ** connections are from within a single process, there is no difference
    73     73   ** between the concurrency offered by "normal" SQLite and SQLite
    74     74   ** using the asynchronous backend.
    75     75   **
    76         -** If connections from within multiple database files may access the
           76  +** If connections from within multiple processes may access the
    77     77   ** database file, the ENABLE_FILE_LOCKING symbol (see below) must be
    78     78   ** defined. If it is not defined, then no locks are established on 
    79     79   ** the database file. In this case, if multiple processes access 
    80     80   ** the database file, corruption will quickly result.
    81     81   **
    82     82   ** If ENABLE_FILE_LOCKING is defined (the default), then connections 
    83     83   ** from within multiple processes may access a single database file