Index: www/arch.tcl ================================================================== --- www/arch.tcl +++ www/arch.tcl @@ -1,9 +1,9 @@ # # Run this Tcl script to generate the sqlite.html file. # -set rcsid {$Id: arch.tcl,v 1.15 2004/09/08 13:06:21 drh Exp $} +set rcsid {$Id: arch.tcl,v 1.16 2004/10/10 17:24:54 drh Exp $} source common.tcl header {Architecture of SQLite} puts {
The B-tree module requests information from the disk in fixed-size chunks. The default chunk size is 1024 bytes but can vary between 512 and 65536 bytes. -The page cache is reponsible for reading, writing, and +The page cache is responsible for reading, writing, and caching these chunks. The page cache also provides the rollback and atomic commit abstraction and takes care of locking of the database file. The B-tree driver requests particular pages from the page cache and notifies the page cache when it wants to modify pages or commit or rollback Index: www/capi3ref.tcl ================================================================== --- www/capi3ref.tcl +++ www/capi3ref.tcl @@ -1,6 +1,6 @@ -set rcsid {$Id: capi3ref.tcl,v 1.13 2004/09/30 13:43:14 drh Exp $} +set rcsid {$Id: capi3ref.tcl,v 1.14 2004/10/10 17:24:54 drh Exp $} source common.tcl header {C/C++ Interface For SQLite Version 3} puts {
Another reason for supporing multiple algorithms is that sometimes +
Another reason for supporting multiple algorithms is that sometimes it is useful to use an algorithm other than the default. Suppose, for example, you are inserting 1000 records into a database, all within a single transaction, but one of those records is malformed and causes a constraint error. Under PostgreSQL or Oracle, none of the 1000 records would get inserted. In MySQL, some subset of the records that appeared before the malformed record would be inserted -but the rest would not. Neither behavior is espeically helpful. +but the rest would not. Neither behavior is especially helpful. What you really want is to use the IGNORE algorithm to insert all but the malformed record.
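The IGNORE algorithm described in this hunk can be sketched with Python's built-in sqlite3 module (which embeds SQLite 3); the table and column names below are illustrative, not taken from the documentation being patched:

```python
import sqlite3

# Sketch of the IGNORE conflict-resolution algorithm: the malformed
# row is skipped, while the other rows in the same batch survive.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(id INTEGER PRIMARY KEY, v TEXT NOT NULL)")
rows = [(1, "a"), (2, None), (3, "c")]   # row 2 violates NOT NULL
conn.executemany("INSERT OR IGNORE INTO t VALUES (?, ?)", rows)
kept = [r[0] for r in conn.execute("SELECT id FROM t ORDER BY id")]
print(kept)   # [1, 3] -- only the malformed row was dropped
```

Under ABORT (the default) the same batch would fail partway through instead of silently skipping the bad row.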
} footer $rcsid Index: www/datatype3.tcl ================================================================== --- www/datatype3.tcl +++ www/datatype3.tcl @@ -1,6 +1,6 @@ -set rcsid {$Id: datatype3.tcl,v 1.7 2004/07/19 00:39:46 drh Exp $} +set rcsid {$Id: datatype3.tcl,v 1.8 2004/10/10 17:24:55 drh Exp $} source common.tcl header {Datatypes In SQLite Version 3} puts {A TEXT value is less than a BLOB value. When two TEXT values are compared, the C library function memcmp() is usually used to - determine the result. However this can be overriden, as described + determine the result. However this can be overridden, as described under 'User-defined collation Sequences' below.
When two BLOB values are compared, the result is always determined using memcmp().
Index: www/datatypes.tcl ================================================================== --- www/datatypes.tcl +++ www/datatypes.tcl @@ -1,9 +1,9 @@ # # Run this script to generated a datatypes.html output file # -set rcsid {$Id: datatypes.tcl,v 1.7 2004/05/31 15:06:30 drh Exp $} +set rcsid {$Id: datatypes.tcl,v 1.8 2004/10/10 17:24:55 drh Exp $} source common.tcl header {Datatypes In SQLite version 2} puts {For an expression, the datatype of the result is often determined by -the outermost operator. For example, arithmatic operators ("+", "*", "%") +the outermost operator. For example, arithmetic operators ("+", "*", "%") always return a numeric results. The string concatenation operator ("||") returns a text result. And so forth. If you are ever in doubt about the datatype of an expression you can use the special typeof() SQL function to determine what the datatype is. For example:
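The typeof() behavior the datatypes page alludes to can be sketched via Python's sqlite3 module; note this runs SQLite version 3, whereas datatypes.tcl documents version 2, so treat it as an approximation of the same idea:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The outermost operator usually determines the result's datatype:
# arithmetic yields a numeric result, concatenation yields text.
arith = conn.execute("SELECT typeof(1 + 1)").fetchone()[0]
concat = conn.execute("SELECT typeof('a' || 'b')").fetchone()[0]
print(arith, concat)   # integer text
```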
Index: www/download.tcl ================================================================== --- www/download.tcl +++ www/download.tcl @@ -1,9 +1,9 @@ # # Run this TCL script to generate HTML for the download.html file. # -set rcsid {$Id: download.tcl,v 1.14 2004/08/29 18:14:18 drh Exp $} +set rcsid {$Id: download.tcl,v 1.15 2004/10/10 17:24:55 drh Exp $} source common.tcl header {SQLite Download Page} puts {All SQLite source code is maintained in a CVS repository that is available for read-only access by anyone. You can interactively view the -respository contents and download individual files +repository contents and download individual files by visiting http://www.sqlite.org/cvstrac/dir?d=sqlite. -To access the respository directly, use the following +To access the repository directly, use the following commands:
cvs -d :pserver:anonymous@www.sqlite.org:/sqlite login Index: www/faq.tcl ================================================================== --- www/faq.tcl +++ www/faq.tcl @@ -1,9 +1,9 @@ # # Run this script to generated a faq.html output file # -set rcsid {$Id: faq.tcl,v 1.25 2004/09/18 18:00:24 drh Exp $} +set rcsid {$Id: faq.tcl,v 1.26 2004/10/10 17:24:55 drh Exp $} source common.tcl header {SQLite Frequently Asked Questions} set cnt 1 proc faq {question answer} { @@ -333,20 +333,20 @@ faq { What is the maximum size of a VARCHAR in SQLite? } {SQLite does not enforce datatype constraints. - A VARCHAR column can hold as much data as you care to put it in.
+ A VARCHAR column can hold as much data as you care to put in it. } faq { Does SQLite support a BLOB type? } {SQLite version 3.0 lets you put BLOB data into any column, even columns that are declared to hold some other type.
-SQLite version 2.8 would hold store text data without embedded +
SQLite version 2.8 will store any text data without embedded '\000' characters. If you need to store BLOB data in SQLite version 2.8 you'll want to encode that data first. There is a source file named "src/encode.c" in the SQLite version 2.8 distribution that contains implementations of functions named "sqlite_encode_binary() @@ -400,11 +400,11 @@ as much temporary disk space as the original file while it is running.
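The version 3.0 claim in this FAQ answer, that BLOB data can go into any column regardless of its declared type, can be sketched with Python's sqlite3 module (illustrative names):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE b(x TEXT)")           # declared TEXT...
blob = bytes([0, 1, 2, 0, 255])                  # ...with embedded \000 bytes
conn.execute("INSERT INTO b VALUES (?)", (blob,))
value, storage = conn.execute("SELECT x, typeof(x) FROM b").fetchone()
print(storage)   # blob -- stored as a BLOB despite the TEXT declaration
```

Version 2.8's sqlite_encode_binary() workaround mentioned above is unnecessary here because version 3 stores the raw bytes directly.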
} faq { - Can I use SQLite in my commerical product without paying royalties? + Can I use SQLite in my commercial product without paying royalties? } {Yes. SQLite is in the public domain. No claim of ownership is made to any part of the code. You can do anything you want with it.
} Index: www/fileformat.tcl ================================================================== --- www/fileformat.tcl +++ www/fileformat.tcl @@ -1,9 +1,9 @@ # # Run this script to generated a fileformat.html output file # -set rcsid {$Id: fileformat.tcl,v 1.12 2004/05/31 15:06:30 drh Exp $} +set rcsid {$Id: fileformat.tcl,v 1.13 2004/10/10 17:24:55 drh Exp $} source common.tcl header {SQLite Database File Format (Version 2)} puts {SQLite 2.X Database File Format
@@ -27,11 +27,11 @@The b-tree layer implemented by btree.c The pager layer implemented by pager.c -We wil describe each layer beginning with the bottom (pager) +We will describe each layer beginning with the bottom (pager) layer and working upwards.
2.0 The Pager Layer
@@ -639,11 +639,11 @@ statement. The offsets at the beginning of the record contain the byte index of the corresponding column value. Thus, Offset 0 contains the byte index for Value 0, Offset 1 contains the byte offset of Value 1, and so forth. The number of bytes in a column value can always be found by subtracting offsets. This allows NULLs to be -recovered from the record unabiguously. +recovered from the record unambiguously.Most columns are stored in the b-tree data as described above. The one exception is column that has type INTEGER PRIMARY KEY. @@ -676,11 +676,11 @@ sqliteRealToSortable() function in util.c of the SQLite sources for additional information on this encoding.) Numbers are also nul-terminated. Text values consists of the character 'c' followed by a copy of the text string and a nul-terminator. These encoding rules result in NULLs being sorted first, followed by numerical values in numerical -order, followed by text values in lexigraphical order. +order, followed by text values in lexicographical order.
4.4 SQL Schema Storage And Root B-Tree Page Numbers
Index: www/formatchng.tcl ================================================================== --- www/formatchng.tcl +++ www/formatchng.tcl @@ -1,9 +1,9 @@ # # Run this Tcl script to generate the formatchng.html file. # -set rcsid {$Id: formatchng.tcl,v 1.11 2004/06/16 03:02:04 drh Exp $ } +set rcsid {$Id: formatchng.tcl,v 1.12 2004/10/10 17:24:55 drh Exp $ } source common.tcl header {File Format Changes in SQLite} puts {
File Format Changes in SQLite
@@ -152,11 +152,11 @@2.7.6 to 2.8.0 2003-Feb-14 Version 2.8.0 introduces a change to the format of the rollback journal file. The main database file format is unchanged. Versions 2.7.6 and earlier can read and write 2.8.0 databases and vice versa. - Version 2.8.0 can rollback a transation that was started by version + Version 2.8.0 can rollback a transaction that was started by version 2.7.6 and earlier. But version 2.7.6 and earlier cannot rollback a transaction started by version 2.8.0 or later.
The only time this would ever be an issue is when you have a program using version 2.8.0 or later that crashes with an incomplete Index: www/lang.tcl ================================================================== --- www/lang.tcl +++ www/lang.tcl @@ -1,19 +1,19 @@ # # Run this Tcl script to generate the sqlite.html file. # -set rcsid {$Id: lang.tcl,v 1.73 2004/10/05 02:41:43 drh Exp $} +set rcsid {$Id: lang.tcl,v 1.74 2004/10/10 17:24:55 drh Exp $} source common.tcl header {Query Language Understood by SQLite} puts {
SQL As Understood By SQLite
The SQLite library understands most of the standard SQL language. But it does omit some features while at the same time adding a few features of its own. This document attempts to -describe percisely what parts of the SQL language SQLite does +describe precisely what parts of the SQL language SQLite does and does not support. A list of keywords is given at the end.
In all of the syntax diagrams that follow, literal text is shown in bold blue. Non-terminal symbols are shown in italic red. Operators @@ -200,11 +200,11 @@ Transactions can be started manually using the BEGIN command. Such transactions usually persist until the next COMMIT or ROLLBACK command. But a transaction will also ROLLBACK if the database is closed or if an error occurs and the ROLLBACK conflict resolution algorithm is specified. -See the documention on the ON CONFLICT +See the documentation on the ON CONFLICT clause for additional information about the ROLLBACK conflict resolution algorithm.
@@ -216,11 +216,11 @@ operation against a database creates a SHARED lock and the first write operation creates a RESERVED lock. Because the acquisition of locks is deferred until they are needed, it is possible that another thread or process could create a separate transaction and write to the database after the BEGIN on the current thread has executed. -If the transation is immediate, then RESERVED locks +If the transaction is immediate, then RESERVED locks are acquired on all databases as soon as the BEGIN command is executed, without waiting for the database to be used. After a BEGIN IMMEDIATE, you are guaranteed that no other thread or process will be able to write to the database or do a BEGIN IMMEDIATE or BEGIN EXCLUSIVE. Other processes can continue @@ -376,11 +376,11 @@ ON CONFLICT for additional information.
The exact text of each CREATE INDEX statement is stored in the sqlite_master or sqlite_temp_master table, depending on whether the table -being indexed is temporary. Everytime the database is opened, +being indexed is temporary. Every time the database is opened, all CREATE INDEX statements are read from the sqlite_master table and used to regenerate SQLite's internal representation of the index layout.
Indexes are removed with the DROP INDEX @@ -494,11 +494,11 @@ the result set of a query. The names of the table columns are the names of the columns in the result.
The exact text of each CREATE TABLE statement is stored in the sqlite_master -table. Everytime the database is opened, all CREATE TABLE statements +table. Every time the database is opened, all CREATE TABLE statements are read from the sqlite_master table and used to regenerate SQLite's internal representation of the table layout. If the original command was a CREATE TABLE AS then an equivalent CREATE TABLE statement is synthesized and stored in sqlite_master in place of the original command. @@ -869,11 +869,11 @@ = == != <> IN AND OR -
Supported unary operaters are these:
+Supported unary operators are these:
@@ -892,11 +892,11 @@ The operator [Operator %] outputs the remainder of its left operand modulo its right operand." puts { -- + ! ~The LIKE operator does a wildcard comparision. The operand +
The LIKE operator does a wildcard comparison. The operand to the right contains the wildcards.} puts "A percent symbol [Operator %] in the right operand matches any sequence of zero or more characters on the left. An underscore [Operator _] on the right matches any single character on the left." @@ -949,11 +949,11 @@
When a SELECT appears within an expression but is not the right operand of an IN operator, then the first row of the result of the SELECT becomes the value used in the expression. If the SELECT yields more than one result row, all rows after the first are ignored. If -the SELECT yeilds no rows, then the value of the SELECT is NULL.
+the SELECT yields no rows, then the value of the SELECT is NULL.Both simple and aggregate functions are supported. A simple function can be used in any expression. Simple functions return a result immediately based on their inputs. Aggregate functions may only be used in a SELECT statement. Aggregate functions compute @@ -1038,11 +1038,11 @@
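The scalar-subquery rule in the hunk above (first row wins, empty result is NULL) can be sketched with Python's sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(v)")
conn.executemany("INSERT INTO t VALUES (?)", [(3,), (1,), (2,)])
# Only the first row of the subquery result is used...
first = conn.execute("SELECT (SELECT v FROM t ORDER BY v)").fetchone()[0]
# ...and a subquery yielding no rows evaluates to NULL (None in Python).
empty = conn.execute("SELECT (SELECT v FROM t WHERE v > 99)").fetchone()[0]
print(first, empty)   # 1 None
```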
@@ -1259,11 +1259,11 @@ If a NOT NULL constraint violation occurs, the NULL value is replaced by the default value for that column. If the column has no default value, then the ABORT algorithm is used. min(X,Y,...) Return the argument with the minimum value. Arguments -may be strings in addition to numbers. The mminimum value is determined +may be strings in addition to numbers. The minimum value is determined by the usual sort order. Note that min() is a simple function when it has 2 or more arguments but converts to an aggregate function if given only a single argument. When this conflict resolution strategy deletes rows in order to -statisfy a constraint, it does not invoke delete triggers on those +satisfy a constraint, it does not invoke delete triggers on those rows. But that may change in a future release.
The algorithm specified in the OR clause of a COPY, INSERT, or UPDATE overrides any algorithm specified in a CREATE TABLE or CREATE INDEX. If no algorithm is specified anywhere, the ABORT algorithm is used.
@@ -1313,11 +1313,11 @@ pragma to check the cache size permanently.
PRAGMA database_list;
For each open database, invoke the callback function once with information about that database. Arguments include the index and - the name the datbase was attached with. The first row will be for + the name the database was attached with. The first row will be for the main database. The second row will be for the database used to store temporary tables.
PRAGMA default_cache_size; @@ -1328,11 +1328,11 @@ This pragma works like the cache_size pragma with the additional feature that it changes the cache size persistently. With this pragma, you can set the cache size once and that setting is retained and reused - everytime you reopen the database.
PRAGMA default_synchronous;
PRAGMA default_synchronous = FULL; (2)
PRAGMA default_synchronous = NORMAL; (1)
@@ -1541,11 +1541,11 @@
puts "a table followed by [Operator .*] then the result is all columns"
puts {in that one table.
The DISTINCT keyword causes a subset of result rows to be returned, in which each result row is different. NULL values are not treated as -distinct from eachother. The default behavior is that all result rows +distinct from each other. The default behavior is that all result rows be returned, which can be made explicit with the keyword ALL.
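The corrected sentence above, that NULLs are not distinct from each other under SELECT DISTINCT, can be sketched with Python's sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(v)")
conn.executemany("INSERT INTO t VALUES (?)", [(None,), (None,), (1,)])
# The two NULLs are not treated as distinct from each other,
# so DISTINCT collapses them into a single result row.
distinct = [r[0] for r in conn.execute("SELECT DISTINCT v FROM t ORDER BY v")]
print(distinct)   # [None, 1] -- NULL sorts first in SQLite's default order
```

Contrast this with a UNIQUE column, where (as the lang.tcl text notes further down) NULLs are still considered distinct.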
The query is executed against one or more tables specified after
the FROM keyword. If multiple tables names are separated by commas,
then the query is against the cross join of the various tables.
@@ -1631,20 +1631,20 @@
Syntax {sql-statement} {
VACUUM [ The VACUUM command is an SQLite extension modelled after a similar
+ The VACUUM command is an SQLite extension modeled after a similar
command found in PostgreSQL. If VACUUM is invoked with the name of a
table or index then it is supposed to clean up the named table or index.
In version 1.0 of SQLite, the VACUUM command would invoke
gdbm_reorganize() to clean up the backend database file.
VACUUM became a no-op when the GDBM backend was removed from
SQLite in version 2.0.0.
-VACUUM was reimplimented in version 2.8.1.
+VACUUM was reimplemented in version 2.8.1.
The index or table name argument is now ignored.
When an object (table, index, or trigger) is dropped from the
database, it leaves behind empty space. This makes the database
@@ -1684,11 +1684,11 @@
context, otherwise as an identifier.
SQLite was originally coded in such a way that the answer to
all questions in the chart below would be "Yes". But the
-expriments run on other SQL engines showed that none of them
+experiments run on other SQL engines showed that none of them
worked this way. So SQLite was modified to work the same as
Oracle, PostgreSQL, and DB2. This involved making NULLs
indistinct for the purposes of the SELECT DISTINCT statement and
for the UNION operator in a SELECT. NULLs are still distinct
in a UNIQUE column. This seems somewhat arbitrary, but the desire
Index: www/oldnews.tcl
==================================================================
--- www/oldnews.tcl
+++ www/oldnews.tcl
@@ -92,11 +92,11 @@
}
newsitem {2004-Jun-09} {Version 2.8.14 Released} {
SQLite version 2.8.14 is a patch release to the stable 2.8 series.
There is no reason to upgrade if 2.8.13 is working ok for you.
- This is only a bug-fix release. Most developement effort is
+ This is only a bug-fix release. Most development effort is
going into version 3.0.0 which is due out soon.
}
newsitem {2004-May-31} {CVS Access Temporarily Disabled} {
Anonymous access to the CVS repository will be suspended
@@ -114,6 +114,6 @@
The first beta is scheduled for release on 2004-July-01.
Plans are to continue to support SQLite version 2.8 with
bug fixes. But all new development will occur in version 3.0.
}
-footer {$Id: oldnews.tcl,v 1.5 2004/09/18 18:00:24 drh Exp $}
+footer {$Id: oldnews.tcl,v 1.6 2004/10/10 17:24:55 drh Exp $}
Index: www/opcode.tcl
==================================================================
--- www/opcode.tcl
+++ www/opcode.tcl
@@ -1,9 +1,9 @@
#
# Run this Tcl script to generate the sqlite.html file.
#
-set rcsid {$Id: opcode.tcl,v 1.13 2004/05/31 15:06:30 drh Exp $}
+set rcsid {$Id: opcode.tcl,v 1.14 2004/10/10 17:24:55 drh Exp $}
source common.tcl
header {SQLite Virtual Machine Opcodes}
puts {
Each instruction in the virtual machine consists of an opcode and
up to three operands named P1, P2 and P3. P1 may be an arbitrary
integer. P2 must be a non-negative integer. P2 is always the
jump destination in any operation that might cause a jump.
Index: www/speed.tcl
==================================================================
--- www/speed.tcl
+++ www/speed.tcl
@@ -1,9 +1,9 @@
#
# Run this Tcl script to generate the speed.html file.
#
-set rcsid {$Id: speed.tcl,v 1.14 2004/05/31 15:06:30 drh Exp $ }
+set rcsid {$Id: speed.tcl,v 1.15 2004/10/10 17:24:55 drh Exp $ }
source common.tcl
header {SQLite Database Speed Comparison}
puts {
I am told that the default PostgreSQL configuration in RedHat 7.3
is unnecessarily conservative (it is designed to
work on a machine with 8MB of RAM) and that PostgreSQL could
-be made to run a lot faster with some knowledgable configuration
+be made to run a lot faster with some knowledgeable configuration
tuning.
Matt Sergeant reports that he has tuned his PostgreSQL installation
and rerun the tests shown below. His results show that
PostgreSQL and MySQL run at about the same speed. For Matt's
results, visit
@@ -246,11 +246,11 @@
This test still does 100 full table scans but it uses
-uses string comparisons instead of numerical comparisions.
+string comparisons instead of numerical comparisons.
SQLite is over three times faster than PostgreSQL here and about 30%
faster than MySQL.
Simplicity in a database engine can be either a strength or a
weakness, depending on what you are trying to do. In order to
achieve simplicity, SQLite has had to sacrifice other characteristics
-that some people find useful, such as high concurrancy, fine-grained
+that some people find useful, such as high concurrency, fine-grained
access control, a rich set of built-in functions, stored procedures,
esoteric SQL language features, XML and/or Java extensions,
tera- or peta-byte scalability, and so forth. If you need these
kinds of features and don't mind the added complexity that they
bring, then SQLite is probably not the database for you.
@@ -148,11 +148,11 @@
"keyword"
Interpreted as an identifier if it matches a known identifier
and occurs in a legal identifier context, otherwise as a string.
[keyword]
- Always interpreted as an identifer. (This notation is used
+ Always interpreted as an identifier. (This notation is used
by MS Access and SQL Server.)
Fallback Keywords
Index: www/nulls.tcl
==================================================================
--- www/nulls.tcl
+++ www/nulls.tcl
@@ -1,9 +1,9 @@
#
# Run this script to generated a nulls.html output file
#
-set rcsid {$Id: nulls.tcl,v 1.7 2004/08/30 14:58:12 drh Exp $}
+set rcsid {$Id: nulls.tcl,v 1.8 2004/10/10 17:24:55 drh Exp $}
source common.tcl
header {NULL Handling in SQLite}
puts {
NULL Handling in SQLite Versus Other Database Engines
@@ -28,11 +28,11 @@
SQLite Virtual Machine Opcodes
}
@@ -64,11 +64,11 @@
contained in comments in the source file. In fact, the opcode table
in this document
was generated by scanning the vdbe.c source file
and extracting the necessary information from comments. So the
source code comments are really the canonical source of information
-about the virtual macchine. When in doubt, refer to the source code.
+about the virtual machine. When in doubt, refer to the source code.
Database Speed Comparison
@@ -73,11 +73,11 @@
SQLite 2.7.6 (nosync): 3.372 Test 6: Creating an index
Index: www/whentouse.tcl
==================================================================
--- www/whentouse.tcl
+++ www/whentouse.tcl
@@ -1,9 +1,9 @@
#
# Run this TCL script to generate HTML for the goals.html file.
#
-set rcsid {$Id: whentouse.tcl,v 1.1 2004/01/27 15:58:38 drh Exp $}
+set rcsid {$Id: whentouse.tcl,v 1.2 2004/10/10 17:24:55 drh Exp $}
puts {
Appropriate Uses Of SQLite
@@ -36,11 +36,11 @@
Stand-in for an enterprise database during demos or testing
-If you are writting a client application for an enterprise database engine, +If you are writing a client application for an enterprise database engine, it makes sense to use a generic database backend that allows you to connect to many different kinds of SQL database engines. It makes even better sense to go ahead and include SQLite in the mix of supported database and to statically link the SQLite engine in with the client. That way the client program @@ -205,18 +205,18 @@ database corruption. Because this problem results from bugs in the underlying filesystem implementation, there is nothing SQLite can do to prevent it.
A good rule of thumb is that you should avoid using SQLite -in situations where the same database will be accessed simultenously +in situations where the same database will be accessed simultaneously from many computers over a network filesystem.
High-volume Websites
SQLite will normally work fine as the database backend to a website. -But if you website is so busy that your are thinking of splitted the +But if your website is so busy that you are thinking of splitting the database component off onto a separate machine, then you should definitely consider using an enterprise-class client/server database engine instead of SQLite.
High Concurrancy
+High Concurrency
SQLite uses reader/writer locks on the entire database file. That means if any process is reading from any part of the database, all other processes are prevented from writing any other part of the database. @@ -244,11 +244,11 @@ all other processes are prevented from reading any other part of the database. For many situations, this is not a problem. Each application does its database work quickly and moves on, and no lock lasts for more than a few dozen milliseconds. But there are some problems that require -more concurrancy, and those problems will need to seek a different +more concurrency, and those problems will need to seek a different solution.