Pre-release Versions of System.Data.SQLite packages?
(1) By anonymous on 2020-09-03 13:37:56 [link] [source]
We are in the process of converting a .Net application from Microsoft Access to SQLite. Some of the SQL features supported by Access were recently added in SQLite version 3.33.0. The current System.Data.SQLite 1.0.113.1 packages are compiled against SQLite 3.32.1. Can someone build and publish alpha versions of these packages that are compiled against SQLite 3.33.0?
(2) By Keith Medcalf (kmedcalf) on 2020-09-03 15:41:14 in reply to 1 [link] [source]
What feature is that?
(3) By anonymous on 2020-09-03 15:57:02 in reply to 2 [link] [source]
The new UPDATE ... FROM ... query syntax.
(8) By Keith Medcalf (kmedcalf) on 2020-09-03 21:07:52 in reply to 3 [link] [source]
The UPDATE ... FROM ... query syntax that was added to SQLite3 in version 3.33.0 is not compatible with the Sybase UPDATE ... FROM ... syntax supported by Microsoft products.
https://sqlite.org/lang_update.html#update_from_in_other_sql_database_engines
You are aware that even though support for UPDATE ... FROM ... was added, you will only be able to use that capability from "native SQL" and that Microsoft products (like Access) will neither be able to parse (make pretty pictures from) nor compose (make pretty pictures into) SQL where UPDATE ... FROM ... is concerned?
(17) By anonymous on 2020-09-04 12:34:14 in reply to 8 [link] [source]
Yes, we are aware. The feature, however, allows us to translate UPDATE ... JOIN statements from Access to SQLite without having to perform individual row updates. This reduces the number of round trips to the database and improves performance. All of our SQL is in data layer objects which allow for database agnostic updates from the application's perspective. Once the SQLite implementation is complete we will replace the Access data layer with the SQLite one.
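For illustration, a minimal sketch of the kind of translation being described, using hypothetical orders/customers tables; it assumes a System.Data.SQLite build whose native library is SQLite 3.33.0 or later (which is exactly what this thread is asking for):

    using System.Data.SQLite;

    class UpdateFromSketch
    {
        static void Main()
        {
            using (var conn = new SQLiteConnection("Data Source=app.db"))
            {
                conn.Open();
                using (var cmd = conn.CreateCommand())
                {
                    // Access/Sybase style (not accepted by SQLite):
                    //   UPDATE orders INNER JOIN customers c ON orders.customer_id = c.id
                    //   SET orders.region = c.region;
                    //
                    // SQLite 3.33+ UPDATE ... FROM equivalent, issued as a single
                    // statement instead of one UPDATE per row:
                    cmd.CommandText =
                        @"UPDATE orders
                             SET region = c.region
                            FROM customers AS c
                           WHERE orders.customer_id = c.id;";
                    cmd.ExecuteNonQuery();
                }
            }
        }
    }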
(4) By AAsk (aa2e72e) on 2020-09-03 16:32:57 in reply to 1 [link] [source]
I started researching with System.Data.SQLite and then stumbled on Microsoft.Data.SQLite.
I am pondering the following questions:
- Why not simply use Microsoft.Data.Odbc? This makes for a smaller footprint application.
- What do the SQLite packages offer beyond Microsoft.Data.Odbc?
Thoughts?
(5) By Ryan Smith (cuz) on 2020-09-03 19:50:11 in reply to 4 [link] [source]
This question is subjective.
Everything can be used - The choice really depends on the use-case and the list of reasons/caveats is too long to make a worthy post here.
I suggest you find a tutorial or specification for which functions are available via ODBC, and then others for which functions are available through Microsoft.Data.SQLite and System.Data.SQLite.
You might find that the things you wish to do are very well and fully available through any of these, in which case using the one with the smallest footprint makes sense.
If you need more, like in-depth DB manipulation, multiple-connection concurrency, control over the compile options, or the ability to include Virtual tables or your own User-Defined-Functions (the list goes on and on) - it's imperative to use either the engine directly or a wrapper that exposes the broadest/newest set of functions.
Either way, you first have to know what you need, and then learn about the options, then you can decide which option will work for you.
(7) By AAsk (aa2e72e) on 2020-09-03 20:37:54 in reply to 5 [link] [source]
For any real world application, I would
- Not want to be more than two versions behind the current SQLite release.
- Want to have an environment for testing the next or work-in-progress version of SQLite.
I have no control over what version of SQLite is available in System.Data.SQLite or Microsoft.Data.Sqlite.
Being in control of what version to use can make a world of difference.
- An enhancement such as .read '|...' (reading from a pipe), currently in the pipeline, can save hours of work.
- So would an enhancement to ALTER TABLE that permits a column to be dropped.
Using the ODBC connection, I can choose which version of SQLite I use. However some useful features are unavailable:
a. SQLiteConfigDbOpsEnum is unavailable - I will need to construct/build this.
b. SetConfigurationOption is unavailable - I am not sure how I would implement something like
SetConfigurationOption(SQLiteConfigDbOpsEnum.SQLITE_DBCONFIG_ENABLE_LOAD_EXTENSION, true);
c. BindFunction is unavailable; this would be a very valuable feature as it enables any method in a DLL (including ones I write myself) to be bound and used inside SQL statements.
d. EnableExtensions is unavailable (for loading extensions such as mod_spatialite) - but I have a workaround. (A sketch of items b-d through System.Data.SQLite appears at the end of this post.)
The file sqlite3.def simply lists the names of the exported functions. It does not provide the methods' signatures, which I believe must be known in order to build SQLite.Interop.dll.
The choice is not easy!
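For reference, a minimal sketch of items b-d above as they look through System.Data.SQLite. The calls are the ones named in the list; the database file name and extension name are illustrative only:

    using System;
    using System.Data.SQLite;

    class NativeWrapperSketch
    {
        static void Main()
        {
            using (var conn = new SQLiteConnection("Data Source=app.db"))
            {
                conn.Open();

                // b. Set a per-connection configuration option.
                conn.SetConfigurationOption(
                    SQLiteConfigDbOpsEnum.SQLITE_DBCONFIG_ENABLE_LOAD_EXTENSION, true);

                // d. Enable and load a runtime loadable extension.
                conn.EnableExtensions(true);
                conn.LoadExtension("mod_spatialite");

                // c. Bind a scalar function callable from SQL as ceiling(x).
                conn.BindFunction(
                    new SQLiteFunctionAttribute("ceiling", 1, FunctionType.Scalar),
                    (Func<object[], object>)(args =>
                        Math.Ceiling(Convert.ToDouble(((object[])args[1])[0]))),
                    null);
            }
        }
    }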
(9) By Tim Streater (Clothears) on 2020-09-03 21:09:30 in reply to 7 [link] [source]
But:
An enhancement such as .read '|...' (reading from a pipe), currently in the pipeline, can save hours of work.
this has nothing to do with the SQLite library, but rather with the CLI application. You can download and run the most recent version of that at any time.
(11) By AAsk (aa2e72e) on 2020-09-03 21:24:10 in reply to 9 [link] [source]
I note your point (although I do not like several versions of SQLite co-existing in the same environment) but what I had in mind was using the CLI for exploring/fine-tuning development pathways or options.
Doesn't each release of SQLite (SQLite.DLL) have its own version of the CLI (SQLite.EXE)?
(29) By Keith Medcalf (kmedcalf) on 2020-09-04 22:09:52 in reply to 11 [link] [source]
That depends on how you build it. Yes, there are monolithic versions of the DLL and EXE built by the SQLite3 team each time a new version of the SQLite3 code is released.
This said, however, the code is the product. It is entirely possible to build a CLI which is not dependent on the version of the SQLite3 engine in use and where the very same CLI (SQLite.exe) can be used with any particular version of the SQLite3 database core that you desire.
There is very little in the CLI that is dependent on the version of the SQLite3 database engine.
(6) By Larry Brasfield (LarryBrasfield) on 2020-09-03 20:16:23 in reply to 4 [link] [source]
Re using {System,Microsoft}.Data.SQLite versus Microsoft.Data.Odbc:
That is like asking about using a power drill versus an extension cord.
Microsoft.Data.Odbc is an adapter layer over ODBC which itself is an adapter to a great many database engines, including SQLite.
System.Data.SQLite (or Microsoft.Data.SQLite) is a .Net adapter which implements the interfaces defined for System.Data so that they access the SQLite library.
SQLite has a number of capabilities not defined for System.Data, which you can see by examining the documentation. The additional functionality is reached via interfaces that are named differently and do not inherit from (or extend) ones found in System.Data . There are many such, so I decline to list them here.
On your 1st specific question, "Why not use ...Odbc?" I don't know. Maybe it will do what you need, and there is no reason not to use it. If you just need to do CRUD, for a simple application, it may be suitable for your purposes.
You might also wonder, "What can be done via ...Odbc and not with ...Data.SQLite?" Operation between networked machines would be one. High concurrency would be another.
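To make the split concrete, a small sketch: the first half uses only the System.Data abstractions (and would look much the same over ODBC or any other provider), while the final call is SQLite-specific and only exists on the concrete System.Data.SQLite type. The file and table names are hypothetical:

    using System.Data.Common;
    using System.Data.SQLite;

    class AdapterVersusNative
    {
        static void Main()
        {
            using (DbConnection generic = new SQLiteConnection("Data Source=app.db"))
            {
                generic.Open();

                // Provider-agnostic access through the System.Data interfaces.
                using (DbCommand cmd = generic.CreateCommand())
                {
                    cmd.CommandText = "SELECT count(*) FROM sqlite_master";
                    object objectCount = cmd.ExecuteScalar();
                }

                // SQLite-only capability lives on the concrete type,
                // outside anything defined by System.Data.
                ((SQLiteConnection)generic).EnableExtensions(true);
            }
        }
    }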
(10) By AAsk (aa2e72e) on 2020-09-03 21:15:17 in reply to 6 [link] [source]
- If High Concurrency were a priority requirement, I would not include SQLite as an option.
- As yet, I have done nothing with SQLite across networks but I intend to find out if I can use SQLite databases stored in the cloud. (backups and fail-over issues will be taken care of).
I do not think I understand this
That is like asking about using a power drill versus an extension cord.
(I can't see how power drills and extension cords are substitutes!)
CRUD is basically 90% of what I need; in that respect, the ODBC route delivers.
I can manage the remaining 10% using the CLI (I am still finding my feet regarding the scope of this).
(12) By Keith Medcalf (kmedcalf) on 2020-09-03 22:34:18 in reply to 10 [link] [source]
I do not know of any database system on the market (now or ever) that is guaranteed to function fully and correctly on a remote filesystem.
Unless there is something of which I am not aware, every DBMS only guarantees that it will operate fully and correctly when operating against a "local" filesystem (note that the "filesystem" is local, but that does not mean that the "cord" between the "local computer on which the filesystem is running" and the "underlying block storage device" might not be 10's, 100's, 1000's or millions of miles long).
Note that there is a difference between a "remote client" (as in client/server) and a remote "filesystem". They are not interchangeable no matter how much you might wish they were.
(13) By AAsk (aa2e72e) on 2020-09-03 22:45:51 in reply to 12 [link] [source]
Then, why does the cloud (Azure, AWS & the like) exist?
(14) By Warren Young (wyoung) on 2020-09-03 23:01:48 in reply to 13 [link] [source]
Keith's right: "database on remote filesystem" is not the same thing as "remote database". Stock SQLite via the interfaces discussed in this thread gets you the first option, which is likely to cause problems.
If you want the latter, SQLite on a remote system with a remote DBMS API to manipulate it, you don't want any of the interfaces discussed in this thread so far. You want something like BedrockDB, rqlite, or dqlite.
(15) By Spaced Cowboy (SpacedCowboy) on 2020-09-03 23:19:25 in reply to 12 [link] [source]
Hey Keith,
Curious as to what your opinion would be on using something like iscsi/SAN/ScaleIO etc to make a network-available disk, since (as far as I know) the local disk semantics are maintained, even if the spinning rust is remote...
Is that what you're actually getting at above ?
I'm not sure what SQLite's performance would be like in that sort of circumstance, the page model would probably help a lot, but there'd be latency in fetching those pages before they're part of the cache. In a mainly-read environment (email database, perhaps) it'd probably work pretty well, assuming those "local disk" semantics are in fact maintained.
(16.1) By Warren Young (wyoung) on 2020-09-04 08:07:36 edited from 16.0 in reply to 15 [link] [source]
latency in fetching those pages before they're part of the cache
I don’t see how you can share a buffer cache across iSCSI or similar if you want ACID durability and consistency. How would cache invalidation work when another node writes to the DB? The whole idea of a buffer cache is that multiple processes — the kernel and a single user program if nothing else — can share information about the content of disk and memory pages because they're both running on the same hardware.
Atop that, you have general I/O latency. If your data is now 10 ms away, it’ll be like you have a pair of rotating rust head seeks on reads. In a single reader world, you get 100 TPS best case. Add the above problem and multiple readers, now you get fewer and fewer TPS.
No. Stop. Put the DBMS on the remote system and give it a remote access API, as in my prior response. Then you get the benefit from the local buffer cache for reads and local file system semantics.
(Writes on rotating rust are worse, by the way: SQLite needs 2 rotations per journaled write, so 7200 RPM means 60 TPS best case. The incentive is to put SQLite DBs onto SSDs so the I/O latency drops by a few orders of magnitude, raising write TPS. But, this is orthogonal to this subthread's topic, since fast SSD writes 10 ms away are still 10 ms away!)
Better, use the distributed DB tech I've linked to, which allow tuning so that reads are purely local, and only writes must hit the network. This sacrifices consistency, but in a safer manner than trying to do read caching over iSCSI, because it allows for eventual-consistency, which you're unlikely to get with remote FS cache invalidation.
(30) By Keith Medcalf (kmedcalf) on 2020-09-04 22:25:14 in reply to 15 [link] [source]
Remote block devices work perfectly fine provided that they are single-access. Yes, there may be increased "latency" to fetch a block if the cable is longer: a fifty-foot SCSI cable has 100 times the latency of a one-foot SCSI cable. Changing the "cable" from copper to aluminium will also increase latency (signal propagation in aluminium is slower than in copper), as will replacing the copper wires with glass fibres (signal propagation in glass fibre is significantly slower than in copper).
Multi-access remote block devices will have the same problems regarding multiple-update consistency that affect remote filesystems -- they can be made to work perfectly, but if designed to do so they become so slow as to be unusable.
The issue really boils down to the "distribution" of the responsibility to maintain consistency and how long it takes to do this properly. Clearly at some point the "time it takes" to maintain consistency amongst multiple accessors of the same resource has its effects. Keeping that time to a minimum by keeping all the multiple access local is the most efficient.
(18) By Larry Brasfield (LarryBrasfield) on 2020-09-04 14:12:18 in reply to 10 [link] [source]
That is like asking about using a power drill versus an extension cord. (I can't see how power drills and extension cords are substitutes!)
They are not substitutes for each other. Also, ODBC and SQLite are not substitutes, whether by themselves or wrapped in subclasses of System.Data. Were you to decide to use (whatever.)ODBC, you would not be able to do CRUD until you selected an actual database manager/engine, configured ODBC to use it, and set the "connection string" accordingly. Likewise, you cannot drill holes with just an extension cord; plugging a line-powered drill motor into it may get you there (with a drill bit.)
If you were to get one of the ODBC-SQLite adapters, you could use System.Data.Odbc to access a SQLite database, in the process losing some of SQLite's capability. As you suggest, that lost capability would be available via the SQLite CLI shell, provided the drawback of using a mish-mash of tools is accepted.
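For completeness, a sketch of what that ODBC route looks like from .NET. The driver name and connection-string keywords below follow one common third-party ODBC-SQLite driver and are assumptions, not a recommendation:

    using System.Data.Odbc;

    class OdbcRouteSketch
    {
        static void Main()
        {
            // The exact connection string depends on which ODBC-SQLite driver is
            // installed; "SQLite3 ODBC Driver" is illustrative.
            using (var conn = new OdbcConnection("Driver=SQLite3 ODBC Driver;Database=app.db;"))
            {
                conn.Open();
                using (var cmd = new OdbcCommand("INSERT INTO notes(body) VALUES (?)", conn))
                {
                    // ODBC uses positional '?' markers; the parameter name is only local.
                    cmd.Parameters.AddWithValue("@body", "hello");
                    cmd.ExecuteNonQuery();
                }
            }
        }
    }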
(19) By AAsk (aa2e72e) on 2020-09-04 14:24:17 in reply to 18 [link] [source]
If you were to get one of the ODBC-SQLite adapters, you could use System.Data.Odbc to access a SQLite database, in the process losing some of SQLite's capability.
I have not been able to identify any instance of lost capability - can you provide an example, please?
(20) By Larry Brasfield (LarryBrasfield) on 2020-09-04 15:47:55 in reply to 19 [link] [source]
Use of the Backup API would be one example.
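A minimal sketch of that example, assuming System.Data.SQLite's BackupDatabase method and hypothetical file names:

    using System.Data.SQLite;

    class BackupSketch
    {
        static void Main()
        {
            using (var source = new SQLiteConnection("Data Source=app.db"))
            using (var target = new SQLiteConnection("Data Source=app-backup.db"))
            {
                source.Open();
                target.Open();

                // Copy the whole "main" database while the source stays usable:
                // -1 pages per step means copy everything in one pass; no callback,
                // no retry delay between steps.
                source.BackupDatabase(target, "main", "main", -1, null, 0);
            }
        }
    }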
(21) By Ryan Smith (cuz) on 2020-09-04 16:15:04 in reply to 19 [link] [source]
...instance of lost capability - can you provide an example, please?
- Add a User-defined function
- Load an external library for use in SQLite
- Open the DB in Read-Only mode
- Create a new DB file, among the normal ones, with custom page-Size, Character encoding and Auto-Vacuum mode
- Make a Backup of a DB file while it is open and in-use
- Create program code that works cross-platform on non-M$-Windoze environments
- Make a DB that is encrypted
- Control the SQLite version that is used
to name a few.
Note that most/all of these are what programmers needing a lot of control, customization and optimization power would use; one could call them "enthusiast" features (two of them are sketched after the next paragraph).
Most of the normal day-to-day CRUD stuff will work just fine and just as well via ODBC as on a normal wrapper/direct connection, and service most people's needs.
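As mentioned, here is a sketch of two of those items expressed as System.Data.SQLite connection-string options (file names hypothetical):

    using System.Data.SQLite;

    class ConnectionOptionsSketch
    {
        static void Main()
        {
            // Open an existing database read-only, failing instead of creating a file.
            var readOnly = new SQLiteConnectionStringBuilder
            {
                DataSource = "app.db",
                ReadOnly = true,
                FailIfMissing = true
            };

            // Create a new database file with a non-default page size
            // (the page size only takes effect when the file is first created).
            var freshDb = new SQLiteConnectionStringBuilder
            {
                DataSource = "new.db",
                PageSize = 8192
            };

            using (var conn = new SQLiteConnection(freshDb.ToString()))
            {
                conn.Open();
            }
            // The readOnly string would be used in exactly the same way.
        }
    }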
You seem to keep trying to find which one is "better" - so, restating what Larry demonstrated with an analogy that hopefully leaves less opportunity for confusion: that's like asking whether you should rather use a Bus or a Motorcycle for travel.
The answer is not obvious. It depends very much: They both travel well enough, but
- How fast do you need to get there?
- How many people do you need to move?
- Do you need it to work off-road?
- Is lane-width restricted?
etc. etc.
Simply put: ODBC will work fine for what it does if your needs are restricted to what it offers. With more "enthusiast needs", you need a more direct approach.
Hope that is more clear. :)
(22) By AAsk (aa2e72e) on 2020-09-04 16:27:08 in reply to 21 [source]
I can manage the following with an ODBC connection:
Load an external library for use in SQLite
Open the DB in Read-Only mode
Create a new DB file,
Moreover, I can
Prevent the creation of a new DB file
I am intrigued by "Add a User-defined function". How is this done in the CLI?
(23) By Larry Brasfield (LarryBrasfield) on 2020-09-04 16:44:36 in reply to 22 [link] [source]
See the .load meta-command and Runtime Loadable Extensions.
(24) By AAsk (aa2e72e) on 2020-09-04 17:16:12 in reply to 23 [link] [source]
I can load extensions with an ODBC connection. I've only tried mod_spatialite, and I do not have pre-compiled binaries for any other extension, so I cannot investigate whether I can load several extensions.
When Ryan mentioned "Add a User-defined function" I wondered whether he was referring to the CLI or to BindFunction in System.Data.SQLite, which works thus:
    BindFunction(
        new SQLiteFunctionAttribute("ceiling", 1, FunctionType.Scalar),
        (Func<object[], object>)((object[] args) =>
            Math.Ceiling((double)((object[])args[1])[0])),
        null);
And I was puzzled for I have not seen any reference to this being possible in the CLI.
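For comparison, the same function expressed with System.Data.SQLite's SQLiteFunction subclass and attribute style; this is a sketch following the library's documented registration pattern and has nothing to do with the CLI:

    using System;
    using System.Data.SQLite;

    // Scalar SQL function ceiling(x), equivalent to the BindFunction lambda above.
    [SQLiteFunction(Name = "ceiling", Arguments = 1, FuncType = FunctionType.Scalar)]
    public class CeilingFunction : SQLiteFunction
    {
        public override object Invoke(object[] args)
        {
            return Math.Ceiling(Convert.ToDouble(args[0]));
        }
    }

    class Registration
    {
        static void Main()
        {
            // Explicit registration; connections opened afterwards can call ceiling(x) in SQL.
            SQLiteFunction.RegisterFunction(typeof(CeilingFunction));
        }
    }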
(27) By Larry Brasfield (LarryBrasfield) on 2020-09-04 19:38:53 in reply to 24 [link] [source]
[On user-defined function addition]
I was puzzled for I have not seen any reference to this being possible in the CLI.
The BindFunction() function in System.Data.SQLite relies on the same SQLite library feature as is used by the CLI .load meta-command, where the function set usable via SQL is extended, as is documented at Application-Defined SQL Functions.
If you had read the command-line shell doc, you would have found section 13 and avoided that particular puzzlement.
The documentation for SQLite is quite excellent. I suggest that, instead of using Google and sifting through much irrelevant stuff, you use the Permuted Title Index. You would learn a lot about the existence of capability just by looking at those index entries.
FWIW, I second Ryan's sentiment that you would do better by studying the docs than by bringing every little question (already answered there) to this forum.
(25) By Ryan Smith (cuz) on 2020-09-04 17:36:05 in reply to 22 [link] [source]
Loading an external library depends on the SQLite library in use and on the controlling interface allowing it - this may or may not be the case for the ODBC library in use.
I did not realize opening a DB in read-only mode is possible via ODBC, for me that is an API function and I do not use ODBC - so stand corrected on this.
Prevent the Creation of a New DB File? That's impossible via ODBC, and even if you could prevent your OS from creating a specific file, why would you ever want to do that? You may of course mean that you can give an option to fail when a file doesn't exist rather than creating a new one, which is fine but moot - so can the API.
CLI? Nobody is talking about the CLI. I'm comparing using ODBC vs. wrappers like System.Data.SQLite vs. using the API directly - which most assuredly can do it. When speaking about the CLI specifically, that is a whole other interface and cannot form part of your program (the way ODBC or the wrappers can); it also cannot do (some of) the custom things I'm pointing out, but it comes with a host of dot commands that can do things the ODBC definitely cannot. (Read up on that - the list is too long to post).
While we are at it, and since you seem to be interested in-depth of what can and cannot be done, here are some more things you can definitely not do in ODBC:
- Create a Virtual-Table Mechanism module
- Create a VFS (virtual file-system) to use - though this shouldn't be needed
- Create a custom data Collation
- Change the configured Query limits
- Use an Authorizer to check/allow/disallow queries
- Get an SQLite process error-log
- Use a different, perhaps platform-specific-faster memory allocator
- Set a custom busy-handler to handle concurrent access in a preferred way
- Set a custom progress handler to show query progress/activity
- Add pre-update code hooks...
And more, but I'm stopping here - this is enough for now.
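To pick one item from that list as an example, here is a hedged sketch of a custom collation via System.Data.SQLite; the collation name and comparison rule are made up for illustration:

    using System;
    using System.Data.SQLite;

    // Collation "NOCASE_TRIM": compare case-insensitively, ignoring surrounding whitespace.
    [SQLiteFunction(Name = "NOCASE_TRIM", FuncType = FunctionType.Collation)]
    public class NoCaseTrimCollation : SQLiteFunction
    {
        public override int Compare(string param1, string param2)
        {
            return string.Compare((param1 ?? "").Trim(), (param2 ?? "").Trim(),
                                  StringComparison.OrdinalIgnoreCase);
        }
    }

    class CollationDemo
    {
        static void Main()
        {
            SQLiteFunction.RegisterFunction(typeof(NoCaseTrimCollation));
            // SQL can then say, e.g.: SELECT ... ORDER BY name COLLATE NOCASE_TRIM;
        }
    }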
I'd like to add that these questions are really not for the forum. There is documentation for all of this, plus tutorials and lots of research on the internet that you should get into when deciding what you want to do. Us copying and pasting info from the documentation, because you find asking easier than reading up, is really not why this forum exists.
You are trying to use this like social media now: asking for OPINIONS and then, when we show that opinions don't count - only what you want to achieve does, perhaps with minimal examples to make the point - you start arguing the rather off-the-point validity of the examples and analogies, like the drill-and-cord one, or what you are able to do via ODBC - i.e. you start getting into documentation territory.
Please read the material available, do the research, if you read something in the documentation that you do not understand (and we are happy for that to include a wide variety of things), or if you run into a use-case or problem that is not documented, then please ask here for clarity - that's what this forum does exist for and, I might add, does really well.
PS: I'm pretty sure this post took me twice as much time to make as the combined time all your questions took to type - to say nothing of the combined time of all the answers from all the other posters. We don't mind the effort, but would like the content to be relevant/helpful to everyone reading.
(26.1) By AAsk (aa2e72e) on 2020-09-04 19:26:33 edited from 26.0 in reply to 25 [link] [source]
I started researching SpatialLite at the end of July this year and quickly realised that I needed to learn SQLite.
My first recourse is always Google Search; useful as it is, Google Search identifies hundreds of references that are irrelevant in any search. My platform is Windows and I have been bitten innumerable times by code snippets that simply do not apply to Windows.
There is ample SQLite documentation; there is very little that specifically addresses platform considerations, and there are very few (version-specific) worked examples that a learner can readily reproduce.
I have used this forum as a means to speed up the learning process; as I see it, during the learning process, there are no stupid questions just stupid answers, sometimes.
I do not have (never have had) a single social media account - I cannot abide them.
CRUD is what I need and ODBC delivers. The point about my questions is simply to make sure that I am not oblivious of any vital considerations (known to other more experienced users of SQLite who might be happy to share their insight).
Us copying and pasting info .... quite surprised to surmise that the ethic in this forum is adversarial ...
(28) By Larry Brasfield (LarryBrasfield) on 2020-09-04 19:45:04 in reply to 26.1 [link] [source]
.... quite surprised to surmise that the ethic in this forum is adversarial ...
That is not the ethic or practice here. You are being advised to go to the docs first and here for clarification or features that are insufficiently discoverable. When such deficiencies become apparent here, the docs are often improved to remedy them. But when questions are readily answered by pointing to easily found topics in the docs, that begins to resemble an inefficient use of time, both yours and that of people here (such as Ryan) who are very willing to help.
(31) By Ryan Smith (cuz) on 2020-09-05 03:06:56 in reply to 26.1 [link] [source]
I started researching SpatialLite at the end of July this year and quickly realised that I needed to learn SQLite.
Agreed - Perfectly valid and good realisation.
My first recourse is always Google Search; useful as it is, Google Search identifies hundreds of references that are irrelevant in any search. My platform is Windows and have been bitten innumerable times by code snippets that simply do not apply to Windows. There is ample SQLite documentation; there is very little that specifically simplifies platform considerations and very few (version specific) worked examples that a learner can readily reproduce.
Well, that is because you are chasing a red herring. SQLite is platform-independent. It's an Engine. It has no Platform. Looking for Platform-related help/code snippets/info will get you nowhere indeed.
In stark contrast with SQLite - ODBC very much IS platform dependent and xxx.Data.SQLite is programming-platform dependent. You would get much more relevant and helpful information from fora for those platforms.
I have used this forum as a means to speed up the learning process;
Yes, we know, that is precisely the rub.
as I see it, during the learning process, there are no stupid questions just stupid answers, sometimes.
Stupid questions - no, irrelevant questions - yes.[1]
The other problem is that it can be hard to tell the "stupid" answers and "brilliant" answers apart for those without a basic grounding in the subject, sometimes.
I do not have (never have had) a single social media account - I cannot abide them.
That wins you some points in my book. :)
CRUD is what I need and ODBC delivers. The point about my questions is simply to make sure that I am not oblivious of any vital considerations (known to other more experienced users of SQLite who might be happy to share their insight).
Genuine, unbiased, happily-shared experienced-user insight: you say CRUD is all you need, you care about app footprint/size, and you are a Windows user. I'd say, if you foresee that remaining the case throughout the intended project - go ODBC 100%.
Good luck!
[1] In case the relevance reference isn't immediately clear: Simply read this post and the previous in reply again, and then try to marry that to the Topic of this thread. How is a poor future researcher to find relevant answers to a search when we do this? :)
(32) By Keith Medcalf (kmedcalf) on 2020-09-05 03:49:26 in reply to 1 [link] [source]
System.Data.SQLite is just an ADO.NET wrapper around the standard SQLite3 database engine core. If you like playing with C# or D-Flat then I suppose you can just build the wrap-about yourself and replace the core (sqlite3.c/sqlite3.h) with whatever version you desire.
Insofar as System.Data.SQLite is merely a wrapper, I doubt that the version of the SQLite3 core is of much consequence, except that it must provide the minimum level of C API expected for the operation of the wrapper -- just as you can replace the "core" version of SQLite3 within the Python "wrapper" with any version you desire, so long as the replacement provides the minimum C API required by the wrapper.
The basic functions of "parse an SQL statement", "bind parameters to a statement", "execute the statement and get the next row" and "retrieve the value of a result column" are unaffected by changes/additions/deletions to the SQL syntax accepted in the underlying statement text.
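In wrapper terms, that cycle looks the same regardless of which SQL features the underlying engine happens to support; a small sketch against a hypothetical users table:

    using System.Data.SQLite;

    class BasicCycleSketch
    {
        static void Main()
        {
            using (var conn = new SQLiteConnection("Data Source=app.db"))
            {
                conn.Open();

                // Parse the statement and bind a parameter ...
                using (var cmd = new SQLiteCommand(
                    "SELECT id, name FROM users WHERE name LIKE @pattern", conn))
                {
                    cmd.Parameters.AddWithValue("@pattern", "A%");

                    // ... then execute, step through the rows, and read result columns.
                    using (SQLiteDataReader reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            long id = reader.GetInt64(0);
                            string name = reader.GetString(1);
                        }
                    }
                }
            }
        }
    }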
Note that I have never actually used the System.Data.SQLite ADO.NET wrapper nor replaced its core database code to see what happens. I have, however, "replaced" the SQLite core version in various other wrappers, such as pysqlite2 and apsw, and it works as one would expect -- it is an uneventful change.