SQLite Forum

50 forum posts by user RichardDamon occurring on or before 2021-02-05 02:10:20.

2021-02-05
02:10 Reply: Primary Key v Unique Index (artifact: fa6b8042c6 user: RichardDamon)

My understanding from https://sqlite.org/lang_createtable.html Section 3.5 paragraph 2

If a table has a single column primary key and the declared type of that column is "INTEGER" and the table is not a WITHOUT ROWID table, then the column is known as an INTEGER PRIMARY KEY. See below for a description of the special properties and behaviors associated with an INTEGER PRIMARY KEY.

It isn't just the literal phrase "INTEGER PRIMARY KEY" that is magic. As long as the declared type of the column is INTEGER (and only that spelling, not just anything with integer affinity) and the column is made THE primary key, even by a PRIMARY KEY constraint clause later in the table definition (at the time the table is created), the column becomes that special alias for the ROWID.
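
A minimal sketch (table and column names are hypothetical) of the constraint-clause form that still gets the special treatment:

CREATE TABLE t(
  id INTEGER,          -- the declared type must be exactly INTEGER
  name TEXT,
  PRIMARY KEY(id)      -- a separate constraint clause: id still becomes the alias for the ROWID
);
-- behaves the same as: CREATE TABLE t(id INTEGER PRIMARY KEY, name TEXT);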

2021-01-31
02:46 Reply: Are there any plans / chances of supporting a RETURNING clause? (artifact: 66a0f51003 user: RichardDamon)

I appear to have misremembered, or that was the effect of some wrapper that was being used. Typically you are using the notification to drive a GUI update reflecting the file system change, so delivering it as a message is a common usage (and it removes the multitude of issues that can pop up if you try to alter things from multiple threads).

That still makes it not really suitable to embed into SQLite itself as a way to notify an application of a data change.

2021-01-30
20:45 Reply: Are there any plans / chances of supporting a RETURNING clause? (artifact: fabf534668 user: RichardDamon)

Have you actually checked the timing of this sort of thing with SQLite (as opposed to a full-service client-server database)? Remember that SQLite is running in-process, accessing the same sort of memory as the cache system you are writing to try to handle this. Yes, SQLite is likely a bit slower to get the data than a cache store optimized for a particular data model, but you also have to charge against the cache the operations it does to keep itself up to date and the cost of every data 'miss' to that cache. Also, it is using up memory that could have been given to the SQLite or OS caches to help avoid needing to go to the actual disk for data (since it is likely caching the same recently accessed data that those other caches are holding).

IF you are submitting a change whose results you can't just know in advance (so you need the RETURNING clause), it seems that actually doing the SELECT to get fresh data isn't going to be that costly.
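
For example, a rough sketch (with a hypothetical table) of the insert-then-select pattern without RETURNING:

INSERT INTO items(name) VALUES ('widget');
SELECT * FROM items WHERE rowid = last_insert_rowid();  -- re-reads the row just written, all in-process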

20:28 Reply: Are there any plans / chances of supporting a RETURNING clause? (artifact: fa16bfc310 user: RichardDamon)

I believe that SQLite triggers only fire for the connection that executed the statement that did the deed, not for other connections; making that happen would require SQLite to create threads.

For Windows, if I remember right, the file system can be asked to send a GUI event on a file change, so the GUI knows to update for the change (if needed). This would be beyond what would be expected of SQLite, but the program could set up a file system notification to the GUI on the database change.

11:52 Reply: Does table have rowid? (artifact: 902a026269 user: RichardDamon)

First, it should be pointed out that if the table gets updated between calls, something like this (even fixed for the lack of use of m) isn't always going to get all the rows of the table, or a snapshot at a given time, unless the whole thing is wrapped in a transaction.

And once you wrap it in a transaction, you might as well just select all the rows at once and then step to fetch them n at a time, do what you were planning on doing, then step the next n.
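
A rough sketch of that, with a hypothetical table name:

BEGIN;
SELECT rowid, * FROM mytable;  -- one consistent snapshot; the application steps this n rows at a time
COMMIT;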

2021-01-28
13:00 Reply: Are there any plans / chances of supporting a RETURNING clause? (artifact: 45a0c92708 user: RichardDamon)

Note that there is a BIG difference in the cost of a 'Round Trip' to the database in something like a classical server-based database and SQLite, which is a critical difference. With a server-based database, there are significant communication delays in the 'round trip', reaching out to it and getting a response. With SQLite, there is basically no trip to speak of, as SQLite sits in your process space.

The whole concept of having lots of code to keep a cached version of the database can become a pessimization when you take into account that it will eat up process memory that could otherwise have been used to enlarge the memory caches for SQLite and the OS, rather than building a duplicate cache for data that is likely sitting in those already.

2021-01-14
17:33 Reply: Can't migrate auto-increment bigint columns (artifact: 9f625e28f3 user: RichardDamon)

The problem is that SQLite doesn't support AUTOINCREMENT the same way as many other databases, but only in a very specific case, and it changes behavior in a way that makes it not just an 'optional' word.

It sounds like EFCore either doesn't really support SQLite or has a bug in its support for it. I would suggest filing a bug report there.

Note that adding AUTOINCREMENT isn't just a do-nothing option; by adding it, SQLite promises that it will NEVER automatically create a value that duplicates anything it has previously generated.
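
A minimal sketch of the one form SQLite accepts (hypothetical table):

CREATE TABLE log(
  id INTEGER PRIMARY KEY AUTOINCREMENT,  -- only allowed on an INTEGER PRIMARY KEY column
  msg TEXT
);
-- with AUTOINCREMENT, a rowid value is never reused, even after the highest row is deleted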

2021-01-02
15:08 Reply: SQLite3 Extension (artifact: 250df65768 user: RichardDamon)

You can simplify this result a bit by remembering that avg(x) = sum(x) / count(x), giving

sqrt((sum(qty*qty) - sum(qty)*sum(qty)/count(*)) / count(*))

(and that final count(*) becomes (count(*)-1) if you are dealing with a sample instead of a full population.)
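
As a rough sketch (hypothetical table and column; sqrt() needs the built-in math functions, which not every build enables), that works out to:

SELECT sqrt((sum(qty*qty) - sum(qty)*sum(qty)*1.0/count(*)) / count(*)) AS pop_stdev
FROM orders;  -- the 1.0 avoids integer division when qty is an integer column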

2020-12-28
19:05 Reply: Off-by-one in testing.html (artifact: 2383d81107 user: RichardDamon)

There are many processors where the shift instruction only uses some of the bits of the shift count and ignores the higher-order bits.

Most processors in the x86 family only use the bottom 5 bits (6 bits in 64-bit mode), and ARM processors only use the bottom 8 bits, so on ARM a shift of 32 would zero the register, but a shift of 256 wouldn't.

2020-12-26
01:16 Reply: Creating an emty database (artifact: 21fe8931dd user: RichardDamon)

An alternative to using .save is to do something that actually accesses the tables, like .tables

2020-12-18
17:24 Reply: Performance decrease over-time using SELECT (artifact: 83f3ec6570 user: RichardDamon)

My first guess is that the query might not be run to completion, and you are piling up resource usage.

2020-12-13
20:43 Reply: Incorrect conversion from unixepoch (artifact: 4503ca486d user: RichardDamon)

Yes, anyone who hasn't run into software that uses unix timestamps and handles time zones by effectively changing the definition of the epoch is lucky. I've seen enough software that just reads the current local time and treats it as UTC to build the timestamp that it isn't a surprise anymore.

20:39 Reply: Incorrect conversion from unixepoch (artifact: e14d06edf3 user: RichardDamon)

But the first parameter is described as usually being a string, with this format being a POSSIBLE exception, so using the term 'string' to refer to it doesn't seem that unnatural, and as I understand it, the value for that format could have been provided as a string too. The undefined behavior would also seem to be triggered if the value wasn't a string in local time, so it would seem to apply even if it wasn't a string.

It also looks like the documentation has been fixed, and now refers to time values, not strings.

16:45 Reply: Incorrect conversion from unixepoch (artifact: 555201c539 user: RichardDamon)

The key is the documentation of 'utc':

The "localtime" modifier (12) assumes the time string to its left is in Universal Coordinated Time (UTC) and adjusts the time string so that it displays localtime. If "localtime" follows a time that is not UTC, then the behavior is undefined. The "utc" modifier is the opposite of "localtime". "utc" assumes that the string to its left is in the local timezone and adjusts that string to be in UTC. If the prior string is not in localtime, then the result of "utc" is undefined.

So, 'utc' says that the time provided is presumed to be in local time and will be adjusted to UTC, and if it isn't in local time, the results are undefined.
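
A quick sketch of the difference (the exact local result depends on your timezone):

SELECT datetime(1607875200, 'unixepoch');               -- 2020-12-13 16:00:00 (UTC)
SELECT datetime(1607875200, 'unixepoch', 'localtime');  -- the same instant shown as local time
SELECT datetime('2020-12-13 16:00:00', 'utc');          -- treats the string as local time and converts to UTC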

2020-11-18
15:50 Reply: What would be the recommended way to hold an array of ~600 double values? (artifact: c6f7dfe3d6 user: RichardDamon)

(Bytes, not bits.) And they only get bigger for the rows that use the bigger values, as the size of the stored value is part of the internal type of the field.

You also get the ability to omit values that aren't needed (if one setting disables a channel, there is no need to store all the unused settings for that channel).

15:07 Reply: What would be the recommended way to hold an array of ~600 double values? (artifact: d61223ccfa user: RichardDamon)

But remember that SQLite doesn't store integers as 64-bit numbers (unless they are very big) but uses fewer bytes for smaller numbers, so the cost isn't anywhere near as large.

Yes, the blob of 600 floats will be smaller, by a bit, but at the cost of imposing technical debt on the system if anything needs to change, and if some presets don't need all the values, it could even be bigger.

12:12 Reply: What would be the recommended way to hold an array of ~600 double values? (artifact: acebecafda user: RichardDamon)

One other option for storing the results would be a table with a multipart primary key, one part being the preset number, the second being the setting number, and then a data column for the value.

Something like:

CREATE TABLE Presets ( preset_no INTEGER, value_no INTEGER, value REAL, PRIMARY KEY(preset_no, value_no) );

You can then get a complete set of values for a given preset with something like

SELECT value_no, value FROM Presets WHERE preset_no = ?;

where you bind the ? to the preset_no you want to fetch.

This has the ability to allow you to omit values from a preset if they don't matter to that one.

2020-10-27
11:55 Reply: A suboptimal solution - not using the index. (artifact: b2308374b7 user: RichardDamon)

Right, I make that mistake too much since I grew up in C.

11:06 Reply: A suboptimal solution - not using the index. (artifact: 4a9c40b4c8 user: RichardDamon)

But you need to remember that cast(1 as integer) == cast("1" as text), so just because one field is always a string doesn't mean it can't match a numeric value.

2020-10-06
15:53 Reply: Password protection to database (artifact: 42f79d8b4e user: RichardDamon)

Or to be a bit more explicit, without encrypting the database, any program that uses SQLite (like the CLI) can connect to the database and bypass any 'program based' protection. Base SQLite does not include an encryption option. That is available as a paid option (SEE) from the suppliers of SQLite, or there are other 3rd-party options available.

2020-10-03
12:58 Reply: Column ordering and space saving (artifact: ce77621aca user: RichardDamon)

I think the only fixed-length type is the float; all floats will be 8 bytes long, unless the value gets stored as another type (but then it isn't stored as a float). If it happens to be an integral value, it might get compressed that way.

2020-09-27
20:44 Reply: Efficiently checking if a TEXT column exclusively contains integers or floating point numbers (as strings) (artifact: afe48fbe53 user: RichardDamon)

Don't forget that SQLite isn't some client-server database, but runs in your userspace. Anything that SQLite reads HAS come into your application; it just stayed inside the subroutine call. The cost to move the results from inside SQLite to 'your app' is small, and if SQLite doesn't have something built in to do the operation, it is likely faster to read the data and decide for yourself.

2020-09-23
12:21 Reply: Disable -wal option (artifact: 792c99a270 user: RichardDamon)

The key issue here is that SQLite runs as 'user' code, so if the user has access to the file, he can do whatever he wants to the file, and if the user doesn't have access to the file, he can't do anything. Yes, this has security implications, but it also has major efficiency implications.

To get the sort of protection you seem to be wanting, you need to put the database behind some security wall which users have no access to, and provide an access API that limits what they can do with the system. This can be done with SQLite, but then the user program is no longer using SQLite directly, but your API that crosses that security barrier into the 'server' application that is running, perhaps, SQLite.

If that sort of thing is really part of your requirement, the simplest route may not be using SQLite, but some other database package inherently based on the more heavyweight client/server architecture, which can provide such protections.

2020-09-22
13:58 Reply: request on 5 tables (artifact: 8f815d1d26 user: RichardDamon)

It is possible to stack JOINs into one SELECT statement. You do realize that by doing INNER JOINs you are asking only for records that exist in ALL the tables, so anyone that only exists in 4 or fewer of the 5 tables won't get returned?

2020-09-19
15:42 Reply: Why was there no error message at CREATE TABLE? (artifact: dc460d37f7 user: RichardDamon)

If you parse that with the grammar diagrams,

CREATE TABLE bad(a text b integer)

parses as having a column a with type-name of 'text b integer'

as the type-name production allows for a series of names

Thus, the syntax IS valid.

2020-09-17
12:25 Reply: script de update (artifact: b1db027a4f user: RichardDamon)

As Mr. Hipp says, you need to watch out for how the backslash is interpreted, as it is an escape character and might be interpreted as such at several levels. Also, you seem to have spaces after the backslash (unless that is forum formatting), which are likely not in the actual filename.

Using / in the filename tends to get around the problem, at least as long as you don't pass it to some external program that treats it as the option character.

2020-09-15
15:47 Reply: Querying all compile options used (artifact: c3b78e5617 user: RichardDamon)

It is naturally impossible to change how the program was compiled by sending a command to it. So if some compiler option caused something to be omitted, there is no way to magically add it back by asking for it (it MIGHT be possible to add some things back by loading an extension).

Many of the options set defaults, and you can then send commands to change what a given option is set to, but in most cases you can't change the default itself (some options, like WAL mode, are persistent, but that is different).
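
For example (output varies by build):

PRAGMA compile_options;     -- lists the options this library was compiled with
PRAGMA journal_mode = WAL;  -- changes a runtime setting; WAL is one of the few that persists in the database file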

2020-09-12
13:38 Reply: INNER JOIN error (artifact: 57e6ac7cab user: RichardDamon)

It would be perfectly fine to enclose D3 in DOUBLE quotes as "D3". Things in double quotes will always be seen as an identifier (like a table or column name), assuming that is valid in the context, even if the contents of the double quotes would otherwise not qualify.

You could defensively double quote ALL such identifiers in a statement and never get an issue.

The only case where they could give you a problem is if you used them in a context where an identifier was not allowed, but a string was.

00:33 Reply: Exclusive write-only lock? (Allow only one process read-write, and multiple read-only) (artifact: f9785fb7d6 user: RichardDamon)

I could see adding a mode where a connection says that it basically ALWAYS wants to hold a write lock on the database, anyone else attempting to get one gets a "busy" return, and when that connection does a commit, it atomically gets the write lock back.

I don't know enough details about how SQLite does this to know if it is actually possible or feasible. It would have to be done in a way that if that app crashes (or doesn't close properly) that the lock goes away.

It also is definitely an 'edge' case, and might not be really worth adding.

2020-09-10
16:39 Reply: Exclusive write-only lock? (Allow only one process read-write, and multiple read-only) (artifact: ab2f360c50 user: RichardDamon)

As people have said, WAL mode will mostly do what you need. It will allow you to have a writer and, most of the time, as many readers as you want accessing the database at the same time.

It will NOT lock a given accessor in as the writer; as was mentioned, when the writer commits it gives up its lock, so someone else can take it, and that someone can even start the request before the writer commits and busy-wait for the lock to become available, so that level of locking would need to be done elsewhere.

Also, there will be occasional periods when the writer will need exclusive control to fold the WAL file back into the database. If a reader holds its transaction forever, it can block this from happening, which will cause the WAL file to keep growing, lowering efficiency in space and access time.

2020-09-09
23:00 Reply: after crash loosing all records in a multi-GB table (artifact: 663ddca666 user: RichardDamon)

If your app is committing but the commits apparently aren't being seen in the file, it shouldn't be because of SYNCHRONOUS = OFF, as the OS isn't crashing. It may be that whatever is crashing the program is also corrupting the database, forcing the loss of data.

Doing some forensics on the database might give a hint to the sort of corruption, and might help trace the bug, or it might be a lot of work just to say that it got corrupted.

It may be easier to try to trace back to find the cause of the crash.

20:42 Reply: Unique every N seconds. How? (artifact: 5b4d567402 user: RichardDamon)

You could make a 'time' column that is only accurate to N seconds, so you could put a unique constraint on it.

You might also be able to set up a trigger to detect the condition and reject it.
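
A sketch of the first idea, with hypothetical names and N = 60 seconds, using a unique index on an expression:

CREATE TABLE readings(ts INTEGER NOT NULL, value REAL);
CREATE UNIQUE INDEX one_per_minute ON readings(ts / 60);  -- at most one row per 60-second bucket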

12:03 Reply: content of field must not be identical to another column name of same table for UPDATE SET to work (artifact: 7e25c243d5 user: RichardDamon)

Double quotes are for keywords, not identifiers

You are misunderstanding the comment there (or I will give you that it is a bit awkwardly written).

Things inside single quotes are string constants per the SQL standard. SQLite will treat them that way when that is valid, but in some cases where a string constant can't appear, they will be tried as a column name.

Things inside double quotes are identifiers (table or column names) per the SQL standard. SQLite will treat them that way if that is valid, but if not, and a string literal would be valid there, then they will be treated as a string literal.

SQLite also allows identifiers to be quoted as [id] or `id`, which isn't defined as such by the SQL standard but makes it more compatible with some other common engines.

It is best to always use single quotes for strings and double quotes for identifiers; that method follows the standard.
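
For example (hypothetical names):

SELECT "first name"            -- double quotes: an identifier
FROM "customer list"           -- double quotes: an identifier
WHERE "first name" = 'Alice';  -- single quotes: a string literal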

11:44 Reply: after crash loosing all records in a multi-GB table (artifact: 52d74003d7 user: RichardDamon)

It sounds like your application 'crashed' before doing a commit, which means the automatic action on next opening the database is to perform a roll-back, discarding the data.

You could perhaps modify your application to commit more often, so you lose less data in a crash.

It would be 'possible' to write code to manually parse the database and perhaps do the equivalent of a 'commit' at that point, but it is likely better to fix the problem and not just the symptoms. The biggest problem with forcing a commit is that you don't really know (I presume) exactly where you are in the transaction, and some of the data may be inconsistent.

2020-09-05
12:54 Reply: Can Richard do some YouTube videos explaining transaction implementation? (artifact: dc29b661a4 user: RichardDamon)

One big difference: the comments are tied to the code, so if you change the code, it is fairly easy to change the comments so they reflect the new code.

Video, or any other 'external' documentation, isn't so clearly attached. First, which version of the code does the documentation apply to? This matters a lot if you are 'documenting' internal details and not a public API. It also may be less clear that the video or other documentation needs to be updated, unless you add a comment like 'this section is described in YouTube video wxyz'.

Also, editing a comment to reflect a small change is a small task. To 'edit' a video (to change technical content) tends to mean totally redoing it.

2020-09-04
16:11 Reply: Join Question (artifact: 861a4b7ec7 user: RichardDamon)

My first thought (untested)

SELECT Support.Key AS Support_Key, Main.name AS Main_Name, Back.name AS Back_Name
FROM Support
JOIN Names AS Main ON Support.Main = Main.Key
JOIN Names AS Back ON Support.Backup = Back.Key;

Since you are pulling from Names twice, you need to give it aliases to refer to the two copies.

16:00 Reply: Wide Characters not aligned in column mode (artifact: d443792afc user: RichardDamon)

Basically, for column mode the CLI assumes that you are using a mono-spaced font, which isn't true if you are using 'wide' characters. This keeps it simple and universal.

Yes, a 'smarter' program could use some system-dependent tool to figure out how wide a string will render and format better, but that is beyond the scope of the CLI.

2020-08-18
21:20 Reply: Finding database file (artifact: 8e29dca0c5 user: RichardDamon)

I am not sure if this is the problem, but if you are running this as JavaScript in a browser, you are likely running into the issue that the browser puts the page in a sandbox that limits its access to your computer, and that may be blocking you. After all, you wouldn't want pages you visit on the web to have random access to files on your computer.

2020-08-12
12:30 Reply: history keeping (artifact: 0fd3886326 user: RichardDamon)

As Clemens said, the typical use is that you want to be able to effectively roll back to a given point in time. Thus references to the record aren't to a specific rowid identifier but to a sort-of-unique identifier (sort of unique, as it represents the record over all time, so more than one row may carry that id).

What I often do is add two fields to each record, one being the 'creation' stamp for this record, for when this version was created, and a second stamp for when it was deleted or updated. This allows joins to directly specify which historic record to find, or to use a NULL (or the infinite future, depending on what value is used for the second timestamp of the current version).
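
A rough sketch of that layout, with hypothetical names:

CREATE TABLE items_hist(
  item_id    INTEGER,  -- the 'sort of unique' id shared by every version of a record
  valid_from INTEGER,  -- when this version was created
  valid_to   INTEGER,  -- when it was updated or deleted; NULL while it is the current version
  data       TEXT
);
-- the version of a record as it existed at time :t
SELECT * FROM items_hist
WHERE item_id = :id
  AND valid_from <= :t
  AND (valid_to IS NULL OR valid_to > :t);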

If the current record uses some unique rowid as a key, how do you mark the historical record that matches it?

Now there is an alternate method, similar to what you describe, where you set up a second history table and move the historical records to it, adding versioning information. This keeps the 'main' tables smaller and faster, but doesn't make querying the history of a record as easy.

2020-08-11
16:10 Reply: Select by column id (artifact: 9170e45fa7 user: RichardDamon)

Your question is a bit confusing and I am not sure people know just what you are asking. If you want to be able to name the column that will be the rowid, then when you declare the table just make that column INTEGER PRIMARY KEY; the name of the column can then be used to access the primary key, and SQLite will not change its value through table manipulations (unless you directly change it). Note that a column so declared acts a bit differently than other integer columns, in that it can't be NULL or store a value that isn't an integer.

If you don't do this, then you can reference the rowid by several different names (like ROWID), assuming you haven't named some other column by that name. As was mentioned, one issue with this is that some operations can change this value, so you shouldn't be storing this number somewhere to find the record later (which is the big advantage of declaring a column to be the INTEGER PRIMARY KEY).

2020-08-10
10:52 Reply: Using WAL mode with multiple processes (artifact: e236e37925 user: RichardDamon)

Yes, the issue with the 'database is locked' error is that since WAL mode is a database property and not just a connection property, setting it requires getting a lock to write to the database, and I think it needs an exclusive lock, so if the database is busy with other connections, the change might get locked out.

02:55 Reply: Using WAL mode with multiple processes (artifact: 65314c7f1e user: RichardDamon)

Yes, WAL mode is a persistent property of the database file itself.

2020-08-03
12:01 Reply: null character sorts greater than 0x7f and less than 0x80 (artifact: d17ad0001c user: RichardDamon)

Personally, I sort of feel the opposite. CESU-8 exists because some platforms adopted UCS-2 when it was a thing, and when Unicode expanded past 16 bits and UCS-2 sort of morphed into UTF-16, applications that were still thinking in UCS-2 would generate CESU-8 for non-BMP characters, seeing them as a funny thing built out of two of the things they thought of as characters. Use of CESU-8 is less likely to cause a security issue for an application that generates CESU-8, as such an application will likely be processing internally as UTF-16, and given the sort of issues with CESU-8 (that code points have multiple spellings), it would tend not to do much processing on the stream in the CESU-8/UTF-8 encoded phase. Applications that process as UTF-8 would be advised to either reject CESU-8 or clean it up very early in processing.

Modified UTF-8, on the other hand, comes about from wanting to have embedded NULs in strings that are defined as NUL-terminated. The problem here is that any step that converts the format to UTF-32 or normalizes the string will suddenly find the string shortened. Any string that is capable of holding a NUL character should not be stored as a NUL-terminated string but as a counted string. Within an application, if you have defined that you are going to be using the technique, that is one thing, but using this method on external data that isn't defined to do so is asking for problems.

It should be noted that, in this case, SQLite is designed to be able to handle strings with embedded NULs in them; you just need to provide it with the full length of the string. This means that SQLite doesn't need this sort of trickery if the applications that use it support strings with embedded NULs and use the right API for them, and it also supports the use of Modified UTF-8 for those applications that want to handle the issue that way. SQLite internally processes strings as having a specified length; it just allows applications to pass it NUL-terminated strings and ask it to figure out their length.

2020-08-02
00:56 Reply: null character sorts greater than 0x7f and less than 0x80 (artifact: 42a35feef7 user: RichardDamon)

If, for internal storage, TCL converts strings with embedded NULs into Modified UTF-8 so it can use text with embedded NULs without the workaround needed to actually make SQLite handle the embedded NUL in the string, that is TCL's business, and it is TCL's responsibility to convert it back on reading.

If SQLite doesn't make the change going in, it shouldn't make the change coming out.

It doesn't matter that SQLite has a relationship with TCL; it is not dedicated to TCL, so it should not be doing TCL's job for it, or it might break other applications' use of this same encoding trick. Why should SQLite be bent to meet some other language's design intent?

Now, one side effect of using this encoding is that the simple collation sequence that SQLite uses by default will sort the value 0 between 0x7F and 0x80. If TCL thinks this error is significant, it can either not use the overlong encoding that causes the problem or provide a collation that handles it right. (Maybe it could be argued that SQLite could provide such a collation if the expense isn't that high.)

2020-08-01
23:30 Reply: null character sorts greater than 0x7f and less than 0x80 (artifact: 70b5fcd37f user: RichardDamon)

I am not sure it is SQLite's job to decide which variation(s) of UTF-8 to allow, or whether it should require the application to use strictly 'standard' UTF-8. CESU-8 is a separate variation (encoding non-BMP characters with the encodings of the surrogates, instead of the one single code point). That variation is basically orthogonal to Modified UTF-8, which deals only with the encoding of the code point 0. Maybe you could say that TCL is using "Modified CESU-8".

20:34 Reply: null character sorts greater than 0x7f and less than 0x80 (artifact: d0aa16abca user: RichardDamon)

But also see Section 4.2 (Modified UTF-8) about why it is also a 'standard' variation on UTF-8. It allows the insertion of the NUL character into 0-terminated strings without the character being taken as the end-of-string code, which is what was attempted.

2020-07-26
19:02 Reply: Suggestion: Use strong hashes on the download page (artifact: 86f03fe099 user: RichardDamon)

Forget for a moment the need to break into SQLite's servers ...as you must, since if you can do that, then you don't need any of these attacks at all! You can just upload whatever you want and change the hash, since they're served from the same place.

That's the problem with this whole idea of hashed downloads. I don't understand why anyone has any confidence in them.

Now, if there were some sort of trusted third party who would download things, check them, hash them, and serve up their own hashes, that might be valuable, but I'd expect to pay enterprise IT service sort of prices to get it.

I would think this argument pretty much defeats the request. A stronger hash provides NO more security as, in effect, the hash provides NO security beyond knowing that the file came from the official domain. Anyone who can put a false file on the server can change the hash to compare to. The only use of the hash is to check a download from a mirror.

2020-07-12
20:03 Reply: Endianness help (artifact: 3cb0f1f15f user: RichardDamon)

No, if you want to be able to switch from working with large units to bytes, then you would need to change the organization of the bits between big-endian and little-endian, so that the bit-fields lie in sequence in memory.

Thinking about it, unless adding an 8-bit field is very common, I am not sure that switching from bigger words to bytes is actually going to save you work on many machines. If you are inserting a 5 or 10 bit wide field, you are going to need to shift all the bits in all the bytes, and it will likely be quicker to do that work on full words, rather than individual bytes. Only in the special case of adding an exact multiple of 8 bits (that isn't also a multiple of the natural word size) would moving bytes make sense.

17:38 Reply: Endianness help (artifact: 3b719029d7 user: RichardDamon)

The first thing to realize is that unless the processor has instructions to access specific bits in the bytes (and there are processors with this ability) then the numbering of the bits within the bytes is purely arbitrary and by convention.

The statement about the 'typical bit order' for big-endian and little-endian machines sounds to me like a comment I have seen dealing with the traditional allocation of bits in C bit-fields inside a structure, that typically on a big-endian machine, those bit fields will be allocated starting with the MSB, while typically on a little-endian machine, those bit-fields will be allocated starting with the LSB. This only makes a difference if you are using the bit-fields to match hardware or use a union or other type-punning to see the actual value of the underlying word with the bit-fields within it.

For your problem, it turns out that due to the difference in how the big word gets broken down into bytes, you might want to do something similar to that too, allocating bit-fields from the top of the word. If you do that, then you will find that multi-bit fields become big-endian, with the MSB at the lower 'bit address', but that doesn't come from how the machine itself orders its values, but from how you are using them.

2020-07-07
12:02 Reply: 'Automatically' update R-Tree table (artifact: 6e76a0afbb user: RichardDamon)

Perhaps you could set up a trigger to update the R-Tree when the companion table is updated.
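
A rough sketch of that, with hypothetical names (assuming the R*Tree module is compiled in):

CREATE TABLE objects(id INTEGER PRIMARY KEY, minX REAL, maxX REAL, minY REAL, maxY REAL);
CREATE VIRTUAL TABLE objects_idx USING rtree(id, minX, maxX, minY, maxY);

CREATE TRIGGER objects_ai AFTER INSERT ON objects BEGIN
  INSERT INTO objects_idx VALUES (NEW.id, NEW.minX, NEW.maxX, NEW.minY, NEW.maxY);
END;

CREATE TRIGGER objects_au AFTER UPDATE ON objects BEGIN
  UPDATE objects_idx SET minX = NEW.minX, maxX = NEW.maxX, minY = NEW.minY, maxY = NEW.maxY
  WHERE id = NEW.id;
END;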
