SQLite Forum

Endianness help

(1) By curmudgeon on 2020-07-11 08:37:11 [link] [source]

This isn't technically an sqlite question but I'm hoping some of the august minds on this forum might help me in any case. Stackoverflow just hit me with smug soundbites. They tell me endianness doesn't come into it at bit/byte level yet this article (https://www.linuxjournal.com/article/6788) states

"Bit order usually follows the same endianness as the byte order for a given computer system. That is, in a big endian system the most significant bit is stored at the lowest bit address; in a little endian system, the least significant bit is stored at the lowest bit address."

If I save a char to a file on a memory stick and it contains the bit pattern 00000001 would it not be loaded as '@' on a little endian system and 0x01 on a big endian system? I'm thinking I would have to choose an endianness to save the char in (which would mean reversing the bit pattern if the host endianness didn't match my chosen endianness) and then I'd know whether to do the same when loaded onto a different host.

I notice in the sqlite3.c code that Richard saves 16 and 32 bit ints as big endian and converts them back if loaded onto a little endian system but I don't see any reversing of bits within a byte. Is the aforementioned article wrong or am I missing something?

(2) By Tim Streater (Clothears) on 2020-07-11 09:31:20 in reply to 1 [link] [source]

If you're saving a single byte, what has endianness, which has to do with byte order, got to do with it?

(3.2) By Simon Slavin (slavin) on 2023-08-07 14:28:22 edited from 3.1 in reply to 1 [link] [source]

Deleted

(4) By Larry Brasfield (LarryBrasfield) on 2020-07-11 09:50:41 in reply to 1 [link] [source]

For the question posed, Tim's answer suffices. I am responding to the "Bit order usually follows the same endianness ..." assertion.

Bollocks. The convention is pretty near universal (on Earth) that the least significant bit is referred to as bit 0 and the most significant bit in an N-bit word is referred to as bit N-1, regardless of the machine architecture. And for those few machines that provide bit addressability, the field within a machine instruction specifying the bit follows the same convention. Furthermore, when such instructions are exposed in assembly language, the universal convention is followed.

If Linux Journal said that, it's a sign that they're scraping the bottom of the barrel for articles and/or writers.

(5) By Deon Brewis (deonb) on 2020-07-11 10:17:13 in reply to 1 [link] [source]

Bit-level endianness is more of an implementation technicality since you don't generally have bit-level addressing instructions available on CPUs or storage controllers*.

Something like 0b10000000 might actually be stored as 0b00000001 in memory (PowerPC and SPARC come to mind - I don't know if any modern big-endian CPUs still do it), but as a byte it's still 0x80, and if you do >> 7 you still get 0x01. And when it stores it to disk, it will store it as a byte, and the disk will return it as a byte - no matter what platform the disk is connected to.

Serial wire protocols like UART or 802.3 do specify that you transmit LSB first, but unless you are literally developing the hardware that sits between a consistent big-endian CPU and a UART on a bus that's something other than PCI, you'll never have to deal with it.

NOTE: C bit-fields, which allow you to address into the middle of a byte, e.g. struct s { signed y:1; signed x:1; };, are a compiler abstraction that translates them into bitwise operations. And they're very much not serializable to a portable disk or wire format - not because of endianness so much, but because different compiler implementations have different storage representations for them.
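To make that last point concrete, here is a minimal sketch (nothing authoritative, just illustrative C): the compiler decides where each bit-field lands, so the portable way to serialize flags is to pack them yourself with shifts and masks.

#include <stdint.h>
#include <stdio.h>

/* Implementation-defined layout: the compiler chooses which bit-field
   occupies the low-order bit, so the raw bytes are not portable. */
struct s { unsigned y:1; unsigned x:1; };

int main(void) {
    struct s bf = { 1, 0 };          /* y = 1, x = 0; in-memory layout is compiler-specific */
    (void)bf;

    /* Portable alternative: define your own bit positions explicitly. */
    unsigned y = 1, x = 0;
    uint8_t packed = (uint8_t)((y << 0) | (x << 1));  /* bit 0 = y, bit 1 = x, by our own convention */
    printf("packed = 0x%02x\n", packed);              /* 0x01 on every compiler and platform */
    return 0;
}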

(6) By Warren Young (wyoung) on 2020-07-11 11:03:28 in reply to 1 [link] [source]

This isn't technically an sqlite question

SQLite stores integers in big-endian byte order so that databases are binary-compatible across platforms. In the context of SQLite, then, your question is moot: SQLite handles it. The only reason to explore further is to either go off-topic or to dig into the SQLite implementation details.

If the former, please don't. :)

If the latter, are you sure you want to second-guess drh about data storage integrity and cross-platform compatibility? Instead of asking why the code doesn't behave as you expect, you should be questioning your expectations after seeing that SQLite does things this way. Which is more likely the case: you're wrong on this, or drh is?

Stackoverflow just hit me with smug soundbites.

Point us at the question. If there's a correction needed, I'm sure there are enough SO users here that the necessary adjustments will occur. Either that, or we'll agree with the answer(s) you got.

(7) By J. King (jking) on 2020-07-11 12:25:10 in reply to 4 [link] [source]

If Linux Journal said that, it's a sign that they're scraping the bottom of the barrel for articles and/or writers.

Linux Journal is long past that: they ceased publication for the second (and seemingly final) time a year ago. I was never a reader, but it stands to reason quality suffered at the end.

(8.2) By curmudgeon on 2020-07-11 13:18:51 edited from 8.1 in reply to 2 [link] [source]

Deleted because it was posted in wrong place.

(9) By curmudgeon on 2020-07-11 13:17:51 in reply to 1 [link] [source]

Thanks for the replies.

After reading Simon's post I'm a bit less worried about the external storage concern but I'm still not totally convinced. Suppose on my little endian machine I have

uint8_t c[] = {0x01, 0x02};

and I save that char array to a file on memory stick.

Regardless of how that file is stored on the stick, if I load that file back into memory at address a (using an appropriate C function) then

on a little endian machine the bit pattern in a and a+1 will be

10000000 01000000

and on a big endian machine

00000001 00000010

If that's the case I could live with that, but it then begs the question: why does drh feel the need to store ints in big endian format if the storage device & processor will do it for him? (No Warren, I don't think drh is wrong - I'm trying to understand.)

(10) By curmudgeon on 2020-07-11 13:36:18 in reply to 9 [link] [source]

Suppose I added

uint16_t x = *(uint16_t*)a << 8;

Would the bit pattern for x on the little endian system be

00000000 10000000

and

00000010 00000000

on the big endian system?

If that's the case would that not support the Linux article?

(11) By TripeHound on 2020-07-11 13:41:35 in reply to 9 [link] [source]

As others have alluded to, I don't know of any processor/system that stores the bits in a byte in a different order. I've never heard of there being a "bit order issue" when moving a file from one computer to any other. (But I cannot say there has never been a machine on which this would be a problem). If the SO answers are suggesting this, they're either mistaken (IMHO) or someone is being excessively pedantic without making clear it's an extreme possibility only.

If you write (and read) a sequence of bytes then endianness isn't an issue. They will be written and read in the order they are stored in memory (and in such an array, both types of machine would store the 0x01 in a lower memory address than 0x02).

Endianness comes into play when you deal with "larger than byte" things: e.g. multi-byte integers. If you had a 16-bit (2-byte) integer with value 0x0102 then a big-endian machine would store it in memory and write it as the two bytes 0x01 and 0x02, whereas a little-endian machine would store it and write 0x02 and 0x01.

As for SQLite, it ensures that what's written to disk is always in big-endian format, so that if you move a database file from one system to another with a different native byte order, it will not matter. One of those machines would be able to read the data "natively"; the other would need to swap bytes around as it read and wrote them. But both would handle the database identically.
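As a concrete sketch of that idea (toy code, not SQLite's actual routines), writing and reading a 16-bit value one byte at a time, most significant byte first, produces identical file contents on either kind of machine, because the shifts operate on values rather than on stored bytes:

#include <stdint.h>

/* Serialize a 16-bit value in big-endian order, independent of host byte order. */
static void put_be16(uint8_t out[2], uint16_t v) {
    out[0] = (uint8_t)(v >> 8);      /* high byte goes to the lower address */
    out[1] = (uint8_t)(v & 0xFF);    /* low byte follows */
}

static uint16_t get_be16(const uint8_t in[2]) {
    return (uint16_t)((in[0] << 8) | in[1]);   /* rebuild the value from the two bytes */
}

A little-endian host does the byte swap implicitly in those shifts; a big-endian host effectively copies straight through, which is the "native" case mentioned above.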

(12) By Warren Young (wyoung) on 2020-07-11 13:50:40 in reply to 9 [link] [source]

Larry's right: the bits are not flipped end for end in a byte as part of endianness. That concept only covers the order of bytes in a word.

It doesn't even make sense in typical computers to talk about which bit is on the "left" in RAM. The only way you see it as a programmer is the way you'd write the number on paper, least significant digit on the right.

Beyond that, you have to get into DRAM array structures, where you find that bits aren't "to the left" of others at all, but in 2D arrays of strobed columns and rows.

why does drh feel the need to store ints in big endian format...

Because when SQLite was designed, big-endian processors were far more common than they are now, and the most influential standards for how you store multi-byte words were set by Sun Microsystems, who was a major champion of big-endian architectures. (68k to start, SPARC later.)

Those key standards were the BSD sockets API with its ntohs() and htonl() family of functions, and XDR, underpinning Sun's highly-successful NFS protocol. This set the so-called "network byte order," which is arguably backwards, but we're stuck with it now.
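For anyone who hasn't used them, a short example of those functions (POSIX <arpa/inet.h>; error handling and the actual I/O are omitted) converting a 32-bit value to network byte order before it goes out and back again after it comes in:

#include <arpa/inet.h>   /* htonl(), ntohl() */
#include <stdint.h>
#include <string.h>

/* Host value -> 4 bytes in network (big-endian) order, and back. */
static void pack_u32(uint8_t out[4], uint32_t host_value) {
    uint32_t net = htonl(host_value);    /* byte swap on little-endian hosts, no-op on big-endian */
    memcpy(out, &net, sizeof net);
}

static uint32_t unpack_u32(const uint8_t in[4]) {
    uint32_t net;
    memcpy(&net, in, sizeof net);
    return ntohl(net);
}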

if the storage device & processor will do it for him

I think you're reading too much into the others' comments to say it that way.

The whole point of caring about endianness is that the processor has only a single mode.

Some processors can switch modes, but they run in only one mode at a time. Generally speaking, it's set once early in the system's boot and never changed.

Given that we expect a SQLite DB to work on big- and little-endian processors, SQLite has to pick a mode and stick with it. drh could have chosen to store words in the order used by the machine that created it, requiring twice the code for making conversions and a bunch of conditional code to switch between them, but instead he just picked big-endian, probably for the "network byte order" reasons above.

(13.1) By Larry Brasfield (LarryBrasfield) on 2020-07-11 14:39:48 edited from 13.0 in reply to 9 [link] [source]

(Regarding uint8_t c[] = {0x01, 0x02}; saved to a file and retrieved:)

Regardless of how that file is stored on the stick if I load that file back into memory at address a (using an appropriate c function) then on a little endian machine the bit pattern in a and a+1 will be 10000000 01000000 and on a big endian machine 00000001 00000010

For this discussion, and certainly for thinking about these issues, it is necessary to distinguish how we refer to bits from how they exist in the hardware. The confusion exhibited in the above quote is, in part, due to conflating representations.

If, after the above file save operation, byte array c[] is read back from the file (on any system made and/or used by sane people), then the following C code will print "Bits survive to/from file round trip!":

if (c[0] == 0x01 && c[1] == 0x02) printf("%s\n", "Bits survive to/from file round trip!");

And if we (the sane) are going to speak of binary representations in text, using the undisputed convention that digits of lesser significance appear to the right of digits of greater significance, we will agree that 00000001 is the correct binary written form of the value we know as 0x01 or 1, and that 00000010 is the correct binary written form of the value we know as 0x02 or 2.

As someone who has looked at the architecture-dependent and platform-dependent parts of the SQLite3 code, I can warrant that nothing there is concerned with straightening out bit order mix-ups that might be suspected to arise from cross-platform database transfers.

The whole concern with bit-order is misplaced, and anybody (on SO or anywhere else) who helped cement or magnify that worry needs a good talking-to. It is a non-problem, best solved by reorganizing the thinking that produced the worry.

(Later edit:) I'm not saying that the worry is insane. It would be possible to (mis)design, then build some hardware where the worry (or nightmare) would be matched by real-world events. The most likely outcome of that build would be great embarrassment by the hardware designer(s) responsible for the error, soon followed by some rewiring (if possible) and redesign. Possibly, the hardware designer(s) would say "That's how we want it to work. The users must adapt!", in which case they would soon be fired from any organization that is not doomed to quick failure. If such hardware was offered for sale, it would be scorned and ridiculed, and would not be in common use or supported by any common operating system or other software. In other words, it will not happen in the semi-sane world we have today.

(14) By Tim Streater (Clothears) on 2020-07-11 14:12:55 in reply to 9 [link] [source]

No it won't. If you have:

uint8_t c[] = {0x01, 0x02};

then that will be stored as 00000001 00000010 on whatever machine you have.

The question of endianness affects ONLY the order of bytes within a larger item, such as a 16-, 32-, or 64-bit integer.

If I have four bytes, thus:

uint8_t c[] = {0x01, 0x02, 0x03, 0x04};

then on a big endian machine they will be loaded into a 32-bit register as:

0x01020304

whereas on a little-endian machine they will be loaded into a 32-bit register as:

0x04030201

The bit order within a byte is NOT reversed at all. The bits may be numbered in the opposite order, but they are stored the same.
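A quick way to see that on whatever machine is to hand (a throwaway snippet, nothing more): copy the four bytes into a uint32_t and print it. A big-endian host prints 0x01020304, a little-endian host prints 0x04030201, yet c[0] is 0x01 on both, because the bits within each byte are untouched.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    uint8_t c[] = {0x01, 0x02, 0x03, 0x04};
    uint32_t r;
    memcpy(&r, c, sizeof r);                        /* view the 4 bytes as one 32-bit word */
    printf("as uint32_t: 0x%08x\n", (unsigned)r);   /* 0x01020304 or 0x04030201, host-dependent */
    printf("c[0] is still 0x%02x\n", c[0]);         /* 0x01 everywhere */
    return 0;
}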

(15.1) By curmudgeon on 2020-07-11 15:10:28 edited from 15.0 in reply to 14 [link] [source]

Now we are getting down to the nitty gritty and I thank you all for that.

When I first looked at endianness I imagined the bits in a byte being numbered from 0 on the left to 7 on the right (not that the direction mattered) and that was the way they'd be stored in memory or any external device. I saw it the following way

uint16_t x = 1 + (2 << 8);

On a little endian system the 'bulbs' in x and x+1 would look like

10000000 01000000

and on a big endian system as

01000000 10000000

which differ only in the order of the bytes. That Linux article turned that thinking on its head and is the source of my confusion. Can you guys confirm I was on the right track originally so we can all get our lives back?

PS Got to say that little endianness is by far the less confusing when thinking 'bulbs'.

(16) By Larry Brasfield (LarryBrasfield) on 2020-07-11 15:09:00 in reply to 15.0 [link] [source]

I looked over that 17 year old Linux Journal article. Most of it was less zany than I expected, but that bit order assertion was there, plainly wrong, and untied to the discussion of bit order issues in contexts where hardware necessarily must order them (and necessarily, to be salable, must correctly reassemble them into bytes, words, etc.). If that assertion were simply struck, the article would be somewhat respectable. Apparently, it escaped the timely notice of any competent reviewer.

(17) By curmudgeon on 2020-07-11 15:20:46 in reply to 9 [link] [source]

Thanks for all your contributions and my apologies for wasting everyone's time. As somebody said on one of the threads I read on the subject "the only good endian is a dead endian".

(18) By Tim Streater (Clothears) on 2020-07-11 15:23:00 in reply to 4 [link] [source]

I imagine that's true today, but I've largely given up looking at the bits. See, however:

http://www.bitsavers.org/pdf/sds/sigma/sigma7/900950J_Sigma7_RefMan_Oct73.pdf

This is a machine (or a clone of it, anyway) I worked on back in the day. Look at P7 on Information Format for their view of things :-)

(19.1) By Larry Brasfield (LarryBrasfield) on 2020-07-11 19:05:14 edited from 19.0 in reply to 18 [link] [source]

I said the bit numbering convention is pretty near universal because: (1) There is no law of physics making it so; and (2) There was a time, in living memory for some (such as you and me), when that convention had not been settled upon. Fortunately, it has been settled since before systems in (common) use today were devised and documented.

The now-settled convention makes the number with which we label the bit the same as the base 2 logarithm of its weight when treated as part of an integer. This is much easier to remember than "A bit's number is the word size - log2(its weight) - 1." (e.g. the Xerox Sigma 7.) This convention is less crazy-making too, a feature I believe led to its ultimate prevalence.

It might be noticed that the OP mentioned "bit address", which could be (and sometimes should be) independent of the bit labeling scheme. The ordering of bits written in text could be yet another independent variable, varying among people. (I understand that the left-to-right <=> earlier-to-later mapping is not universal.)

Fortunately, today, regardless of what people choose to call the bits, or how they elect to depict them, when a byte with value treated as N is written to a portable storage medium or communication channel and read back from that into another competent [a] machine as a byte, that byte will have the value treated as N. For example, I wrote this post on a machine whose least significant bit is called 'Chester', (a purely local and transient convention), yet when displayed on other machines, the glyphs I saw while writing it likely resemble the ones seen by those who read this far. That is true for the same reason this thread is much ado about nothing.

[a. By "competent", I mean outside of the scorn-worthy set. ]

(20.1) By curmudgeon on 2020-07-12 09:03:26 edited from 20.0 in reply to 17 [link] [source]

In case anyone's wondering how I got embroiled in all this. Through a lengthy period of illness I've been trying to teach myself proper C & C++. I came across the bit array in C++ and thought I'd learn a lot by trying to create something similar in C. I wanted it to be such that I could insert and delete blocks of bits into the bit array (similar to a C++ vector) in an efficient way. Doing it on my little endian laptop was fairly easy if time consuming. The good thing about little endian was you could break into the byte array at any index with a uint64_t* and shift the u64 as a whole. Doing the same on a big endian machine is a lot less intuitive and requires shifting in opposite directions.
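(For what it's worth, the portable way I'd now spell that "break into the byte array at any index" trick is a memcpy plus a conditional byte swap - just a sketch, the helper name is my own - which also sidesteps the alignment and strict-aliasing problems of casting straight to uint64_t*.)

#include <stdint.h>
#include <string.h>

/* Hypothetical helper: read 8 bytes starting at byte offset i of a
   little-endian-ordered bit array and return them as one shiftable word.
   Caller must guarantee at least 8 readable bytes at buf + i. */
static uint64_t load_u64_le(const uint8_t *buf, size_t i) {
    uint64_t w;
    memcpy(&w, buf + i, sizeof w);    /* avoids an unaligned uint64_t* access */
#if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
    w = __builtin_bswap64(w);         /* GCC/Clang builtin: give the word the same meaning on big-endian hosts */
#endif
    return w;
}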

When I tried to code it so that it would work on a big endian machine then I hit the confusion and started freaking out about bit order, storing and reading from disc etc. Reading everything I searched just added to the confusion. Any questions on the subject on stack overflow were polluted with 'endianness doesn't apply to bits' soundbites but no one debunked the claim in the linux article, no one stated the bit pattern for a byte would be the same on disc regardless of the endianness, no one mentioned DRAM array structures. For that I thank you all and hope that I won't be the only one to learn something from this thread. Going by stack overflow I wasn't the only one asking these types of questions. "It will just work" doesn't do it for me.

(21) By Deon Brewis (deonb) on 2020-07-12 15:42:54 in reply to 17 [link] [source]

If you want to read up more on this topic, the terms you can search for are "consistent" vs "inconsistent" (little/big endian):

  • Consistent little endian (Intel 80x86): bytes right to left, bits right to left

  • Consistent big endian (PDP11, TI 9900): bytes left to right, bits left to right

  • Inconsistent little endian (?): bytes right to left, bits left to right

  • Inconsistent big endian (Motorola 68000): bytes left to right, bits right to left

(22) By Tim Streater (Clothears) on 2020-07-12 16:35:48 in reply to 21 [link] [source]

Ahem. PDP11 was little-endian. Try Sigma 7 instead.

(23) By Richard Damon (RichardDamon) on 2020-07-12 17:38:49 in reply to 20.1 [link] [source]

The first thing to realize is that unless the processor has instructions to access specific bits in the bytes (and there are processors with this ability) then the numbering of the bits within the bytes is purely arbitrary and by convention.

The statement about the 'typical bit order' for big-endian and little-endian machines sounds to me like a comment I have seen about the traditional allocation of bits in C bit-fields inside a structure: typically on a big-endian machine those bit-fields will be allocated starting with the MSB, while on a little-endian machine they will be allocated starting with the LSB. This only makes a difference if you are using the bit-fields to match hardware, or use a union or other type-punning to see the actual value of the underlying word with the bit-fields within it.

For your problem, it turns out that due to the difference in how the big word gets broken down into bytes, you might want to do something similar to that also, allocating bit fields from the top of the word. If you do that, then you will find that multi-bit fields become big-endian, with the MSB at the lower 'bit address', but that doesn't come from how the machine itself orders its values, but from how you are using them.
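If anyone wants to see what their own compiler chose (the layout is implementation-defined, so this only reports one toolchain's decision, it proves nothing in general), a union makes the allocation visible:

#include <stdio.h>
#include <string.h>

/* Type punning through a union shows where the compiler put the first-declared bit-field. */
union probe {
    struct { unsigned first:1; } f;
    unsigned char bytes[sizeof(unsigned)];
};

int main(void) {
    union probe p;
    memset(&p, 0, sizeof p);     /* clear every byte first */
    p.f.first = 1;
    for (size_t i = 0; i < sizeof p.bytes; ++i)
        printf("byte %zu: 0x%02x\n", i, p.bytes[i]);
    /* Commonly: 0x01 in the low-addressed byte where bit-fields start at the LSB
       (typical little-endian ABIs), 0x80 where they start at the MSB
       (typical big-endian ABIs). Neither is guaranteed by the C standard. */
    return 0;
}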

(24) By curmudgeon on 2020-07-12 18:50:11 in reply to 21 [link] [source]

Deon, the only thing I could find was

https://lwn.net/Articles/224525/

(25) By curmudgeon on 2020-07-12 19:03:37 in reply to 23 [link] [source]

Sorry Richard, but I'm not sure what you're suggesting. Are you saying I could make an n bit array big-endian regardless of underlying endianness?

(26) By Larry Brasfield (LarryBrasfield) on 2020-07-12 19:27:07 in reply to 21 [link] [source]

(Wondering why the topics least tied to reality generate the most posts ...)

As Mr. Damon said,

The first thing to realize is that unless the processor has instructions to access specific bits in the bytes (and there are processors with this ability) then the numbering of the bits within the bytes is purely arbitrary and by convention.

The processors where bit numbering has some reality independent of yammering about it are even fewer, being those which permit bits specified by their number to be accessed or altered. Only those could be assigned a bit-endian attribute by somebody who dug up a working machine (or its physical design documents) and had to categorize it as little-bit-endian or big-bit-endian, without being able to point to documents merely labeling the bits as bigger or littler.

Regarding Mr. Brewis' claim,

... bits right to left ... bits left to right ...

Sorry, but that can only be something that might appear in documentation, (and would depend on whether it was held upside down or not), but is not an attribute of the machines themselves. As Mr. Warren said for RAM, the hardware is utterly agnostic about right or left, top or bottom, fore or aft.

Regarding Mr. Brewis' claim,

... bytes right to left ... bytes left to right ...

Urrghh. Another confusion factor. The usual criterion for byte endianality is the relation between byte address ordering and arithmetic weight of the bytes within larger words. It is only when such matters are committed to diagrams that notions of right or left can even be sensibly applied.

Furthermore, not every machine must be either little-endian or big-endian. For an example, see PDP-endian. In the general case, if we compile something like this C program:

#include <stdio.h>
typedef long MachineWord;
union ExposeEndianality {
  MachineWord asMachineWord;
  unsigned char asBytes[ sizeof(MachineWord) ];
} endianality = { (MachineWord)0x0706050403020100L };
int main() {
  int i;
  for (i = 0; i < sizeof(MachineWord); ++i)
    printf("%1d\n", endianality.asBytes[i]);
  return 0;
}

the small numbers comprising endianality can be emitted in any order, limited only by the degree to which the machine designer(s) wished to avoid perversity. That PDP machine might have produced 2 3 0 1. I defy anybody to write a proper C program which will expose bit-endianality in similar fashion.

(27) By Richard Damon (RichardDamon) on 2020-07-12 20:03:16 in reply to 25 [link] [source]

No, if you want to be able to switch from working with large units to bytes, then you would need to change the organization of the bits between big-endian and little-endian, so that the bit-fields lie in sequence in memory.

Thinking about it, unless adding an 8-bit field is very common, I am not sure that switching from bigger words to bytes is actually going to save you work on many machines. If you are inserting a 5 or 10 bit wide field, you are going to need to shift all the bits in all the bytes, and it will likely be quicker to do that work on full words, rather than individual bytes. Only in the special case of adding an exact multiple of 8 bits (that isn't also a multiple of the natural word size) would moving bytes make sense.
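To make "do that work on full words" concrete, here is a rough sketch (mine, not tested against your design) of shifting a little-endian-ordered bit array up by k bits, one 64-bit word at a time, with w[0] holding the lowest-numbered bits:

#include <stdint.h>
#include <stddef.h>

/* Shift n 64-bit words (n >= 1) up by k bits, 0 < k < 64.
   Zeros are shifted in at the bottom; the top k bits fall off the end. */
static void shift_up(uint64_t *w, size_t n, unsigned k) {
    for (size_t i = n; i-- > 1; )
        w[i] = (w[i] << k) | (w[i - 1] >> (64 - k));   /* carry bits in from the word below */
    w[0] <<= k;
}

Inserting a 5- or 10-bit field is then a shift like this over the words above the insertion point, plus a little masking at the boundary word.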

(28) By Larry Brasfield (LarryBrasfield) on 2020-07-12 21:13:46 in reply to 1 [link] [source]

IMO, no discussion of endianality is complete without Danny Cohen's treatise on the subject, written 40+ years ago. Do not let the date fool you as to the seriousness of the subject. It is well worth reading for its careful elaboration of the issues.

(29) By David Jones (vman59) on 2020-07-12 23:20:51 in reply to 28 [link] [source]

I've read that in Arabic, decimal numbers are little-endian, but when the notation was adopted by Europeans it became big-endian because the writing went from right-to-left to left-to-right while the numbers themselves kept the same order.

(30) By Gunter Hick (gunter_hick) on 2020-07-13 07:21:38 in reply to 29 [link] [source]

Are you sure it is not the other way around?

"There were 12 Apostles." strikes me as litle endian (the 2 is later than the 10) whereas ".seltsopA 12 erew erehT" would seem to be big endian the (2 comes before the 10).

(31) By Ryan Smith (cuz) on 2020-07-13 07:40:10 in reply to 30 [link] [source]

Are you sure it is not the other way around?

The notion he tries to convey, that the writing order reversed for us but the numbers stayed the same and now is still "little endian", must indicate that he meant "little endian" from our current point of view and not from the Arabic point of view.

Your point is not wrong though, but the very notion of "end" is subjective. Some might say the end of the atmosphere is roughly 100km up while others might say it's at ground-level. They'd both be correct.

(32) By curmudgeon on 2020-07-13 08:20:24 in reply to 29 [link] [source]

When I first read your post I thought "feck me, he's read Danny Cohen's treatise in Arabic" :-)

(33) By Gunter Hick (gunter_hick) on 2020-07-13 08:28:35 in reply to 31 [link] [source]

BTW the Romans until about the 4th century BC would alternate writing direction each time they changed lines, just like plowing a field and for similar reasons. While reading a scroll, one would transfer it from the RH spindle to the LH spindle and be unwilling to "rewind" just to read the next line, so writing direction was simply reversed.

It took them a while to duplicate dividing the scroll into pages like the Hebrew scholars (who invented fixed length lines and checksums in order to faithfully reproduce the Torah; each line had the same number of letters and the numerical values of the letters were added to produce a checksum) had been doing for centuries.

(34) By David Jones (vman59) on 2020-07-13 12:34:21 in reply to 31 [source]

Intel is little-endian because if you write the bytes in the order of ascending address, the least significant byte (lsb) is first. When writing the number 12, I write the most significant digit (1) first, so I'd consider that big-endian.

Perhaps it should be called little-beginian, but that's just awkward.

(35) By Deon Brewis (deonb) on 2020-07-13 13:01:45 in reply to 26 [link] [source]

I agree, because of the nature of instructions on the CPU, you won't be able to observe bit-endianness from the CPU itself. It would require probing it with a logic analyzer.

Same for storage - there are no instructions that say: "Write 1 bit to the drive". However, there are instructions that write 1 byte, which is why byte-endianness matters in practice, but bit-endianness doesn't.

(36) By Larry Brasfield (LarryBrasfield) on 2020-07-13 13:14:18 in reply to 13.1 [link] [source]

It doesn't even make sense in typical computers to talk about which bit is on the "left" in RAM. The only way you see it as a programmer is the way you'd write the number on paper, least significant digit on the right.

Without disputing that, (particularly as it relates to RAM), the idea of right and left applied to bits within a larger word has been around for over 50 years [a], and I am pretty sure the terms have been applied consistently. All of the dozens of CPU's that I have examined the instruction set for have shift and/or rotate instructions, usually for both directions. (In some, the shift amount is signed so a single instruction covers both directions.) Without exception, a (by 1) left shift/rotate moves the most significant bit value, either out of the operand (into the carry flag) or around into the LSB, and the other bit values get a *= 2 promotion. Similarly, a (by 1) right shift/rotate moves the least significant bit value and most of the bit values get a /= 2 demotion.

[a. See the Intel 4004 datasheet, where the RAL and RAR instructions appear on page 8-18. The 4004 design began in 1970. CPUs made with discrete logic (or individual transistors or vacuum tubes) appeared earlier and followed the same convention, for example the CDC 6400+ with "arithmetic right shift" and "left shift" instructions. ]

Bit endianality within computers has long been well settled; the littlest bit is on the "right", whether it is called bit 0, bit Nw-1, or Chester.
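A trivial demonstration of that convention, for anyone who wants it spelled out (plain C, nothing clever):

#include <stdio.h>

int main(void) {
    unsigned v = 44;
    printf("%u %u\n", v << 1, v >> 1);   /* left shift doubles (88), right shift halves (22) */
    return 0;
}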

(37) By curmudgeon on 2020-07-13 14:09:50 in reply to 28 [link] [source]

Larry, I think if I had read that treatise at the height of my confusion I'd have topped myself. Mind you, I'm not sure if that would've meant I'd have hung myself or cut my feet off.

(38) By curmudgeon on 2020-07-15 16:58:08 in reply to 17 [link] [source]

It's good to know you're not alone.

https://www.embeddedrelated.com/showthread/comp.arch.embedded/60464-1.php

(39) By JFMcLuggage (mccon01) on 2023-08-08 16:38:01 in reply to 33 [link] [source]

This is probably why, unless I've missed a recent paper, we still can't read Minoan Linear A.