SQLite Forum

50 most recent forum posts by user wyoung

2022-01-27
08:51 Edit reply: Docs: julianday() returns a real number, not a string (artifact: e4fbd181f6 user: wyoung)

I don’t see it in the current online docs

It’s right here.

must be a recent addition.

It appears to go back about two months.

However, there’s an older facility going by that name in SQLite going back to at least 2003.

08:50 Reply: Docs: julianday() returns a real number, not a string (artifact: 03dba7c847 user: wyoung)

I don’t see it in the current online docs

It’s right here.

must be a recent addition.

It appears to go back about two months.

However, there’s an older facility going by that name in SQLite going back to at least 2003.

2022-01-26
11:49 Edit reply: How to start a program that use SQLITE with SYSTEMD ? (artifact: 24e8d4b375 user: wyoung)

Not sure stackexchange is an authority on any matter

I wrote that answer, and I pointed you to it so I don't have to rewrite that long answer here. I then took special care to point you to the SVR3.2 manual link, which I should think is an authoritative source on UNIX System V Release 3.2. If you don't believe AT&T on such matters, I don't know what you'll accept as authoritative.

If you're still feeling "right" about this, find me a first-party AT&T or BSD reference manual using the phrase "User System Resources". I've just searched my local archive of historical UNIX manuals, and I can't find one.

Since my collection is merely large, not comprehensive, I then tried to find that phrase with a full-text search on archive.org. Excepting either irrelevancies (e.g. something on Windows's GDI library) or repetitions of the TLDP document you linked to, I came up empty.

More, I was a Unix user at the time, and I assure you, we never used that backronym expansion. Until SVR4, /usr was the place where the users' files lived, as opposed to the system files. The first time I heard it used was probably in the mid to late 1990s, after a quarter century of Unix's history had already passed.

it is not for variable data

That is one of the many changes in SVR4, where /var was born.

The thing is, that change was made some 18 years into a history of such files properly living in /usr. As we have agreed, rules change.

This from the Linux Document Project:

Yes, that doc supports my contention: it's a backronym.

I'm not telling you that people don't use it. I'm telling you that it isn't what Ken Thompson and Dennis Ritchie had in mind when they named it /usr.

11:45 Reply: How to start a program that use SQLITE with SYSTEMD ? (artifact: 961018b3d6 user: wyoung)

Not sure stackexchange is an authority on any matter

I wrote that answer, and I pointed you to it so I don't have to rewrite that long answer here. I then took special care to point you to the SVR3.2 manual link, which I should think is an authoritative source on UNIX System V Release 3.2. If you don't believe AT&T on such matters, I don't know what you'll accept as authoritative.

If you're still feeling "right" about this, find me a first-party AT&T or BSD reference manual using the phrase "User System Resources". I've just searched my local archive of historical UNIX manuals, and I can't find one.

Since my collection is merely large, not comprehensive, I then tried to find that phrase with a full-text search on archive.org. Excepting either irrelevancies (e.g. something on Windows's GDI library) or repetitions of the TLDP document you linked to, I came up empty.

More, I was a Unix user at the time, and I assure you, we never used that backronym expansion. Until SVR4, /usr was the place where the users' files lived, as opposed to the system files. The first time I heard it used was probably in the mid to late 1990s, after a quarter century of Unix's history had already passed.

it is not for variable data

That is one of the many changes in SVR4, where /var was born.

The thing is, that change was made some 18 years into a history of such files living in /usr.

This from the Linux Document Project:

Yes, that doc supports my contention: it's a backronym.

I'm not telling you that people don't use it. I'm telling you that it isn't what Ken Thompson and Dennis Ritchie had in mind when they named it /usr.

10:59 Reply: How to start a program that use SQLITE with SYSTEMD ? (artifact: 8984fe73f2 user: wyoung)

a lot of people…mis-attribute the /usr folder as a "User" folder

But it was. See the primary source linked from footnote 1.

"usr" = "Unix System Resource"

Sorry, but that's an ahistoric backronym.

it is really not the correct folder for these kinds of files.

It was back when /usr also contained the users' home directories. Thus the existence of /usr/spool/mail on my UNIX V7 virtual machine.

However, rules change over time, many times, not just from RHEL 7 to RHEL 8.

04:39 Edit reply: How to start a program that use SQLITE with SYSTEMD ? (artifact: 75a09dfee8 user: wyoung)

I'm going to guess that this is RHEL or a derivative, where the problem isn't systemd but SELinux. In RHEL8, they tightened down a lot of the rules on use of directories to properly enforce the FHS rules.

The bottom line is that /usr/local isn't where user data is supposed to go, so SELinux was tuned to detect such attempts and flag them as likely bugs or security sandbox escape attempts.

The usual location for server databases is under /var somewhere. If this is used by only a single service, then /var/mine/db might be sensible.

Read up on FHS for more ideas.

Realize that you're way out past the topic for this forum. I responded only because it comes up in your use of SQLite, but SQLite can do nothing about these matters, nor should it try. Put your DB where your OS expects to allow background services to write files, and your symptom will go away.

04:31 Reply: How to start a program that use SQLITE with SYSTEMD ? (artifact: d15a79e636 user: wyoung)

I'm going to guess that this is RHEL or a derivative, where the problem isn't systemd but SELinux. In RHEL8, they tightened down a lot of the rules on use of directories to properly enforce the FHS rules.

The bottom line is that /usr/local isn't where user data is supposed to go, so SELinux was tuned to detect such attempts and flag them as likely bugs or security sandbox escapes.

The usual location for server databases is under /var somewhere. If this is used by only a single service, then /var/mine/db might be sensible.

Read up on FHS for more ideas.

2022-01-08
05:38 Reply: Possible documentation update (artifact: 1621c18e20 user: wyoung)

All of the "became executable" warnings on the second should be fixed.

This typically happens when working on Windows, where the ancient FAT "archive" bit is used to emulate Unix's executable flag in POSIX contexts.

Fossil warns you of this in status output, saying EXECUTABLE rather than CHANGED.

2022-01-06
12:21 Reply: Failing builds on Apple M1 due to CPU arch and GNU Coreutils (artifact: fa97261a81 user: wyoung)

The key difference seems to be whatever this NixOS environment does to the stock platform.

2021-12-21
18:55 Reply: Distributed backups from a database of databases (artifact: f899d22aa2 user: wyoung)

If you want a distributed SQLite database, why do it that way? Use one of the several distributed SQLite variants and be done with it. The major ones are BedrockDB, rqlite, and dqlite. Now all machines in your network have a copy of the database, and it's kept up-to-date.

2021-12-18
11:27 Reply: \n breaks reading from SQLITE3 (artifact: 89b0628913 user: wyoung)

In addition to the other answers, I have to ask, what did you expect to see instead? An actual "\n" pair of characters?

Realize that the C compiler turns "\n" inside a string into a byte called a newline — value 10 decimal — on ASCII machines, and it is this byte that SQLite is storing for you and faithfully retrieving. The "\n" pair only exists in the program's source code.
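
A quick way to convince yourself of that is a tiny C program with no SQLite in it at all; the escape is resolved by the compiler before anything is ever stored:

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        const char *s = "a\nb";              /* "\n" becomes a single byte here */
        printf("length: %zu\n", strlen(s));  /* prints 3, not 4 */
        printf("middle byte: %d\n", s[1]);   /* prints 10 on an ASCII machine */
        return 0;
    }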

2021-12-17
16:51 Reply: Trying to "Open Database" Safari History.db file and get Error: "could not open safari database file reason: unable to open.. (artifact: 9708a483e0 user: wyoung)
2021-12-15
18:47 Reply: persistent in-memory database (artifact: 5bfecb579c user: wyoung)

There are two kinds of Optane storage.

One kind is "Optane SSD", and as far as I know, it just looks like a regular block device: put a filesystem on it, mount it, and use SQLite as normal.

The other kind is a form of NVDIMM, which the OS kernel handles differently. See for instance the RHEL docs on this, which should apply to other Linuxes of the same generation or later.

I don't know if it's legal to put a filesystem onto an NVDIMM device. If so, then you can treat it like a special type of SSD. If not, then we have to ask if SQLite supports raw device I/O.

You certainly cannot use SQLite with raw devices in WAL mode, since that wouldn't permit SQLite to store the separate WAL and SHM files.

It might be possible to hack the traditional I/O mode of SQLite to use a raw device, though.

18:22 Reply: High CPU RtlpEnterCriticalSectionContended when using SQLite (artifact: d6b01809e7 user: wyoung)

I'm not going to tell you that a specific CPU cost is expected because it depends on your data, your usage pattern, and so forth, all details you've left out of your question.

If this is a synthetic benchmark doing inserts as fast as possible, then it's likely that your 30% CPU hit value is bogus: it only occurs in this unrealistic scenario.

If instead it's inserting at a normal rate and you can still measure such an increase, then we'll want to see what it is you're trying to do to give you any kind of assurance that your measured value is sensible.

All I was actually saying above is that some CPU hit is expected. The only "free" lock is no lock. There do exist lock-free data structures, but SQLite is not such a one.

12:49 Reply: High CPU RtlpEnterCriticalSectionContended when using SQLite (artifact: 4e15083181 user: wyoung)

If this is a single-threaded application and you're comparing to a :memory: SQLite DB, then I do believe there are settings you can make in SQLite to disable much of the locking.
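
To sketch the kind of settings I mean (these are standard SQLite options, but whether they help depends entirely on your workload, and they are only appropriate when the program really is single-threaded):

    #include <sqlite3.h>
    #include <stdio.h>

    int main(void) {
        sqlite3 *db;

        /* Omit SQLite's internal mutexes entirely; only valid when the
           whole process uses SQLite from a single thread. */
        sqlite3_config(SQLITE_CONFIG_SINGLETHREAD);

        /* Or request "no connection mutex" on a per-connection basis. */
        if (sqlite3_open_v2("test.db", &db,
                SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE | SQLITE_OPEN_NOMUTEX,
                NULL) != SQLITE_OK) {
            fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
            return 1;
        }

        /* Reduce file-locking churn by taking the lock once and keeping it. */
        sqlite3_exec(db, "PRAGMA locking_mode=EXCLUSIVE;", NULL, NULL, NULL);

        sqlite3_close(db);
        return 0;
    }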

However, as soon as you start talking about disk storage, you talk about the possibility of multiple readers/writers, in which case you need to talk about locking and the overhead that goes with it, else you dismiss the ACID guarantees SQLite provides.

In other words, the speed hit is ACID versus non-ACID.

09:14 Reply: High CPU RtlpEnterCriticalSectionContended when using SQLite (artifact: 575622f38d user: wyoung)

This is a 30% increase relative to what?

What did you switch from?

2021-12-14
11:11 Reply: FYI: binary dump (artifact: 6723710fae user: wyoung)

The point of this software is to present an exact binary dump of the on-disk SQLite data. Why would it be translated in any way?

2021-12-13
19:14 Reply: FYI: binary dump (artifact: b2805e7c76 user: wyoung)

why BE?

Surprise! SQLite is big-endian.

I assume it is so because it was born when Sun SPARC machines were the pinnacle of "serious Unix," which in turn caused "network byte order" to pass as a whitewash for "SPARC byte order".
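
To make that concrete: multi-byte integers in the SQLite file format are stored big-endian, so reading, say, the page size out of a database header means decoding bytes 16-17 as a big-endian value (per the file format docs, a stored value of 1 means 65536). A rough sketch:

    #include <stdio.h>

    int main(int argc, char **argv) {
        if (argc < 2) { fprintf(stderr, "usage: %s db-file\n", argv[0]); return 1; }
        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror("fopen"); return 1; }

        unsigned char hdr[18];
        if (fread(hdr, 1, sizeof hdr, f) != sizeof hdr) { fclose(f); return 1; }
        fclose(f);

        /* Bytes 16-17 of the header: page size, big-endian. */
        unsigned page = (hdr[16] << 8) | hdr[17];
        if (page == 1) page = 65536;
        printf("page size: %u\n", page);
        return 0;
    }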

2021-12-12
20:46 Reply: Database file name encoding -- is it really UTF-8? (artifact: 097f34f202 user: wyoung)

What proof do you have that the locale affects the file names stored on disk?

Try this for me:

$ touch hello
$ ls hello | od -c

Then do the same in your native language. (e.g. "hola" for Spanish.)

Please also post the output of the locale command on your system.

2021-12-10
09:46 Reply: Sqlite3 can happend deadlock? (artifact: fe9f4b1db7 user: wyoung)

Or his hardware is as old as his SQLite version implies, with disk failure causing I/O delays.

2021-12-07
00:44 Reply: Venting about inconvenient limitations, feel free to ignore (artifact: c6d1df189e user: wyoung)

Doesn't that problem go away if you use two SQLite connections, one to the temp DB, one to the main DB?

2021-12-06
11:53 Reply: Failing builds on Apple M1 due to CPU arch and GNU Coreutils (artifact: 73165e901d user: wyoung)

Yes, those files do need to be updated from time to time…

…but why are you using GNU/Homebrew tools on macOS in preference to the platform tools? It doesn't fail when you use Clang and /usr/bin stuff, so why go out of your way to invite trouble here?

2021-12-03
20:53 Reply: Benchmarking SQLite on Apple's M1 ? (artifact: 9a4ed48741 user: wyoung)

slower than a 2.2 GHz Ryzen 9 5950X

…which costs about as much as an entry-level M1 Mac Mini for just the CPU chip.

The AMD box is probably drawing at least 3x the power to achieve that, too.

2021-12-01
20:56 Reply: Cursor Keys broken on CLI ? (SQLite version 3.37.0 2021-11-24 21:16:32) (artifact: 96859161d8 user: wyoung)

It means whoever built that binary didn't have the development files installed for libreadline, libedit, or similar.

This is a good reason to build your own binaries, which isn't difficult.

2021-11-05
19:03 Reply: App file format, large image texture blobs? (artifact: e2b8d0cb72 user: wyoung)

In the meantime, it’s easy to change the constants in the test program, build it for your target, and run it there.

2021-11-04
16:18 Reply: Question about memory management (artifact: 442166cee9 user: wyoung)

If importing your CSV via the CLI works, then the problem is in the Clarion code or its SQLite adapter.

Try one and see.

2021-11-02
03:22 Reply: sqlar : how to remove files (artifact: af51dfd9aa user: wyoung)

Making glob an option has some merit, but I think following the lead of the standalone sqlar makes more sense: insert and update rely on the shell (or CRT library) to expand the command line, whereas commands that deal with file names inside the archive such as "delete" and "list" use internal globbing because the CRT/shell can't see those names.

I think this follows the principle of least surprise.

2021-11-01
18:39 Reply: sqlar : how to remove files (artifact: 725daad511 user: wyoung)

Thank you!

I was looking into the sqlar commit and noticed that it adds a use of GLOB for this case and was worried that it created a potential double-expansion problem until I realized why it does that: it only does it for -e, -l and -d, all cases where you're not dealing with files the shell can see.

Maybe this feature should use GLOB as well, for the same reason? How do I delete s* from a sqlar file otherwise, short of listing the archive and doing my own glob expansion, leading to problem #2 above?

18:12 Reply: sqlar : how to remove files (artifact: e90226f812 user: wyoung)

You also get it from a bare sqlar command or anything else that doesn't get past the command-line argument parser, such as sqlar -help. I only chased it down to the C level to show that the feature wasn't added since this thread was started.

Contrast Larry's contribution....

17:40 Reply: sqlar : how to remove files (artifact: 8e47b157eb user: wyoung)

Is this by design

It's by mis-design, but the error is in your OS of choice, not in SQLite/SQLar.

You don't tell us outright that you're using Windows, but I can tell from your use of "*.*" that you are. Unix people tend not to use that wildcard because it means something different from a simple *, which is shorter and usually more correct besides.

This in turn tells me that you're using an OS where they decided that it's up to the called program to expand wildcards instead of leaving that to the shell. This has a number of bad side effects:

  1. Most immediately relevant to you, it means the expansion of *.* doesn't happen until the program is up and running and the sqlar DB has been created, so expanding that wildcard will include the just-created file. On Unix type systems, the equivalent expansion happens before the program's even run, so your symptom doesn't occur there.

  2. A lot of functionally duplicate code that doesn't behave the same way. Even in the case where you assume the arguments are expanded by the underlying C runtime library, a program's command line expansion may function differently depending on whether it was built with VC++, GCC, Clang... In the case of GCC, it may differ between native GCC, MinGW GCC, Cygwin GCC... Then there's the fact that programmers being programmers, some aren't satisfied with the default wildcard expansion and so write their own, leading to still more different ways wildcards are interpreted.

    You may guess that the plethora of Unix shells means the same problem happens on the other OSes, but it doesn't for two reasons. One, a given user tends to use just one shell, so they can get used to how their shell expands arguments in all cases. Two, POSIX sets minimal rules that all POSIX-type shells must obey, so they can't drift too wildly. You still get some oddities like whether wildcard expansion happens in the middle of a word or not (e.g. zsh vs bash) but goto "One".

  3. Programs not written in a language that has runtime support for glob expansion on program initialization (e.g. Windows batch files) have to do tricks to expand their args. It doesn't matter what language you write your program in on a Unix box: command line expansion is the shell's problem, so it always happens the same way, modulo the quibbles in point #2.

All of which is why I use a POSIX shell on Windows whenever possible. I get a consistent experience not just relative to macOS, Linux, BSD, etc.; it also means I get consistent behavior from one program to the next. The equivalent command

  $ sqlite3 foo.sqlar -Ac *

doesn't add foo.sqlar to the archive because it doesn't exist yet at command line expansion time.
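
If you want to see the difference directly, a throwaway argument-echoing program (call it showargs; the name is made up) does the trick: from a POSIX shell, showargs * receives one argument per matching file name because the shell expanded the wildcard before the program ever ran, while from cmd.exe it receives the single literal argument *.

    #include <stdio.h>

    int main(int argc, char **argv) {
        /* Print each argument exactly as the program received it. */
        for (int i = 1; i < argc; i++)
            printf("argv[%d] = %s\n", i, argv[i]);
        return 0;
    }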

there does not seem to be a command to remove a file from the archive.

That's true of the version built into the sqlite3 shell, but not of the standalone sqlar utility. It got that feature in 2016.

Perhaps that feature should be backported.

2021-10-27
17:33 Reply: download checksum doesn't match (artifact: b4b1a030e1 user: wyoung)

The default mode of sha3sum is 224-bit, but the download page uses 256-bit sums.

2021-10-01
18:38 Reply: Expired certificate (artifact: c3148b526e user: wyoung)

I saw the same thing with Apple Mail on my iMac yesterday. All the browsers were fine with HTTPS on the same server, including Safari, but IMAPS gave bogus warnings about the R3 intermediate cert. Same cert as for HTTPS, but it was being rejected.

I did reissue the cert and restarted the server, but it seemingly didn’t clear up until about an hour of fighting later. There seems to be a cache somewhere, because I can’t tie my last measure before it started working to any reason it should have done the trick.

2021-09-29
18:03 Reply: Multithread processing, storing in memory and writing to database (artifact: de801fde4f user: wyoung)

RRDTool matches pretty well with those criteria, and was designed pretty much exactly for the OP's sort of problem.

17:22 Reply: Multithread processing, storing in memory and writing to database (artifact: 2749791d5f user: wyoung)

"Optimal?" Inherently not, else we would all be using relational databases for everything.

Aside from your concurrency problems, one of the characteristics of B-tree type storage is that when a bucket spills over, the tree has to be rebalanced, which takes time. What this means in terms of your problem is that insert time varies depending on the state of the B-tree, which means you can't predict the overhead of the insert while other data continues arriving in real time. The only way to avoid dropping data or queueing it up for batch inserts (and then hoping you don't spill again) is to overprovision the hardware so much that even the worst case spill occurs in the time slices you have available.

So yeah, your life sucks because you keep aiming the foot-gun, pulling the trigger, and then wondering why it hurts so much each time. Stop it!

16:17 Reply: Multithread processing, storing in memory and writing to database (artifact: 27a5669867 user: wyoung)

I think you should stop trying to pound that nail in with the butt of your screwdriver. Use a time-series database for this. It's what they're for.

2021-09-28
20:23 Reply: Reset database (artifact: ec975e32c8 user: wyoung)

its size is 0.

Opening an existing read-only database with data in it — implied by your use of the "reset" language — will not drop the database file to 0 size. Only opening a database in a writable directory will do that.

If that's all you want, then I don't see what you're looking for beyond

   unlink("mydb.db");
   sqlite3_open("mydb.db", &dbhandle);

There's your DROP DATABASE.

I am not sure whether VACUUM locks the database.

If only there were a way to be sure... (Hint: search for the word "lock" on that page.)

side effects that I haven't thought of

Calling truncate(2) on someone else's file is a great way to cause file corruption when another process tries to write into its still perfectly-good file handle.

At least with removing the old file and creating a new one, POSIX file semantics permit the old process to hang onto its doomed instance of the old file name until it closes it. Only once all the open FDs are closed will the file actually disappear from the filesystem.
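
A small C illustration of that POSIX rule (file name made up):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("olddb.db", O_RDWR | O_CREAT, 0644);
        if (fd < 0) { perror("open"); return 1; }

        /* The name vanishes from the directory immediately... */
        unlink("olddb.db");

        /* ...but this process can keep using its still-open descriptor. */
        write(fd, "still here", 10);

        /* The storage is only reclaimed once the last open FD is closed. */
        close(fd);
        return 0;
    }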

But beware: if you're using WAL with this scheme, they'll fight over the SHM and WAL file names, again causing corruption.

Once again, I think you need to provide more detail about the actual use case instead of prescribing solutions ahead of knowledge.

19:37 Reply: Reset database (artifact: caeec53671 user: wyoung)

If you only have read access to the DB, how could you reset its state? The whole point of a DBMS like SQLite is that it presents access to the current transaction state of the DB. Rolling it back to a prior state is a transaction (or cancellation of all prior transactions) of its own, so it inherently implies write access.

I think you've got an XY problem here. Instead of telling us what the problem's solution should look like, tell us what problem you're actually trying to solve, which led you to believe SQLite should have a way to do what you're asking.

If you open a read-only database, and it causes this "failure" of yours, why is rolling back to a fresh new database the right behavior, especially given that there is explicit support for opening the database read-only?

19:24 Reply: Can I detect if the disk supports WAL mode? (artifact: a57302c81b user: wyoung)

I guess mmapped files only work on the same OS.

The whole point of mmap(2) is to map a file into the unified virtual memory space of the machine. QEMU provides a private virtual memory space for its VM, so there is no way for mmap to work short of distributed memory schemes, which are super-slow and fiddly besides, which is why no current OS bothers to provide such things. It was a thing back in the days of "cluster computing", but we don't architect systems that way any more, because we learned it sucks.

I'd be surprised if you could mmap between two Docker containers even on the same host, with the same OS, shared local file store, matching CPU type, etc. If it is possible, it should only be so through careful configuration, but I couldn't find any docs online about doing so.

If you have two containers today, can I assume you intend to have many containers later? If so, then one of the distributed SQLite variants may be more what you need. It'll solve this problem in a much better fashion. This is the modern answer to cluster computing.

17:28 Reply: Can I detect if the disk supports WAL mode? (artifact: aac760412a user: wyoung)

two OS's do not know about each others locks

They probably can't do shared memory across the boundary, on purpose, else what's the point of containers? Cross-process memory access is the antithesis of containerization.

flawlessly

In a one-line C call, no.

There are methods for detecting that one is running inside a Docker container, which may suffice, indirectly.
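
One such method, for what it's worth, is checking for the /.dockerenv marker file Docker creates inside containers. It's only a heuristic (other container runtimes don't necessarily create it), but it's cheap:

    #include <stdio.h>
    #include <unistd.h>

    /* Heuristic only: a more thorough check might also inspect /proc/1/cgroup. */
    static int probably_in_docker(void) {
        return access("/.dockerenv", F_OK) == 0;
    }

    int main(void) {
        printf("in docker: %s\n", probably_in_docker() ? "yes" : "no");
        return 0;
    }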

Short of that, the best I've got is to run all of the WAL tests. Find the one(s) that flag the problems, dig into the tests, and find out which failure modes are being triggered. That may allow you to distill the test to something small and cheap enough to run on application init.

It'll be brittle, though, detecting only your current failure case.

2021-09-24
12:35 Reply: cannot start a transaction within a transaction using sqlite shell (artifact: 8d1eef2ecf user: wyoung)

You can ask wsl --status which version is the default on your system.

WSL1 is still in use on many systems from legacy installations, but also because it works inside VM systems that don't support double-virtualization, as with ARM Windows on Apple M1.

One of many differences between the two is that WSL1 uses a POSIX gloss on NTFS as the filesystem, whereas WSL2 uses an actual Linux kernel with regular Linux filesystems. This naturally has a whole laundry list of implications for SQLite's locking and file I/O semantics.

03:05 Reply: cannot start a transaction within a transaction using sqlite shell (artifact: 23b691d53d user: wyoung)

WSL2 works hugely differently from WSL1, correcting a large number of inherent flaws in WSL1. WSL2 still isn’t perfect, but being a lightweight Linux VM rather than an NT “personality,” I would class WSL2 as useful for some production tasks.

Can we please be clear about which version we’re talking about?

2021-09-20
01:42 Reply: Compiling FILEIO.C (artifact: 6a3c334684 user: wyoung)

That file is in src/, the same place you got the previous two files from.

Alternately, from an unpacked copy of the SQLite source tree, this might work:

 C:\PATH\TO\SQLITE\SOURCE> cl ext/misc/fileio.c -Isrc -link -dll -out:fileio.dll

2021-09-19
16:14 Reply: Database anomaly (artifact: acb1529ac8 user: wyoung)

I’ve seen this when mixing UTF-8 and UTF-16 improperly. Cygwin binaries with cmd.exe or native binaries with MinTTY, etc.

2021-09-17
06:01 Reply: Javascript enforcement (artifact: 9e9e41dddb user: wyoung)

These oft-repeated objections are answered in the javascript.md doc up-thread.

03:12 Edit reply: Javascript enforcement (artifact: 17083ba9ab user: wyoung)

useless Javascript.

Other posts in this very thread explain why this particular bit of JavaScript is useful. The document Stephan linked you to explains why all of the other bits of JavaScript in Fossil — the DVCS backing SQLite and this very forum — are useful, too. Moreover, it catalogs the pains we've taken to reduce the use of it and to provide sensible fallbacks where practical.

You're welcome to disagree with individual elements of this on a technical basis, but to dismiss an entire technology the way you've done here is, frankly, unhinged from reality.

use plain links

Are you paying the bandwidth bill for robots to repeatedly download multimegabyte blobs as fast as possible?

The /src links on the page one click away via the links at the bottom of that page are similarly protected since they can cost the public SQLite servers arbitrary CPU time, not just bandwidth. If you let robots traverse the /timeline and /info trees on a Fossil repository without restriction, they'll repeatedly download the entire history of the project, with each version downloaded requiring an expensive tar+gz or zip operation.

Fossil has a cache to cope with this to some extent, but with so many versions in these projects' histories now, any reasonably-sized cache would be busted by allowing robots to run wild through the hyperlink tree. The cache would churn without end.

nuke that sucker from orbit.

Easier said than done, particularly when you're not even on the list of people potentially tasked with doing the doing.

Javascript is evil

Evil is in actions, not in things. Nouns cannot be evil; only particular uses of those nouns can be evil.

Javascript is the root of all evil.

All evil began in 1995?

Badly written javascript is responsible for 99.999% of all safety and security issues

That's not what the data show. The language topping that list is the one SQLite and Fossil are written in, and it's implicated in about four times the number of recorded incidents as JavaScript.

EDIT: Should you not therefore place about four times as much trust in the safety of JavaScript as in C?

And even C isn't responsible for more than half. It can only claim a plurality among the many inherently-dangerous programming languages, several of which are quite popular, preventing any one language from taking a majority share of the blame.

there has never existed goodly-written-javascript in the entire history of the universe

You've got at least three of the authors of Fossil's JavaScript here in the thread. Your claim is that none of us have written any good JavaScript, either?

it was barfed-up by a moron.

I'm going to be charitable and assume you're using that term in the obsolete technical sense. You are objectively wrong on this point as well.

If you're allowing your technical definitions — and who better than one so pedantic as yourself to insist on precise use of technical words? — to expand to the point that one so objectively successful as Brendan Eich qualifies as a moron from an evaluative psychology standpoint, virtually everyone on the planet is also a moron. To take a position disregarding the value of most of the planet's population is to disconnect from society.

And by the tone and content of this post, you've also disconnected from polite society even among those you consider non-morons.

02:54 Edit reply: Javascript enforcement (artifact: 2a0b7ba20c user: wyoung)

useless Javascript.

Other posts in this very thread explain why this particular bit of JavaScript is useful. The document Stephan linked you to explains why all of the other bits of JavaScript in Fossil — the DVCS backing SQLite and this very forum — are useful, too. Moreover, it catalogs the pains we've taken to reduce the use of it and to provide sensible fallbacks where practical.

You're welcome to disagree with individual elements of this on a technical basis, but to dismiss an entire technology the way you've done here is, frankly, unhinged from reality.

use plain links

Are you paying the bandwidth bill for robots to repeatedly download multimegabyte blobs as fast as possible?

The /src links on the page one click away via the links at the bottom of that page are similarly protected since they can cost the public SQLite servers arbitrary CPU time, not just bandwidth. If you let robots traverse the /timeline and /info trees on a Fossil repository without restriction, they'll repeatedly download the entire history of the project, with each version downloaded requiring an expensive tar+gz or zip operation.

Fossil has a cache to cope with this to some extent, but with so many versions in these projects' histories now, any reasonably-sized cache would be busted by allowing robots to run wild through the hyperlink tree. The cache would churn without end.

nuke that sucker from orbit.

Easier said than done, particularly when you're not even on the list of people potentially tasked with doing the doing.

Javascript is evil

Evil is in actions, not in things. Nouns cannot be evil; only particular uses of those nouns can be evil.

Javascript is the root of all evil.

All evil began in 1995?

Badly written javascript is responsible for 99.999% of all safety and security issues

That's not what the data show. The language topping that list is the one SQLite and Fossil are written in, by about four times the number of recorded incidents.

And even C isn't responsible for more than half. It holds a plurality only because there are many inherently-dangerous programming languages, several of which are quite popular.

there has never existed goodly-written-javascript in the entire history of the universe

You've got at least three of the authors of Fossil's JavaScript here in the thread. Your claim is that none of us have written any good JavaScript, either?

it was barfed-up by a moron.

I'm going to be charitable and assume you're using that term in the obsolete technical sense. You are objectively wrong on this point as well.

If you're allowing your technical definitions — and who better than one so pedantic as yourself to insist on precise use of technical words? — to expand to the point that one so objectively successful as Brendan Eich qualifies as a moron from an evaluative psychology standpoint, virtually everyone on the planet is also a moron. To take a position disregarding the value of most of the planet's population is to disconnect from society.

And by the tone and content of this post, you've also disconnected from polite society even among those you consider non-morons.

00:01 Reply: Javascript enforcement (artifact: d7684d5077 user: wyoung)

useless Javascript.

Other posts in this very thread explain why this particular bit of JavaScript is useful. The document Stephan linked you to explains why all of the other bits of JavaScript in Fossil — the DVCS backing SQLite and this very forum — are useful, too. Moreover, it catalogs the pains we've taken to reduce the use of it and to provide sensible fallbacks where practical.

You're welcome to disagree with individual elements of this on a technical basis, but to dismiss an entire technology the way you've done here is, frankly, unhinged from reality.

use plain links

Are you paying the bandwidth bill for robots to repeatedly download multimegabyte blobs as fast as possible?

The /src links on the page one click away via the links at the bottom of that page are similarly protected since they can cost the public SQLite servers arbitrary CPU time, not just bandwidth. If you let robots traverse the /timeline and /info trees on a Fossil repository without restriction, they'll repeatedly download the entire history of the project, with each version downloaded requiring an expensive tar or zip operation.

Fossil has a cache to cope with this to some extent, but with so many versions in these projects' histories now, I can't imagine any reasonably-sized cache that wouldn't be busted by allowing robots to run wild through the hyperlink tree. The cache would churn without end.

nuke that sucker from orbit.

Easier said than done, particularly when you're not even on the list of people potentially tasked with doing the doing.

Javascript is evil

Evil is in actions, not in things. Nouns cannot be evil; only particular uses of those nouns can be evil.

Javascript is the root of all evil.

All evil began in 1995?

Badly written javascript is responsible for 99.999% of all safety and security issues

That's not what the data show. The language topping that list is the one SQLite and Fossil are written in, by about four times the number of recorded incidents.

And even C isn't responsible for more than half. It holds a plurality only because there are many inherently-dangerous programming languages, several of which are quite popular.

there has never existed goodly-written-javascript in the entire history of the universe

You've got at least three of the authors of Fossil's JavaScript here in the thread. Your claim is that none of us have written any good JavaScript, either?

it was barfed-up by a moron.

I'm going to be charitable and assume you're using that term in the obsolete technical sense. If so, you are objectively wrong on this point as well.

If you're allowing your technical definitions — and who better than one so pedantic as yourself to insist on precise use of technical words? — to expand to the point that one so objectively successful as Brendan Eich qualifies as a moron from an evaluative psychology standpoint, virtually everyone on the planet is also a moron. To take a position disregarding the value of most of the planet's population is to disconnect from society.

And by the tone and content of this post, you've also disconnected from polite society even among those you consider non-morons.

2021-09-14
17:33 Edit reply: segmentation fault when closing a database from within a transaction (artifact: 525878980e user: wyoung)

I'd expect transactions left open on DB conn close to be rolled back, having not received the "COMMIT" call, which can now never come by the very fact of the DB conn being closed.

If closing the DB conn auto-commits all still-open transactions under the Tcl SQLite binding, I'd call that a bug, not a feature. The semantic meaning of transactions is that those not explicitly committed get rolled back, but once again, that can't happen with the conn closed.
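
For comparison, the C API does behave that way: closing a connection with a transaction still open rolls the transaction back. A minimal sketch (file and table names made up):

    #include <sqlite3.h>
    #include <stdio.h>

    int main(void) {
        sqlite3 *db;

        /* First connection: open a transaction, insert, close WITHOUT committing. */
        sqlite3_open("rollback-demo.db", &db);
        sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS t(x);", NULL, NULL, NULL);
        sqlite3_exec(db, "BEGIN; INSERT INTO t VALUES(42);", NULL, NULL, NULL);
        sqlite3_close(db);   /* the pending transaction is rolled back here */

        /* Second connection: the uncommitted row should not be visible. */
        sqlite3_open("rollback-demo.db", &db);
        sqlite3_stmt *stmt;
        sqlite3_prepare_v2(db, "SELECT count(*) FROM t;", -1, &stmt, NULL);
        if (sqlite3_step(stmt) == SQLITE_ROW)
            printf("rows in t: %d\n", sqlite3_column_int(stmt, 0));  /* expect 0 */
        sqlite3_finalize(stmt);
        sqlite3_close(db);
        return 0;
    }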

16:59 Reply: segmentation fault when closing a database from within a transaction (artifact: 7b66e32abb user: wyoung)

I'd expect transactions left open on close to be rolled back, having not received the "COMMIT" call. If that doesn't happen in this Tcl case, I'd call that a bug, most likely in the Tcl language binding.

14:15 Reply: segmentation fault when closing a database from within a transaction (artifact: e672c5746c user: wyoung)

Okay, yes, good, fix the crash.

...but what did you expect it to do? This code says "close the database and then finalize the transaction." You can't finalize something via a closed conn.
