(1) By anonymous on 2021-09-16 14:53:24 [link] [source]
Honestly? There's absolutely no need for it, and it gives the site a pretty bad smell, as though it were doing something malicious or spying on its visitors...
(2) By Larry Brasfield (larrybr) on 2021-09-16 15:13:05 in reply to 1 [link] [source]
I see nothing in that HTML which warrants suggestions of odor, malice, or snooping.
How did you get to that page?
(4) By RandomCoder on 2021-09-16 16:09:35 in reply to 2 [link] [source]
(3) By Stephan Beal (stephan) on 2021-09-16 15:13:46 in reply to 1 [link] [source]
... it gives the site a pretty bad smell, as though it were doing something malicious or spying on its visitors...
It's an anti-bot measure attempting to stop bots from sucking up the site's bandwidth and CPU. (Nowadays most crawlers can run JS, so it's less effective than it used to be.)
Please see this document about the topic in sqlite's sister project, fossil:
(5.1) Originally by Keith Medcalf (kmedcalf) with edits by Dan Kennedy (dan) on 2021-09-16 19:43:03 from 5.0 in reply to 3 [link] [source]
The correct way to achieve the objective is to use plain links and a robots.txt file in the root. If some crawler disregards the robots.txt file, you nuke that sucker from orbit.
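As a sketch of that suggestion, a robots.txt file in the server root could wall off the expensive endpoints while leaving ordinary pages open. The paths below are illustrative only, not the actual layout of the SQLite or Fossil servers:

```
# Illustrative only: block crawlers from the CPU- and bandwidth-heavy pages.
User-agent: *
Disallow: /src/
Disallow: /info/
Disallow: /zip/
Disallow: /tarball/
```

Of course, this only restrains crawlers that choose to honor it, which is the crux of the disagreement below.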
(8.2) By Warren Young (wyoung) on 2021-09-17 03:12:04 edited from 8.1 in reply to 5.1 [link] [source]
You're welcome to disagree with individual elements of this on a technical basis, but to dismiss an entire technology the way you've done here is, frankly, unhinged from reality.
use plain links
Are you paying the bandwidth bill for robots to repeatedly download multimegabyte blobs as fast as possible?
The /src links one click away, via the links at the bottom of that page, are similarly protected, since they can cost the public SQLite servers arbitrary CPU time, not just bandwidth. If you let robots traverse the /info trees on a Fossil repository without restriction, they'll repeatedly download the entire history of the project, with each version downloaded requiring an expensive tar+gz or zip operation.
Fossil has a cache to cope with this to some extent, but with so many versions in these projects' histories now, any reasonably-sized cache would be busted by allowing robots to run wild through the hyperlink tree. The cache would churn without end.
nuke that sucker from orbit.
Easier said than done, particularly when you're not even on the list of people potentially tasked with doing the doing.
Evil is in actions, not in things. Nouns cannot be evil; only particular uses of those nouns can be evil.
All evil began in 1995?
And even C isn't responsible for more than half. It can only claim a plurality among the many inherently-dangerous programming languages; several of them are quite popular, which prevents any one language from taking a majority share of the blame.
it was barfed-up by a moron.
I'm going to be charitable and assume you're using that term in the obsolete technical sense. You are objectively wrong on this point as well.
If you're allowing your technical definitions — and who better than one so pedantic as yourself to insist on precise use of technical words? — to expand to the point that one so objectively successful as Brendan Eich qualifies as a moron from an evaluative psychology standpoint, virtually everyone on the planet is also a moron. To take a position disregarding the value of most of the planet's population is to disconnect from society.
And by the tone and content of this post, you've also disconnected from polite society even among those you consider non-morons.
(9) By Scott Robison (casaderobison) on 2021-09-17 00:28:55 in reply to 8.0 [link] [source]
(7.1) By Richard Hipp (drh) on 2021-09-16 20:20:25 edited from 7.0 in reply to 3 [link] [source]
It is still surprisingly effective, as most robots do not simulate mouse movements, and the hyperlinks typically do not appear until after the mouse moves.
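A minimal sketch of that idea (the names `data-href` and `activateLinks` are illustrative here, not Fossil's actual implementation): anchors are emitted without a real href, and a one-shot mousemove handler fills them in, which most crawlers never trigger.

```javascript
// Hypothetical sketch of "reveal links on mouse movement".
// Anchors carry a data-href attribute instead of href; a one-shot
// mousemove listener copies data-href into href for each of them.
function activateLinks(anchors) {
  for (const a of anchors) {
    // Only touch anchors that actually carry a deferred link.
    if (a.dataset && a.dataset.href) a.href = a.dataset.href;
  }
  return anchors;
}

if (typeof document !== "undefined") {
  // Fires at most once, on the first mouse movement in the page.
  document.addEventListener("mousemove", () => {
    activateLinks(document.querySelectorAll("a[data-href]"));
  }, { once: true });
}
```

A keyboard-only user never moves the mouse, which is exactly the accessibility complaint raised further down the thread.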
Automatic robot blocking is useful, because it means I have to spend less time blocking robots and hence have more time available to do actual programming.
(10) By anonymous on 2021-09-17 05:28:29 in reply to 7.1 [link] [source]
That download page includes a description of how to look in the page source for the CSV table if you want to find the URL easily (using the view-source command), so it isn't too bad. However, it would be better to also put that description in a
<noscript> block.
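A hypothetical sketch of such a fallback (this is illustrative markup, not the actual sqlite.org page source):

```
<noscript>
  <p>The download links on this page are built by JavaScript.
     If scripts are disabled, open this page's HTML source with your
     browser's view-source command: the available files are listed
     near the top, inside an HTML comment, as a CSV table with one
     file and its relative URL per line.</p>
</noscript>
```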
(13) By ThanksRyan on 2021-09-17 17:30:11 in reply to 10 [link] [source]
That isn't very good if the user is using the keyboard only.
You can also download from here: https://github.com/sqlite/sqlite/releases
Congratulations. That may mean the page you want to see for SQLite won't load.
(6) By anonymous on 2021-09-16 18:01:44 in reply to 1 [source]
(11) By anonymous on 2021-09-17 05:54:30 in reply to 6 [link] [source]
FWIW, there's a machine-readable table of available versions in the HTML comment of the page source, added at the request of a forum member (and the Fossil repo, of course).
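A sketch of how a script might pull that table out of the page source. The marker string and the exact column layout are assumptions for illustration; check the comment in the actual page source before relying on either.

```javascript
// Hypothetical sketch: find the first HTML comment containing a marker
// string and return its CSV rows as arrays of fields.
function extractCommentCsv(html, marker) {
  const re = /<!--([\s\S]*?)-->/g; // non-greedy: one comment at a time
  let m;
  while ((m = re.exec(html)) !== null) {
    if (m[1].includes(marker)) {
      return m[1]
        .split("\n")
        .map((line) => line.trim())
        .filter((line) => line.includes(",")) // keep only CSV-looking rows
        .map((line) => line.split(","));
    }
  }
  return []; // no matching comment found
}
```

One could feed this the fetched download-page HTML and iterate over the returned rows to locate a particular product's relative URL.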
Yes, and it is good.
With SQLite.org we can at least trust it, given that we trust the library itself to run on our machines.
While that is true, there are other considerations:
If you are using two separate computers, one to download SQLite and another to run it.
If some of the scripts do things that are not wanted (e.g. animation), even though the other scripts are wanted. (This is why web browsers should have the option of script substitution, if the end user wants to substitute some scripts.)
(12) By Warren Young (wyoung) on 2021-09-17 06:01:14 in reply to 11 [link] [source]