SQLite Forum

Feature request: pragma for changing the precision / format of CURRENT_TIMESTAMP to YYYY-MM-DDTHH:MM:SS.SSS

(1) By maassg on 2021-03-06 10:30:45

create table TEST (
  mtime timestamp **default CURRENT_TIMESTAMP**
);

or select CURRENT_TIMESTAMP;

currently always returns a datetime value formatted as "**YYYY-MM-DD HH:MM:SS**"

but **I would like** it like this: "**YYYY-MM-DDTHH:MM:SS.SSS**"

the **workaround "strftime( '%Y-%m-%dT%H:%M:%fZ', 'now' )" is a bit clumsy**:

create table TEST (
  mtime timestamp **default( strftime( '%Y-%m-%dT%H:%M:%fZ', 'now' ) )**
);

or select strftime( '%Y-%m-%dT%H:%M:%fZ', 'now' );

thanks a lot, with best regards

maassg

(2) By Ryan Smith (cuz) on 2021-03-06 12:53:03 in reply to 1

> the workaround "strftime( '%Y-%m-%dT%H:%M:%fZ', 'now' )" is a bit clumsy

And also very correct and the best way to do it. That's why those functions exist.

Another option is making a UDF, which you can have print the current time in any format you love. The obvious downside is that any SQLite connection without your UDF added will not work, so you lose general SQL compatibility, but you have beautiful lean code. Now *that* is worthy of the "clumsy" moniker.
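
For illustration, a minimal sketch of such a UDF against the SQLite C API, written in C++. The function name `iso_now`, the output format, and all details here are my own assumptions, not anything SQLite ships:

#include <sqlite3.h>
#include <chrono>
#include <cstdio>
#include <ctime>

// Hypothetical UDF iso_now(): current UTC time as YYYY-MM-DDTHH:MM:SS.SSSZ.
// Error handling omitted for brevity.
static void iso_now(sqlite3_context *ctx, int, sqlite3_value **) {
  using namespace std::chrono;
  const auto now = system_clock::now();
  const int ms = static_cast<int>(
      duration_cast<milliseconds>(now.time_since_epoch()).count() % 1000);
  std::time_t t = system_clock::to_time_t(now);
  std::tm tm{};
  gmtime_r(&t, &tm);  // POSIX; use gmtime_s(&tm, &t) on Windows
  char buf[32];
  int n = std::snprintf(buf, sizeof buf, "%04d-%02d-%02dT%02d:%02d:%02d.%03dZ",
                        tm.tm_year + 1900, tm.tm_mon + 1, tm.tm_mday,
                        tm.tm_hour, tm.tm_min, tm.tm_sec, ms);
  sqlite3_result_text(ctx, buf, n, SQLITE_TRANSIENT);
}

// Must be registered on every connection that wants to use it:
//   sqlite3_create_function(db, "iso_now", 0, SQLITE_UTF8, nullptr,
//                           iso_now, nullptr, nullptr);

Note that such a function must not be declared SQLITE_DETERMINISTIC, since it returns a different value on each call; and, as said above, any connection that never registers it will fail on SQL that mentions it.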

(3) By maassg on 2021-03-13 14:58:23 in reply to 2

Hello Ryan,
you got me wrong. Let me clarify my argument.

It is not the function **strftime** itself that is clumsy, but the **SQL interface**, when the user has to use it instead of **CURRENT_TIMESTAMP**.

That's why CURRENT_TIMESTAMP exists, and it should give you a timestamp with the maximum possible precision. This is how it works in all major SQL database systems.

best regards

(4) By Stephan (stephancb) on 2021-03-14 08:21:48 in reply to 3

The accuracy and precision of timestamps depend on the hardware and on how the computer is connected to a reference time, irrespective of what the database system prints. For a home-office PC with NTP configured via WAN/internet, the seconds are probably reliable, but not 100 ms; YYYY-MM-DD HH:MM:SS would be adequate. Higher accuracy is possible within a LAN where an accurate NTP server is running (universities, ...), on a mobile phone that has GPS and an OS syncing the system clock to the GPS reference, or with other special arrangements for time synchronization.

Even if a database system prints CURRENT_TIMESTAMP as YYYY-MM-DDTHH:MM:SS.SSS, this is not guaranteed to be a timestamp with millisecond accuracy and actually unlikely to be the case.

(5) By John Dennis (jdennis) on 2021-03-14 09:46:16 in reply to 4

> Even if a database system prints CURRENT_TIMESTAMP as YYYY-MM-DDTHH:MM:SS.SSS, this is not guaranteed to be a timestamp with millisecond accuracy and actually unlikely to be the case.

I understand the inaccuracy, but would the difference between successive CURRENT_TIMESTAMPs be correct, if used to determine the elapsed time to perform a function?

(6) By Larry Brasfield (larrybr) on 2021-03-14 10:28:37 in reply to 5

It would be an act of faith to believe any of the last 2 ms digits, even on an incremental basis, without good evidence that they are unaffected by process scheduling.

Nowadays, when people get serious about measuring elapsed time accurately, they arrange to read the high-resolution timer hardware available in most CPUs.

An extension function that reads that hardware (perhaps using `<chrono>` in C++) would be a better bet than trying to push SQLite's timestamp facility beyond what it is designed to do, and to do so across multiple platforms.
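
A sketch of that idea, under my own naming (`mono_ns` is invented here; nothing like it is built into SQLite):

#include <sqlite3.h>
#include <chrono>

// Hypothetical UDF mono_ns(): nanoseconds from the monotonic steady_clock.
// Only differences between two calls are meaningful; the absolute value
// means nothing, which makes it immune to NTP steps and wall-clock changes.
static void mono_ns(sqlite3_context *ctx, int, sqlite3_value **) {
  using namespace std::chrono;
  const auto ns =
      duration_cast<nanoseconds>(steady_clock::now().time_since_epoch());
  sqlite3_result_int64(ctx, static_cast<sqlite3_int64>(ns.count()));
}

// Register with:
//   sqlite3_create_function(db, "mono_ns", 0, SQLITE_UTF8, nullptr,
//                           mono_ns, nullptr, nullptr);
// Then: SELECT mono_ns();  ...do the work...  SELECT mono_ns();
// and subtract the two results in the application.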

(7) By John Dennis (jdennis) on 2021-03-14 10:54:34 in reply to 6

fair enough...

(8) By Stephan (stephancb) on 2021-03-14 11:19:56 in reply to 5

Yes, the precision of calls to the system clock is typically significantly better than milliseconds, and the "maximum possible precision" would be even more digits than `YYYY-MM-DDTHH:MM:SS.SSS`. Details depend, of course, on the hardware.

A format like YYYY-MM-DDTHH:MM:SS.SSS suggests to me as a physicist that it is an absolute time value, and I expect the number of digits to roughly represent the accuracy of this value. For time differences (where precision rather than accuracy is the issue) it is a rather clumsy format. For example, in SQLite `julianday('now')` would be suitable for differences: `(julianday(t2) - julianday(t1)) * 86400.0` gives the elapsed seconds between two readings. But I guess that this is even less standard than CURRENT_TIMESTAMP across different database systems.

(9) By Larry Brasfield (larrybr) on 2021-03-14 19:10:17 in reply to 8

> A format like YYYY-MM-SSTHH:MM:SS.SSS suggests to me as a physicist that it is an absolute time value, and I expect the number of digits to roughly represent the accuracy of this value.

As an amateur student of human nature, I would expect the number of expressed digits to exceed the "accuracy" only slightly, and only where the expresser is trained in, and/or respects the traditions of, the sciences, in which meaningless precision is eschewed. In this context, there is little reason to have such an expectation. And, as is brought out elsewhere in this thread, "accuracy" can mean either "incremental precision" or "absolute accuracy", at the library user's option.

This "precision" or "accuracy" ambiguity seems like good enough reason for the SQLite library's julianday('now') to express whole seconds. Anybody who wants more precision at least is nudged to ask: Is there really any more meaningfully available?

(10) By David Jones (vman59) on 2021-03-14 19:45:13 in reply to 9

How would you express 1/86400th of a day in binary form, or even 1/24th of a day?
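
For what it's worth, worked through: 86400 = 2^7 * 675, and 675 is odd, so 1/86400 has no finite binary expansion. And since today's Julian day number is about 2.46 million, just above 2^21, an IEEE-754 double resolves only about 2^-31 day there, i.e. 86400/2^31 ≈ 40 microseconds.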

(11) By Tim Streater (Clothears) on 2021-03-14 19:56:20 in reply to 9

Accuracy and precision have different but precise meanings where scientific measurement is concerned. Accuracy should be specified by its +/- limits, such as 3.3 +/- 0.1, that is, with an error figure.

Precision indicates how many significant figures (digits) a value is specified to. So, two in the above example. If the error is in the second digit, there is not much point in specifying greater precision. Thus, 3.31234 +/- 0.111 adds no utility and no extra information to the result.

Personally, if I'm doing timing measurements, I'd ask the OS directly rather than expect SQLite to help.

(12) By Larry Brasfield (larrybr) on 2021-03-14 19:58:04 in reply to 10

I suppose it would be necessary to find a "binary point", similar to the "decimal point" in effect.

My opinion about "whole seconds" is not because they are "whole". It is because there is not much more accuracy to be routinely had; the error is such that, even with [NTP](https://en.wikipedia.org/wiki/Network_Time_Protocol) in use, the first digit after the (seconds) decimal can be wrong sometimes.

(13) By Keith Medcalf (kmedcalf) on 2021-03-14 20:49:38 in reply to 12

Rounding should never be applied to computational intermediates, but rather only to the result.

If the "first difference after the seconds" can be incorrect sometimes, that would also necessarily imply that the first digit of the millennium may be incorrect sometimes.

It is all a matter of perspective.