Storing a number with 2 decimal places
How do I make sure sqlite3 always stores a number as 0.00 instead of 0.0? Thanks for helping.
Save it as a string.
create table t(s); insert into t values(printf('%.2f',0)); select * from t;
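A minimal Python sketch of the same idea (the in-memory database is just for illustration; the table matches the SQL above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table t(s)")  # untyped column; the value below is stored as TEXT
# Format to two places *before* storing, so the text '0.00' round-trips intact.
con.execute("insert into t values(?)", (format(0, ".2f"),))
print(con.execute("select * from t").fetchone()[0])  # 0.00
```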
What is wrong with storing 0.0? One can also get the two decimals when doing the query:
sqlite> insert into t values(5.0/3);
sqlite> select * from t;
0.00
1.66666666666667
sqlite> select printf('%.2f',s) from t;
0.00
1.67
Nothing wrong with it. But the OP asked how to store that way.
(11) By Larry Brasfield (LarryBrasfield) on 2020-06-14 20:13:37 in reply to 10
Yes, that was the OP's request. But we do problem posers no great favor, when they appear to be laboring under misconceptions, by unquestioningly answering only their bare questions when it is apparent they probably need to reframe the issue and reconsider what the problem really is. So "What is wrong with storing 0.0?" is very apropos, whereas the suggestion that '0.00' be stored (as TEXT) has a good chance of mystifying somebody who can confuse 0.00 with 0.0 (which are floating point literals in this context).
Why not store it as a number? Displaying to the user is not the business of the storage layer.
Thanks for your help. I did that, but when displayed it only shows 1 decimal digit. For example, I store 200.00 and when I recall it, it just shows 200.0. I am using Python 3 to store data in an sqlite3 database, and the list is displayed using tkinter ttk. I cross-checked with DB Browser for SQLite, and it also shows 200.0! Appreciate your help.
(6) By Larry Brasfield (LarryBrasfield) on 2020-06-14 12:02:32 in reply to 5
Your immediate difficulty can be cured by using printf(...) to display the value stored in your database. See the link for details. Or, more precisely: Visit the linked web page and read the specified section.
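A hedged sketch of that cure from Python: store the number as-is and apply SQLite's printf() only in the query used for display (table and column names here are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table t(v)")
con.execute("insert into t values(200.0)")  # stored as a REAL

# The raw value comes back as a Python float and renders as 200.0.
print(con.execute("select v from t").fetchone()[0])  # 200.0

# Formatting in the query itself yields the two-place text 200.00.
print(con.execute("select printf('%.2f', v) from t").fetchone()[0])  # 200.00
```

The same result can be had on the Python side with `f"{v:.2f}"`; either way, the formatting happens at display time, not at storage time.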
In the longer term, you need to think about how 2.0 differs from 2.00, which is not a simple matter if you consider representation, manifestation, interpretation, rendering, and "reality" to be independent concepts. Think hard about that, do some study, and you should find these issues much easier to deal with.
If '2.00' is not identical to '2.0' and '2', then you're not handling numbers, you're handling strings. You have two easy options, both of which are computationally sound.

1. Do all your handling of whatever '2.00' is using string variables. That means declaring string variables in whatever programming language you're using, declaring a TEXT column in the SQLite database, etc.

2. Do all your handling of whatever '2.00' is using integers, with the number multiplied by 100. That means declaring integer variables in whatever programming language you're using, declaring an INTEGER column in the SQLite database, and doing the programming to print the number divided by 100 to two places when you want to print it. This is the normal way computers handle amounts of money with a one-hundredth base.
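A minimal Python sketch of option 2, under assumed table and column names (`prices`, `cents`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table prices(cents integer)")      # INTEGER column
con.execute("insert into prices values(?)", (20000,))  # $200.00 stored as 20000 cents

cents = con.execute("select cents from prices").fetchone()[0]
# Divide by 100 only at print time, formatting to two places.
print(f"{cents / 100:.2f}")  # 200.00
```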
+1 for adding the markup for '<code>' to the available Markup commands.
Do all your handling of whatever '2.00' is using integers, with the number multiplied by 100. That means declaring integer variables in whatever programming language you're using, declaring an INTEGER column in the SQLite database, and doing the programming to print the number divided by 100 to two places when you want to print it. This is the normal way computers handle amounts of money with a one-hundredth base.
This is a fallacy and useful only for addition and subtraction. Multiplication and division introduce other problems and need to be carried out using "exactly rounded" arithmetic. That means computing the result "exactly" and then "rounding" to the precision required. This cannot be done with integer arithmetic. You require scaled arithmetic (floating point) in order to be able to do that.
The most commonly available format for "scaled arithmetic" today is floating point, and the most common variant is base 2, usually called Binary Floating Point, though there are other bases available, such as base 10 and base 16. However base 2 has specific advantages over using other bases.
Some languages directly support fixed precision scaled types, for example COBOL, which carries out (with runtime library support) "exactly rounded" arithmetic without the necessity of the programmer knowing what they are doing.
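Python (which the OP is using) offers comparable exactly rounded, base-10 scaled arithmetic in its standard decimal module. A sketch with assumed example values: compute the exact product first, then round to the required precision:

```python
from decimal import Decimal, ROUND_HALF_UP

# Exact base-10 representation: no binary floating-point error.
price = Decimal("19.99")
rate = Decimal("0.0825")   # an 8.25% rate, purely illustrative

# "Exactly rounded" arithmetic: the product is computed exactly...
exact = price * rate
print(exact)  # 1.649175

# ...and only then rounded to the precision required (two places, half-up).
tax = exact.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
print(tax)  # 1.65
```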