SQLite Forum


50 most recent events by user gunter_hick occurring on or after 2021-11-23 15:51:28.

15:41 Reply: WAL with Read Only Access artifact: 806e8e5e90 user: gunter_hick
No. Even readers have to record their "end marks" and these marks need to be in place just in case a writer comes along.
15:39 Reply: WAL with Read Only Access artifact: 9200ca36c6 user: gunter_hick
For SQLite databases on RO media, converting to journal_mode=DELETE is the recommended method.
14:07 Reply: WAL with Read Only Access artifact: 8debe446d2 user: gunter_hick
In WAL mode, a reader needs to record a mark in the WAL file corresponding to the last transaction that committed before the read transaction starts. This is to prevent pages being written back into the database file that were changed in a later write transaction. Obviously, a WAL file is required to do that.

One of the conditions for deleting the WAL file is that all pages must have been written back to the database file. A read only connection is not allowed to do that.

The WAL and shm files need to be in the same directory as the database file to prevent anomalies during creation/deletion.
15:38 Reply: Database locked? artifact: a7ba633f43 user: gunter_hick

You are not handling your transactions properly. Make sure to properly commit (or roll back) any transactions. Maybe your "Python SQLite driver" is second-guessing (wrongly) when to begin and end transactions.

15:33 Reply: Regarding Application Defined Page Cache. artifact: f31868293d user: gunter_hick
There is no speed to be gained here. The ADPC is just a method of organising the memory used to hold pages from a database file. SQLite still has to read from and write to the database file. SQLite does a great job of caching pages with the built in page cache.
16:00 Reply: Dispatch Sqlite Instance with event loop artifact: 9be4804932 user: gunter_hick
If you were to open a connection for each request and each request is completed by returning zero or more rows of data, you would do

1) connect to DB
2) prepare statement
3) bind parameters (if the statement has parameters)
4) step until SQLITE_DONE (or you have all the answers you want)
5) reset statement (if you did not step to completion)
6) finalize statement
7) disconnect

Note that there is no BEGIN and no COMMIT; SQLite implicitly does a BEGIN on the first step and a COMMIT when SQLITE_DONE is returned or the statement is reset. In a request that needs to execute more than one statement while seeing a consistent state, execute BEGIN before the first step and COMMIT after the last reset.
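The seven steps above map roughly onto Python's sqlite3 module, which hides prepare/bind/step/reset/finalize behind execute() and the cursor methods. A minimal sketch (the table, column names and data are illustrative, not from the original post):

```python
import sqlite3

con = sqlite3.connect(":memory:")            # 1) connect to DB
con.execute("CREATE TABLE t(id INTEGER PRIMARY KEY, v TEXT)")
con.execute("INSERT INTO t(v) VALUES ('a'), ('b')")

# 2) prepare + 3) bind happen inside execute()
cur = con.execute("SELECT v FROM t WHERE id >= ?", (1,))
rows = cur.fetchall()                        # 4) step until SQLITE_DONE
cur.close()                                  # 5)+6) reset/finalize
con.close()                                  # 7) disconnect
```

As described above, no explicit BEGIN/COMMIT is needed for a single statement that runs to completion.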

Larry suggested doing steps 2-6 within the work thread. That would allow the "return connection to pool" function to check for and finalize any statements that an erroneous work thread may have left open, which would result in maximum safety.

Preparing commonly used statements during pool initialization would require keeping the statement handles together with the connection handle as an element of the pool. The "return to pool" function would then reset "known statements" (an extra reset does not do any harm) and finalize unknown statements, implicitly ending any transaction.
15:31 Reply: pragma table_info Bug artifact: 2dcab1adf9 user: gunter_hick
Perhaps you should check your schema creation code. You have square brackets around all type names with embedded spaces and none around type names without embedded spaces.

Maybe you would like to explain why you chose to copy type declarations verbatim from your calling programming language over to automatically generated schema definitions. SQLite only rudimentarily parses them to determine affinity.

INTEGER UNSIGNED translates into INTEGER affinity, whereas UNSIGNED WORD(1) translates to NUMERIC affinity. Both may hold negative values. If you really want to limit UNSIGNED WORD to 0..65535 then you should generate an appropriate check constraint too.

BTW: Your schema contains multiple fields whose names differ only by a numeric suffix. This begs the question as to why the schema is not normalized.
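A minimal sketch of the affinity point above: both declared types accept negative values, and only an explicit check constraint enforces the 0..65535 range (the table and column names here are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE demo(
    a INTEGER UNSIGNED,         -- INTEGER affinity (contains "INT")
    b UNSIGNED WORD(1)          -- NUMERIC affinity (no affinity keyword matches)
        CHECK (b BETWEEN 0 AND 65535)
)""")
con.execute("INSERT INTO demo(a, b) VALUES (-5, 65535)")  # -5 is accepted in a
try:
    con.execute("INSERT INTO demo(a, b) VALUES (0, -1)")  # CHECK rejects -1 in b
    check_fired = False
except sqlite3.IntegrityError:
    check_fired = True
```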
13:53 Reply: Dispatch Sqlite Instance with event loop artifact: ef5dc36865 user: gunter_hick
SQLite connections in multithread mode can be safely used by different threads inside the same process, as long as no single connection is used simultaneously from two distinct threads.

There should be no problem as long as you are sure to return connections into the pool in the same state they were taken from the pool. Typically this would be "no open transaction". You could still prepare commonly used statements on each connection when creating that connection and finalize them during app shutdown. Just be careful to always run them to completion in the worker threads.
13:15 Reply: Dispatch Sqlite Instance with event loop artifact: cd716bc5c9 user: gunter_hick
Creating a new connection for each request will probably be slow (SQLite needs to parse the schema when creating a connection).

Having one global connection and running multiple threads against it will cause problems with transactions (no thread can be sure when a transaction begins and when it ends) and with concurrency.

Using a dedicated connection for each thread will probably give the best results.

How to handle thread local data is a question best handled in a forum dedicated to the programming language you are using.
18:18 Reply: select valid lat/lng from columns typed as text artifact: 602cd982f4 user: gunter_hick
It seems you are attempting to solve a presentation layer problem in the data storage layer. I would suggest making lat and long REAL and adding appropriate check constraints. Deal with input and output formats in the application.
13:48 Reply: strings separated by ";" convert to columns artifact: dc8d1433df user: gunter_hick
There is no such function in SQLite.

You could probably devise a CTE to digest a string into parts, or you could implement a table valued function to do so.
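A sketch of the CTE approach, run here from Python for convenience; the input string is illustrative:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Recursive CTE: repeatedly peel off the text before the first ';'.
parts = [row[0] for row in con.execute("""
    WITH RECURSIVE split(part, rest) AS (
        SELECT '', 'abc;de;f' || ';'
        UNION ALL
        SELECT substr(rest, 1, instr(rest, ';') - 1),
               substr(rest, instr(rest, ';') + 1)
        FROM split WHERE rest <> ''
    )
    SELECT part FROM split WHERE part <> ''
""")]
```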
14:46 Reply: How SQLite database can be survived from a power failure during extending the page count ? artifact: ff6f864af3 user: gunter_hick
You can use the truncate or the fallocate commands to preallocate blocks to a file, if this is supported by the file system on the device. Create the file with SQLite first, then adjust the size. SQLite will extend the file if it runs out of space.
14:27 Reply: Serious error emptied my complete database! artifact: 406d0f7bbb user: gunter_hick
AFAICT SQLite Expert has its own support forum at http://www.sqliteexpert.com/forum.

I suspect that whatever happened occurred inside SQLite Expert after you dismissed the error message, causing it to update the schema while deleting the data.
06:53 Reply: How SQLite database can be survived from a power failure during extending the page count ? artifact: 12254a92d0 user: gunter_hick
SQLite will fsync() the directory whenever it creates or deletes a file or the file size changes.

quote 'SQLite does a "flush" or "fsync" operation at key points. SQLite assumes that the flush or fsync will not return until all pending write operations for the file that is being flushed have completed. We are told that the flush and fsync primitives are broken on some versions of Windows and Linux. This is unfortunate. It opens SQLite up to the possibility of database corruption following a power loss in the middle of a commit. However, there is nothing that SQLite can do to test for or remedy the situation. SQLite assumes that the operating system that it is running on works as advertised. If that is not quite the case, well then hopefully you will not lose power too often.' unquote
07:06 Reply: Sqlite3 can happend deadlock? artifact: c529b22711 user: gunter_hick
Version 3.7.17 is extremely old, current is 3.37.

SQLite does not support multiple concurrent writers; they have to take turns. If a writer does not complete its transaction, it will stall all other processes, including readers. Similarly, a reader that does not complete its transaction will stall any writer. Eventually, depending on your setting of the busy timeout, SQLite will return SQLITE_BUSY (indicating that it could not acquire the necessary lock) or SQLITE_DEADLOCK (indicating that SQLite already knows it can never acquire the necessary lock).

In later releases of SQLite, there is WAL mode, which allows multiple readers to proceed even while a single writer is making changes; writers still need to take turns. This may help, but since the details of what you are doing remain unclear, there is no guarantee.

Most probably you are mishandling the library.
06:34 Reply: How to skip intermediate data of DB in file? artifact: b53d78edcd user: gunter_hick
You need to call free() on any memory you allocated by calling malloc/calloc.

You need to call sqlite3_free() on any memory you allocated by calling sqlite3_malloc() et al., and on any memory passed back from SQLite into your control, like the errmsg parameter.

You need to call sqlite3_finalize() when you are finished using a statement prepared by calling sqlite3_prepare().
13:25 Reply: SEE release history artifact: 151cdd3c96 user: gunter_hick
It's a special version of SQLite that encrypts the database file, preventing prying eyes from dumping its contents via the published file format and preventing the "regular" SQLite library from accessing it.
13:22 Reply: free(): corrupted unsorted chunks artifact: 123c371544 user: gunter_hick
valgrind plus its configuration options are prepended to the command line used to run the image to be checked. I'm afraid you will have to modify the makefile recipe for the failing subtarget of target "check" accordingly.

This type of error typically occurs when
- a write operation overruns the allocated space
- a write operation occurs through a dangling pointer, i.e. after the allocated space has been freed
- a write operation occurs through an uninitialized pointer (that resides on the heap and contains whatever garbage was left behind)
- a write operation occurs on an array element outside the allocated space (i.e. an invalid/uninitialized index is used)

Typically, the location where the error is detected is miles away from where the corruption occurred.
12:25 Reply: free(): corrupted unsorted chunks artifact: e377148b51 user: gunter_hick
Have you tried running it under valgrind?
10:03 Reply: How to skip intermediate data of DB in file? artifact: ccab67fc3c user: gunter_hick
There are two issues here:

Transactions: By default, SQLite uses automatic transactions, i.e. each statement is run in its own transaction. This will be slow for initial loads, as data needs to be committed to the database after each insert. Enclosing multiple INSERT statements between BEGIN and COMMIT allows SQLite to perform multiple changes of the same set of pages before writing them to disk. This will run much faster.
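The batching described above can be sketched like this (illustrative table; isolation_level=None puts Python's sqlite3 into autocommit so the explicit BEGIN/COMMIT take effect):

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)  # manual transactions
con.execute("CREATE TABLE samples(n INTEGER)")

con.execute("BEGIN")                 # one transaction around all the INSERTs
for n in range(1000):
    con.execute("INSERT INTO samples(n) VALUES (?)", (n,))
con.execute("COMMIT")                # a single commit instead of 1000

count = con.execute("SELECT count(*) FROM samples").fetchone()[0]
```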

Memory Management: SQLite does require some memory to operate (larger transactions require more memory), and memory that is freed is generally not returned to the OS, because most useful processes expect to use that memory again quite soon - like when processing the next transaction. A well written process will tend to grow its memory to the point where new processing can be run by reuse of freed memory without needing "new" memory.

A well written application using sensibly sized transactions will show an initial increase in memory requirement and then remain stable.

If you are observing a linear increase in memory requirement, then your application is probably allocating memory for each new statement and never releasing it. This is called a memory leak and is 99.99% the fault of your application, as SQLite is very carefully tested to not cause memory leaks.

Unfortunately you are not even sharing which version of SQLite you are using, let alone which API routines you are calling from which language, nor any of the statements. This severely limits the amount of help you will get.
07:50 Reply: Error: stepping, unable to open database file (14) artifact: 673fb0311d user: gunter_hick
SQLITE_CANTOPEN is returned mainly in the OS abstraction layer (meaning something with file/directory access permissions is probably broken) and in the Pager and the WAL code.

Apart from that the only thing that comes to mind with views is the not so recent limitation that views may only reference tables residing in the same file.

E.g. CREATE VIEW a.myview ... AS SELECT ... FROM b.mytable ...

is only valid if database b is attached before database a, making database a "unopenable" alone (or after mytable is dropped).
07:40 Reply: Table-valued function encoding artifact: b49cb310a7 user: gunter_hick
AFAICT a preferred encoding can only be used when creating regular user defined functions. E.g. if there are different implementations for different encodings.
16:12 Reply: In perspective of disk usage, is the sizeof 'a' < INT ? artifact: 6d29f08018 user: gunter_hick
See https://sqlite.org/fileformat.html for a detailed description of row format.

A short text will have 1 byte overhead and does not store the terminator, so 'audio' takes up 6 bytes.

Integers need 1 to 9 bytes depending on the value, so yes, sometimes a large integer value takes up more storage than a short string.

If a column is designed to only take on a member of a set of distinct values, then it is usually a good idea to store them in a dictionary of sorts and refer to them by number.
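A minimal sketch of that dictionary idea (all table and column names are illustrative): store each distinct label once and refer to it by integer id.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE media_kind(id INTEGER PRIMARY KEY, name TEXT UNIQUE);
    CREATE TABLE item(id INTEGER PRIMARY KEY,
                      kind INTEGER REFERENCES media_kind(id));
    INSERT INTO media_kind(name) VALUES ('audio'), ('video');
    -- store the small integer id, not the repeated string
    INSERT INTO item(kind) SELECT id FROM media_kind WHERE name = 'audio';
""")
name = con.execute("""SELECT media_kind.name FROM item
                      JOIN media_kind ON media_kind.id = item.kind""").fetchone()[0]
```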
13:16 Reply: Howto group most similar values artifact: a5871a14e9 user: gunter_hick
My point is that the statement yields a list of "candidate clusters" which can - or even cannot - be combined in different ways to yield valid (i.e. each original point is in exactly one cluster) answers to the original questions.

(1,2,3,4,7),(5),(6) has the cluster with the most members

(1,4,7),(2,3),(5),(6) has the "tightest spread"

(1,2,4,7),(3),(5),(6) has the most common cluster (found twice), but forces id 3 (value 22.16) into its own cluster, which isn't even on the candidate list. Sort of like getting ostracized in ancient Greece - most voters want you out.
11:46 Edit reply: Howto group most similar values artifact: 54ae4fd66e user: gunter_hick
Can you improve on this Ryan?

insert into sampled(value) values (20.1),(21.106),(22.16),(20.4),(25.1),(100.2),(20.2);

select a.id,a.value,count(),avg(b.value),min(b.value), max(b.value),group_concat(b.id) from sampled a join sampled b on (abs(b.value-a.value)/nullif(a.value,0)) < 0.05 group by a.id order by 3 desc;
id          value  count()  avg(b.value)  min(b.value)  max(b.value)  group_concat(b.id)
----------  -----  -------  ------------  ------------  ------------  ------------------
2           21.10  5        20.7932       20.1          22.16         1,2,3,4,7
4           20.4   4        20.4515       20.1          21.106        1,2,4,7
7           20.2   4        20.4515       20.1          21.106        1,2,4,7
1           20.1   3        20.233333333  20.1          20.4          1,4,7
3           22.16  2        21.633        21.106        22.16         2,3
5           25.1   1        25.1          25.1          25.1          5
6           100.2  1        100.2         100.2         100.2         6
11:45 Reply: Howto group most similar values artifact: 44e63c803f user: gunter_hick

Can you improve on this Ryan?

insert into sampled(value) values (20.1),(21.106),(22.16),(20.4),(25.1),(100.2),(20.2);

select a.id,a.value,count(),avg(b.value),min(b.value), max(b.value),group_concat(b.id) from sampled a join sampled b on (abs(b.value-a.value)/nullif(a.value,0)) < 0.05 group by a.id order by 3 desc;

id          value  count()  avg(b.value)  min(b.value)  max(b.value)  group_concat(b.id)
----------  -----  -------  ------------  ------------  ------------  ------------------
2           21.10  5        20.7932       20.1          22.16         1,2,3,4,7
4           20.4   4        20.4515       20.1          21.106        1,2,4,7
7           20.2   4        20.4515       20.1          21.106        1,2,4,7
1           20.1   3        20.233333333  20.1          20.4          1,4,7
3           22.16  2        21.633        21.106        22.16         2,3
5           25.1   1        25.1          25.1          25.1          5
6           100.2  1        100.2         100.2         100.2         6

10:27 Reply: Howto group most similar values artifact: fbf7edf139 user: gunter_hick
The concept of "similar" is ill defined.

asql> with val(x) as (values (20.1) , (21.106), (22.16)) select x*0.95 lower_limit,x*1.0 middle ,x*1.05 upper_limit from val;
lower_limit  middle  upper_limit
-----------  ------  -----------
19.095       20.1    21.105
20.0507      21.106  22.1613
21.052       22.16   23.268

20.1 thinks it should be alone in one group
21.106 thinks they all belong together
22.16 thinks 20.1 should be left out.

Even if changed to mean "the smaller value is at least 95% of the larger value" the relation "similar" is not transitive (a similar b AND b similar c does not imply a similar c).
09:52 Reply: Faster way to insert into WITHOUT ROWID table? artifact: 930f4ff990 user: gunter_hick
ad 1) CREATE TABLE AS SELECT guarantees that the target table is empty. INSERT INTO SELECT FROM is a common pattern for "imported transactions" (e.g. from a csv file in an external format) that need working on to integrate into the schema.

ad 2) Any benefit is limited to target tables that have exactly one index, be it the PRIMARY KEY of a WITHOUT ROWID table, the sole UNIQUE constraint of a ROWID table or the single UNIQUE INDEX on a ROWID table; any additional index will be an unordered load.

ad 3) I don't think INSERT INTO SELECT FROM ORDER BY is a common pattern, because SQL is based on (unordered) sets as opposed to (ordered) permutations, and set operations should be independent of the implementation dependent order of processing. Also, since the question is "initially populating from external source", the source table will have to be a virtual table that knows and reports (via the xBestIndex method) the pre-existing order of the records.
15:44 Reply: Different behavior between tables with and without rowid artifact: 57971331c6 user: gunter_hick
Like I said, don't saw off the branch you are sitting on.

It is always the PRIMARY KEY which is affected, either the ROWID in ROWID tables or the PRIMARY KEY in WITHOUT ROWID tables.

Statements whose query plan includes "use temp BTree for ORDER BY" are not affected, because all the rows have been located before the first one is returned by sqlite3_step().
14:14 Reply: Different behavior between tables with and without rowid artifact: dada1a9289 user: gunter_hick
It would help if you could show the schema and the SQL of the queries to be sure, but what is happening is an old problem.

Modifying the tree you are traversing disrupts the notion of "next record" and may cause you to process some records once, more than once or not at all, depending on the exact changes.

Either reformulate your query to be independent of visitation order (this may require putting the "vector" into a table instead of memory), or store changes and apply them when you have seen all records.
11:22 Reply: In a trigger insert all columns of NEW into another table artifact: 7d8d5c4ce3 user: gunter_hick
Yes, maybe you should take out a patent on your idea.

Being rude under the cover of anonymity is not likely to attract helpful responses.
08:08 Reply: In a trigger insert all columns of NEW into another table artifact: 0765981609 user: gunter_hick
NEW is not a table, it is an alias for the row being inserted.

INSERT INTO backup VALUES (NEW.field,...);

And be careful to modify the structures of both tables and the VALUES list together on every change.

Alternatively, this may work, but with an extra scan:

INSERT INTO backup SELECT * FROM table WHERE table.rowid = NEW.rowid;

Or replace the WHERE clause by a complete comparison of the PRIMARY KEY on a WITHOUT ROWID table.

NOTE: Omitting the WHERE clause will cause ALL of the records to be copied for EACH insert, which is maybe not what is desired.
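The rowid-based variant above can be sketched as a trigger like this (table names are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t(a, b);
    CREATE TABLE backup(a, b);
    -- After each insert into t, copy exactly the new row via its rowid.
    CREATE TRIGGER t_ai AFTER INSERT ON t BEGIN
        INSERT INTO backup SELECT * FROM t WHERE t.rowid = NEW.rowid;
    END;
    INSERT INTO t VALUES (1, 'x'), (2, 'y');
""")
copied = con.execute("SELECT a, b FROM backup ORDER BY a").fetchall()
```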
08:53 Reply: Faster way to insert into WITHOUT ROWID table? artifact: 6512fc5455 user: gunter_hick

OK, from your example code it looked like c and d are computed using the a and b array entries, which potentially may be a "CPU cycle sink".

08:32 Reply: Faster way to insert into WITHOUT ROWID table? artifact: 619db22f02 user: gunter_hick
Can you compute_result() outside the INSERT loop? Maybe you are actually CPU bound computing the values to insert. Or at least run with a tool that gives you percentages of runtime spent in SQLite code and your code (like gcov and gprof on linux).
07:34 Reply: Naminng indices during CREATE TABLE artifact: 40f767ef44 user: gunter_hick
These are not "the same index".

CREATE TABLE ( .. UNIQUE(..) ) is a table constraint that SQLite chooses to implement via an automatically created and named index; the side effect is that there is an index that can be used to satisfy specific GROUP BY/ORDER BY clauses. There is no way to insert a duplicate record into the table.

CREATE UNIQUE INDEX is an index. If the underlying table contains duplicate records, the creation of the index will fail. As long as the index exists, attempts to insert a "duplicate" record will fail, just as with the constraint - but it is an effect of the index, not a constraint on the table.

It is perfectly legal to have a UNIQUE constraint and a UNIQUE INDEX on the same table - indeed if you are regularly sorting by a and b, it may make sense to have UNIQUE(a,b) on the table and UNIQUE INDEX ON (b,a).

Why do you think you need an explicit name for an automatic index?
07:06 Reply: group_concat - ORDER BY parameter please. artifact: 020892b12a user: gunter_hick

See also https://sqlite.org/quirks.html#aggregate_queries_can_contain_non_aggregate_result_columns_that_are_not_in_the_group_by_clause

You are grouping by the empty set, so the result is exactly 1 row, ordered by the arbitrarily* selected value of non-aggregate column x.

It is only if you have exactly one aggregate function, and that function is either min() or max(), that any non-aggregate columns are guaranteed to come from (any one of) the row(s) that contain the max/min value within their group.

*arbitrarily: depending on the visitation order (which may change if the query plan changes) and purposefully left unspecified, to preclude inferring any guarantee of stability
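The single-min()/max() guarantee mentioned above can be sketched as follows (table and data are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE scores(player TEXT, score INTEGER);
    INSERT INTO scores VALUES ('ann', 10), ('bob', 30), ('cho', 20);
""")
# Exactly one aggregate, and it is max(): the bare column "player" is
# guaranteed to come from the row holding the maximum score.
row = con.execute("SELECT player, max(score) FROM scores").fetchone()
```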

07:29 Reply: Sqlite high IO load makes usage impossable artifact: 4e677ff24e user: gunter_hick
Do you see one major drawback of having 60 identical tables with different names now? 

You need to prepare 60 statements just to "SELECT value FROM table WHERE key = ?" and another 60 to "UPDATE table SET value = ? WHERE key = ?" and maybe even keep track of which you have and have not yet prepared. And do this for every connection.

Unless your threads only ever read/write distinct sets of tables, and no two threads ever access the same table. That would be a use case for a separate SQLite db file for each thread, which would eliminate any contention.
07:16 Reply: Sqlite high IO load makes usage impossable artifact: 37039273a3 user: gunter_hick
Sorry for the following remark, but jumping out of an airplane with a defective parachute also works relatively fine. Until you run out of airspace.
07:02 Reply: Sqlite high IO load makes usage impossable artifact: a132dfe5ed user: gunter_hick
If I understand you correctly, your KV pairs are relational in that they form a graph, with the connections being implicit in the values. And sometimes the graph is changed.

In that case it would be imperative to make "reading a graph" a transaction, to avoid some other thread changing the graph while you are in the middle of reading it - which could lead to reading an invalid state.

<read graph>;

It would also be imperative to make "changing a graph" a transaction, lest concurrent changes completely mess up each other's changes

<read graph>;
<compute necessary changes inside app code>
<update graph>;

Now SQLite will handle syncing properly, allowing just one writer in sequence to make changes, and isolating readers from seeing transient invalid states.

It strikes me as a very roundabout way to abstract relationships into KV pairs and then painstakingly rebuild them, instead of having "nodes" and "connections" that can be traversed with CTEs, with SQLite doing the heavy lifting.

But then again, you have chosen to share problems with your chosen implementation instead of sharing the original problem.
15:34 Reply: Sqlite high IO load makes usage impossable artifact: a4319376a8 user: gunter_hick
Attempting to read your code, it seems you are also preparing the statements anew for each execution. This adds substantial overhead, as the SQLite byte code is re-generated and the schema data re-parsed for each statement execution.
15:18 Reply: Sqlite high IO load makes usage impossable artifact: 0307a8cd9b user: gunter_hick
Can you isolate the syscalls that result from just one select?
15:10 Reply: Sqlite high IO load makes usage impossable artifact: b469e04ec5 user: gunter_hick
What is the difference between those 60 tables? Having multiple tables with identical schemata and just differing names is often indicative of an improper database design that abuses metadata as data.

Using a relational database as a key/value store for unrelated data may not be the best design decision.
15:06 Reply: Sqlite high IO load makes usage impossable artifact: d9483e80ca user: gunter_hick
We don't use WAL mode here because we have only virtual tables in the SQLite db file, so I am no expert on this, but here goes anyway:

A read transaction on a WAL mode db file needs to record an "end mark" to prevent a checkpoint from transferring changed pages to the main db file that the reader should not (yet) be seeing. Since you do not mention transactions, I am guessing that what you are seeing is "end marks" constantly being added and removed in the WAL file. This would be reduced if you would read more than one value in one transaction.

You have not mentioned writes. Depending on how frequent and how lengthy writes are, it may be very much faster to use journal mode instead.
12:48 Reply: How to rerank one row in a big set of rows? artifact: 00f1c12bd9 user: gunter_hick
In many use cases, the rank is implied by comparing some kind of score (e.g. Elo points in chess), and updating a record's score only needs to rebalance the index on that score to facilitate returning records ordered by the score (plus any criteria to tie-break equal scores).

What makes it necessary to quickly extract the 4711th record, and only that one?
12:33 Reply: Sqlite high IO load makes usage impossable artifact: df7f4c6d4f user: gunter_hick
What kind of "many small requests in a row"?

You show no schema, no SQL and neglect to mention even the SQLite release.

I can only guess that you have just 1 table and that you are updating 1 value at a time using autocommit, so each update has to do its own commit, which is costly.
15:17 Reply: Sqlite3 doesn't support row level locking? artifact: 663e0c11a1 user: gunter_hick
If it is OK to lose cache contents on thread termination (and it is not necessary to share information between threads), then consider using one connection per thread and the special filenames ":memory:" for a private, in memory database, or "" (empty string) for a private, on disk database. You will have to create the table(s) after opening.

That way, each thread has its own database that magically disappears when the thread terminates. This is much less of a hassle than attempting to keep track of a flea bag full of disk files.
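A sketch of the per-thread ":memory:" pattern (thread names and the cache table are illustrative):

```python
import sqlite3
import threading

results = {}

def worker(name):
    con = sqlite3.connect(":memory:")        # private database for this thread
    con.execute("CREATE TABLE cache(k TEXT, v TEXT)")
    con.execute("INSERT INTO cache VALUES ('key', ?)", (name,))
    results[name] = con.execute("SELECT v FROM cache").fetchone()[0]
    con.close()                              # database disappears here

threads = [threading.Thread(target=worker, args=(n,)) for n in ("t1", "t2")]
for t in threads:
    t.start()
for t in threads:
    t.join()
```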
11:52 Reply: DESC keyword is ignored artifact: e1033dab18 user: gunter_hick

From the docs:

"Each ORDER BY expression is processed as follows:

If the ORDER BY expression is a constant integer K then the expression is considered an alias for the K-th column of the result set (columns are numbered from left to right starting with 1).

If the ORDER BY expression is an identifier that corresponds to the alias of one of the output columns, then the expression is considered an alias for that column.

Otherwise, if the ORDER BY expression is any other expression, it is evaluated and the returned value used to order the output rows..."

and here:

"The SQL standard requires double-quotes around identifiers and single-quotes around string literals. ... But, in an effort to be compatible with MySQL 3.x ... SQLite will also interpret a double-quotes string as string literal if it does not match any valid identifier.

This misfeature means that a misspelled double-quoted identifier will be interpreted as a string literal, rather than generating an error. It also lures developers who are new to the SQL language into the bad habit of using double-quoted string literals when they really need to learn to use the correct single-quoted string literal form."
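The misfeature quoted above can be demonstrated like this (assuming a build with the default double-quoted-string setting; table and data are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(name TEXT)")
con.execute("INSERT INTO t VALUES ('alice')")

# "nmae" matches no identifier, so it silently becomes the string 'nmae'.
literal = con.execute('SELECT "nmae" FROM t').fetchone()[0]
# "name" matches a column, so it is treated as an identifier.
correct = con.execute('SELECT "name" FROM t').fetchone()[0]
```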

11:36 Reply: sqlite3_bind_parameter_count question artifact: 599ba9595a user: gunter_hick

Prepare each statement separately.

07:08 Reply: How to free max heap size on execution of sqlite query using sqlite3_exec? artifact: 01948cae98 user: gunter_hick
Since you seem to have full c++ support, try compiling your application for a linux host. Then run it using valgrind to detect and display memory leaks. Once you have fixed those, see what it does on the target device.

I don't do c++, but from what you have shown your code does seem awfully verbose.

I also suggest you use NOT NULL DEFAULT clauses in your schema to avoid cluttering up your queries with IFNULL calls.
15:51 Reply: How to free max heap size on execution of sqlite query using sqlite3_exec? artifact: b74cacb378 user: gunter_hick

Please do not edit your posts. It makes things much more complicated.
