SQLite Forum

Which performance test tool should I choose?

(1.1) By liziqi on 2021-06-28 13:48:14 edited from 1.0

Hi, I recently wanted to adjust the compile-time options of SQLite on our devices, but I am not sure whether this adjustment will cause performance degradation. I thought about using AndroidBench, but that software has not been updated for a long time, so I want to know: which test tool should I choose? What method do you use to test the performance of SQLite? I would like to learn it and test our SQLite performance before and after the adjustment, for example with the compile-time option SQLITE_OMIT_SHARED_CACHE.
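To make the question concrete: the kind of rebuild I have in mind looks roughly like the sketch below. The flags are illustrative only, and as I understand it many SQLITE_OMIT_* options are only supported when building from the canonical sources rather than from the amalgamation.

    # Build SQLite from canonical sources with one extra
    # compile-time option (illustrative only).
    ./configure
    make OPTS="-DSQLITE_OMIT_SHARED_CACHE"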

(2) By Larry Brasfield (larrybr) on 2021-06-28 21:53:35 in reply to 1.1

... I want to know: which test tool should I choose?

The CLI shell provides a timer, operation of which is controlled by the .timer command. That should be suitable for most users' purposes when evaluating query performance or the effect of compilation options.
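For example (the query and times here are invented for illustration):

    sqlite> .timer on
    sqlite> SELECT count(*) FROM t1;
    100000
    Run Time: real 0.012 user 0.010000 sys 0.002000

Run the same queries against builds compiled with and without the option in question, and compare the reported times.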

... What method do you use to test the performance of SQLite?

Several methods are used, including measuring runtime as the CLI shell does. Some provide a much finer-grained measure than users normally need.

(3) By liziqi on 2021-06-29 04:05:42 in reply to 2

Thanks for your reply.

1. It is stated that our devices provide platform capabilities. If the CLI is used as a testing tool, there may be two problems. First, for a single scenario the CLI is certainly fine; but if multiple scenarios are involved, we need to write the test cases ourselves, and there will be problems with missing test scenarios, which will lead to large deviations in the final test results. Do you have a specific test database or test script? If we use the same test data as yours, the test results will be fairer and more credible. Second, the CLI is not convenient for large-scale automated testing; that may be a problem.

2. Can you provide some details about those fine-grained test measures? Where can I find or study them? In the SQLite source code?

I really want precise data after modifying compile-time options. If we write the test cases ourselves and the results are good, they are not credible, because we would be not only the referees but also the athletes.

(4) By Simon Slavin (slavin) on 2021-06-29 14:11:49 in reply to 3

The key question here: what do you mean by "performance degradation"? Are you interested in execution time or something else?

1. If you need large-scale testing over different scenarios on a platform, recommendations for testing setups will come from people familiar with the platform, not with SQLite. SQLite is very simple: it reads from and writes to storage, and does processing in between. It knows little about the platform it runs on and makes little use of operating-system facilities. It cannot be used to coordinate cross-platform or cross-scenario testing.

My advice is to find people familiar with your platform: hardware, OS, and however your scenarios differ. Ask them what they would use for large-scale automated testing of an executable program. Whatever program you use to call SQLite doesn't matter; it is just an executable and can be tested like any other.

2. How fine do you want your grains? SQLite is a C API. You can use C, including anything in <time.h> or <ctime>. Or you can use command-line facilities to time the entire executable program in whatever shell you're using (e.g. the 'time' command in Linux). Or are you testing for something other than timing?
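For example, the whole-program route needs nothing but the shell; the file names below are placeholders:

    # Time an entire CLI run under Linux. queries.sql and test.db
    # stand in for your own script and database.
    time sqlite3 test.db < queries.sql > /dev/null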

3. Overall, SQLite is storage-bound. The processing algorithms are very well tuned. Most of the time goes into reading from or writing to storage, and is governed by cache sizes, latency, and out-of-order writing on your motherboard or in your storage device, not by anything inside SQLite. You can do a ton of testing with one particular setup, then find your 'fastest compilation options' change because, in production, the program runs on a later model of hard disk with a bigger cache that the manufacturer doesn't advertise.

Changes in SQLite compile-time directives may have a small effect. Changes in your hardware may have far greater effects.

(5) By Larry Brasfield (larrybr) on 2021-06-29 16:36:34 in reply to 3

It is stated that our devices provide platform capabilities.

That means almost nothing to me.

If the CLI is used as a testing tool, there may be two problems. First, for a single scenario the CLI is certainly fine; but if multiple scenarios are involved, we need to write the test cases ourselves.

I do not see a reasonable way to avoid creating your own test cases. If you intend to assess only a composite measure of performance, for tests covering a mix of scenarios chosen by the SQLite developers, I suppose you could just run the speed tests already written, which appear in the <project-root>/test directory. (Look for "speed_trial" in the .test files; it all comes with the source.) You will have to come up with your own weighting of the results.
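For instance, from a source checkout, something like this locates and runs them (speed1.test is one of the scripts you should find there; the exact set varies between versions):

    # List the test scripts containing speed_trial runs, then build
    # the testfixture harness and execute one of them.
    grep -l speed_trial test/*.test
    make testfixture
    ./testfixture test/speed1.test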

When I suggested using the CLI and its timer feature, I thought you would have some scenarios about which performance was a particular concern.

There will be problems with missing test scenarios, which will lead to large deviations in the final test results.

Deviations from what?

Do you have a specific test database or test script?

No.

If we use the same test data as yours, the test results will be fairer and more credible.

Fair? What could be fairer to you (or your customers) than tests tailored to their usage under your preferred build options?

Second, the CLI is not convenient for large-scale automated testing; that may be a problem.

Why is it inconvenient? It is perfectly scriptable and can be made to run arbitrary sets of commands. Your "problem" seems purely hypothetical.
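A minimal sketch, with invented file names:

    # Drive the CLI non-interactively; dot-commands such as .timer
    # work fine in redirected input.
    cat > bench.sql <<'EOF'
    .timer on
    SELECT count(*) FROM t1;
    EOF
    sqlite3 bench.db < bench.sql > results.txt

Wrap that in whatever loop or harness your automation already uses.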

Can you provide some details about those fine-grained test measures? Where can I find or study them? In the SQLite source code?

You should study the speed tests I mention above. They accompany the source code.

I really want precise data after modifying compile-time options. If we write the test cases ourselves and the results are good, they are not credible, because we would be not only the referees but also the athletes.

The results, however obtained, are going to change with some compilation options. You can avoid the implied test selection bias issue by deciding what the important use cases are ahead of your performance assessments.

I suspect that you labor under the misapprehension that the project has some master performance measure against which you can compare results. It does not; performance is observed and managed, but it is multi-faceted. (That is why there are performance-related build options.)