SQLite Forum

Which performance test tool should I choose?
> It is stated that our devices provide platform capabilities.

That means almost nothing to me.

> If the CLI is used as a testing tool, there may be two problems. First, if it is only for a single scenario, the CLI is definitely okay. But if it involves multiple scenarios, we need to write test cases ourselves.

I do not see a reasonable way to avoid creating your own test cases. If you intend to assess only a composite measure of performance over a mix of scenarios chosen by the SQLite developers, you could simply run the speed tests that already ship in the \<project-root\>/test directory. (Look for "speed_trial" in the .test files; it all comes with the source.) You will still have to come up with your own weighting of the results.
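For example, from a source checkout (paths per the standard source tree; `speed1.test` is just one of the matching files), something like this would locate those tests and run one of them:

```sh
# List the test scripts that contain speed_trial invocations.
grep -l speed_trial test/*.test

# Build the Tcl test harness, then run one of the matching
# scripts through it. (The file name here is illustrative;
# pick one from the grep output above.)
make testfixture
./testfixture test/speed1.test
```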

When I suggested using the CLI and its timer feature, I thought you would have some scenarios about which performance was a particular concern.
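That usage is simple enough to sketch (the database and query below are placeholders for whatever scenario actually concerns you):

```sh
sqlite3 my.db <<'EOF'
.timer on
-- Substitute the statements whose performance you care about.
SELECT count(*) FROM mytable WHERE x > 100;
EOF
```

With `.timer on`, the CLI prints real, user, and system time after each statement.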

> There will be problems with missing test scenarios, which will lead to large deviations in the final test results.

Deviations from what?

> Do you have a specific test database or test script?

No.

> If we use the same test data as yours, the test results will be fairer and more credible.

Fair? What could be more fair to you (or your customers) than something tailored to their usage, built with your preferred options?

> Secondly, the CLI is not convenient for large-scale automated testing. It may be a problem.

Why is it inconvenient? It is perfectly scriptable and can be made to run arbitrary sets of commands. Your "problem" seems purely hypothetical.
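As a sketch of what such scripting could look like (the file and directory names are made up for illustration):

```sh
#!/bin/sh
# Run each scenario script against a fresh copy of a baseline
# database, capturing the CLI's timer output for every statement.
for script in scenarios/*.sql; do
  cp baseline.db work.db
  echo "== $script =="
  printf '.timer on\n.read %s\n' "$script" | sqlite3 work.db
done > results.txt
```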

> Can you provide some details about fine-grained test measures? Where can I find or study them? In the SQLite source code?

You should study the speed tests I mention above. They accompany the source code.

> I really want to know precise data after modifying compile-time options. If we write some test cases and the test results are good, the situation is not credible because we are not only the referees but also the athletes.

The results, however obtained, are going to change with some compilation options. You can avoid the implied test-selection bias by deciding what the important use cases are ahead of your performance assessments.
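For instance, assuming you build from the amalgamation, a comparison could be as plain as the following (the options shown are arbitrary examples; substitute the ones you are actually evaluating):

```sh
# Two CLI builds differing only in compile-time options.
gcc -O2 shell.c sqlite3.c -lpthread -ldl -lm -o sqlite3-default
gcc -O2 -DSQLITE_THREADSAFE=0 -DSQLITE_DEFAULT_MEMSTATUS=0 \
    shell.c sqlite3.c -lpthread -ldl -lm -o sqlite3-tuned

# Run the same workload through each build, timed per statement.
for bin in ./sqlite3-default ./sqlite3-tuned; do
  echo "== $bin =="
  cp baseline.db work.db
  printf '.timer on\n.read workload.sql\n' | "$bin" work.db
done
```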

I suspect that you labor under the misapprehension that the project has some master performance measure against which you can compare results. It does not; performance is observed and managed, but it is multi-faceted. (That is why there are performance-related build options.)