Benchmark Demo Evolved

The Benchmark demo application, created a few months ago and introduced here, was missing a few important features. Most importantly, it was not possible to run a set of benchmarks: benchmark mode (--mode benchmark) only ran one test, with features based on a preset target hardware level.

It was already possible to adjust everything using the UI and then run another individual benchmark by clicking the Start Measuring button, but that does not make testing a bunch of things in a short amount of time very easy.

Now we have addressed that shortcoming and made some other improvements.

What is New

The inability to run several tests in a row automatically was not the only thing we changed. Let's take a look at the smaller improvements first.

Adjustable Demo Mode Speed

Earlier, the demo mode (--mode demo) ran at a fixed speed, which some found too slow-paced. Now there is an option to adjust the speed with --speed slow / normal / fast / veryfast. These translate to 30, 60, 120, and 240 second loops, respectively.
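For example, a faster-paced demo loop could be started like this (the executable name below is just an assumption; use whatever binary your build produces):

./BenchmarkDemo --mode demo --speed fast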

Additional Demo Content

A new small flying vessel was added to make the city seem a bit more alive. It runs in its own loop, and the --speed parameter does not affect it.


And then, on to the good stuff.

Automatic Benchmark Sets

To quickly get a rough idea of the capabilities of the target hardware, we have added an automatic mode that runs a set of benchmarks for certain features.

This mode allows testing different model types from the lowest triangle count model to the highest (--automatic model). It is also possible to try different model counts (--automatic modelcount), or combine these two (--automatic model --automatic modelcount).

Similar quick test sets can be run for light types (--automatic light) and light counts (--automatic lightcount), or a combination of those (--automatic light --automatic lightcount).

Finally, the quick test set can be run for texture sizes, including untextured (--automatic texture).
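Putting these together, a few example invocations could look like this (the executable name is an assumption, and we pair the options with --mode benchmark as in a regular benchmark run):

./BenchmarkDemo --mode benchmark --automatic model
./BenchmarkDemo --mode benchmark --automatic light --automatic lightcount
./BenchmarkDemo --mode benchmark --automatic texture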

The other features in these test sets, as well as the maximum light and model counts, model complexities, and texture sizes, are based on the preset (entrylevel, midrange, highend) and target (embedded, desktop) settings of the run.

These quick test sets generate one combined report file by default, but can be forced to generate a separate report file for each individual test (--report multi).
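For instance, the model quick set could be run with a separate report file per test like this (executable name assumed, as above):

./BenchmarkDemo --mode benchmark --automatic model --report multi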

But why stop there? We know people will want to create specific test sets that suit their own target hardware.

Fully Customizable Test Sets

In addition to the quick test sets, we have added the option to create your own test sets that allow controlling all supported features without having to use the UI. These test sets can be used with the --testset /path/to/your/test/script.json command line parameter.

The test sets are in JSON format, and a few have been added to the repository (BenchmarkUI/testscripts) to get you started.
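One of those scripts could then be run like this (the executable and script names here are illustrative; substitute an actual file from BenchmarkUI/testscripts):

./BenchmarkDemo --mode benchmark --testset BenchmarkUI/testscripts/example.json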

This new feature allows you to create test sets that are custom-built for your target hardware, concentrating on whatever matters most in your implementation.

A combined test report for a scripted test set is saved into a single file by default, starting with the script name. As with quick test sets, generating a report for each individual test can be forced by using --report multi.

Next, let's take a look at creating those test scripts.

Creating Your Own Test Sets

Test scripts are JSON files following normal JSON notation. Tests are added to the file as a list and are run one after the other, generating a test report for each individual test.

For example, here is a script that tests enabling and disabling shadows:

[
    {
        "name": "Directional light, shadows OFF",
        "lightTypeIndex": 0,
        "lightInstanceCount": 1,
        "shadowsEnabled": false,
        "msaaQualityIndex": 0
    },
    {
        "name": "Directional light, shadows ON",
        "lightTypeIndex": 0,
        "lightInstanceCount": 1,
        "shadowsEnabled": true,
        "msaaQualityIndex": 0
    }
]

If a value is not set in a test, it remains at the value set in the previous test. The only exception to this are the effects, which are always reset between tests.

Taking advantage of that, the previous example could be shortened like this:

[
    {
        "name": "Directional light, shadows OFF",
        "lightTypeIndex": 0,
        "lightInstanceCount": 1,
        "shadowsEnabled": false,
        "msaaQualityIndex": 0
    },
    {
        "name": "Directional light, shadows ON",
        "shadowsEnabled": true
    }
]

This keeps the values for light type (lightTypeIndex), light count (lightInstanceCount), and anti-aliasing quality (msaaQualityIndex) unchanged from the first test, changing only the shadowsEnabled value in the second test.

Effects are handled a bit differently and are reset between tests. They are given as an array, so multiple effects can be added easily.

For example, if we were to add some effects to the first test like this:

[
    {
        "name": "Directional light, shadows OFF",
        "lightTypeIndex": 0,
        "lightInstanceCount": 1,
        "shadowsEnabled": false,
        "msaaQualityIndex": 0,
        "effects": [
            "Desaturate",
            "Fxaa"
        ]
    },
    {
        "name": "Directional light, shadows ON",
        "shadowsEnabled": true
    }
]

They will not be applied to the second test.
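If the same effects were wanted in the second test as well, they would have to be listed again there, since the reset applies to every test. The second entry would then look like this:

    {
        "name": "Directional light, shadows ON",
        "shadowsEnabled": true,
        "effects": [
            "Desaturate",
            "Fxaa"
        ]
    }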

For a full description of the supported property names and values, please see https://git.qt.io/public-demos/qtquick3d/-/tree/master/BenchmarkDemo/BenchmarkUI/testscripts

