This page describes our specific profiling and benchmarking setup. Readers should be familiar with the documentation listed at the end of the page.
## Our Setup
### General principles
- Be curious. There’s an art to profiling and writing good benchmarks.
- Library benchmarks should focus on specific areas to test specific behaviors. You should know exactly what code to look at when a specific test’s benchmark results change.
- Benchmarks require minified release builds, so keep benchmarks as minimal as possible to reduce build time.
### Project layout
Benchmarks should live in the `benchmarks` directory and follow the same basic pattern as the Battleship benchmark. A separate `lib` module is necessary if we want the same Figma doc to be available in additional apps, such as Validation.
The benchmarks themselves generally live in the `benchmark` module's main sourceSet. The Battleship benchmark files are in `benchmarks/battleship/benchmark/src/main/java/com/android/designcompose/benchmark/battleship/benchmark/`. Open a file in Android Studio and run the specific benchmarks you want to test.
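As a rough illustration of what such a benchmark file contains, here is a minimal Jetpack Macrobenchmark sketch. This is not the actual Battleship code: the class name, target package, and measured interaction are all placeholders, and a real benchmark would target the behavior it is designed to isolate.

```kotlin
import androidx.benchmark.macro.FrameTimingMetric
import androidx.benchmark.macro.StartupMode
import androidx.benchmark.macro.junit4.MacrobenchmarkRule
import androidx.test.ext.junit.runners.AndroidJUnit4
import org.junit.Rule
import org.junit.Test
import org.junit.runner.RunWith

// Illustrative sketch only -- names are placeholders, not the real
// Battleship benchmark.
@RunWith(AndroidJUnit4::class)
class ExampleBenchmark {
    @get:Rule
    val benchmarkRule = MacrobenchmarkRule()

    @Test
    fun coldStartupFrameTiming() = benchmarkRule.measureRepeated(
        packageName = "com.example.benchmark.target", // placeholder package
        metrics = listOf(FrameTimingMetric()),        // what we measure
        iterations = 5,
        startupMode = StartupMode.COLD,
    ) {
        // The measured block: launch the app and wait for the first frame.
        pressHome()
        startActivityAndWait()
    }
}
```

Keeping each test focused on one metric and one interaction, as above, is what makes it obvious which code to inspect when that test's numbers move.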
### Build variants
Benchmarks use a separate build variant (`benchmark`) which extends the `release` variant. The main DesignCompose library also has a `benchmark` variant, specifically so that we can use the Cargo plugin's ABI filtering to build only the ABI of the Rust libraries that we need, rather than all four every time.
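A `benchmark` build type that extends `release` is typically declared along these lines in a module's Gradle Kotlin DSL. This is a hedged sketch, not the project's actual configuration; check the benchmark module's `build.gradle.kts` for the real settings.

```kotlin
// build.gradle.kts (benchmark module) -- illustrative config sketch.
android {
    buildTypes {
        create("benchmark") {
            // Start from release's settings (minification, optimization).
            initWith(getByName("release"))
            // Let library dependencies without a "benchmark" variant
            // fall back to their release variant.
            matchingFallbacks += listOf("release")
        }
    }
    // Instrumented benchmarks run against the "benchmark" build type.
    testBuildType = "benchmark"
}
```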
### Tips
When writing a new benchmark, use Dry Run mode to exercise the benchmark with a single iteration and no measurement, rather than temporarily lowering the iteration count in the test itself.
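Dry Run mode can be enabled through the standard `androidx.benchmark` instrumentation argument. The Gradle module path below is a guess based on the layout described above; substitute your actual benchmark module.

```shell
# Run the benchmark variant with androidx.benchmark's dry-run mode, which
# executes each benchmark once without collecting measurements.
./gradlew :benchmarks:battleship:benchmark:connectedBenchmarkAndroidTest \
  -P android.testInstrumentationRunnerArguments.androidx.benchmark.dryRunMode.enable=true
```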
Whenever investigating traces, use the Perfetto web UI; it is much better than Android Studio's trace viewer.
Benchmarks output a trace of each run, which is helpful for comparing the same behavior across different code changes.