
Precise Regression Benchmarking with Random Effects: Improving Mono Benchmark Results

Publication at Faculty of Mathematics and Physics | 2006

Abstract

Benchmarking as a method of assessing software performance is known to suffer from random fluctuations that distort the observed results. In this paper, we focus on the fluctuations caused by compilation.

We show that the design of a benchmarking experiment must reflect the existence of these fluctuations if the performance observed during the experiment is to be representative of reality. We present a new statistical model of the benchmark experiment that captures the fluctuations in compilation, execution, and measurement. The model describes the observed performance and makes it possible to calculate the optimum dimensions of the experiment, those that yield the best precision within a given amount of time. Using a variety of benchmarks, we evaluate the model in the context of regression benchmarking.
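The paper's exact formulation is not reproduced here, but a minimal sketch of a nested random-effects model consistent with the abstract would read as follows (the symbols and cost terms below are assumptions made for illustration):

\[
  Y_{ijk} = \mu + C_i + E_{ij} + M_{ijk},
  \qquad
  C_i \sim \mathcal{N}(0, \sigma_C^2),\quad
  E_{ij} \sim \mathcal{N}(0, \sigma_E^2),\quad
  M_{ijk} \sim \mathcal{N}(0, \sigma_M^2),
\]

where \(Y_{ijk}\) is the \(k\)-th measurement taken in the \(j\)-th execution of the \(i\)-th compilation. With \(n_C\) compilations, \(n_E\) executions per compilation, and \(n_M\) measurements per execution, the overall mean \(\bar{Y}\) has

\[
  \operatorname{Var}(\bar{Y})
    = \frac{\sigma_C^2}{n_C}
    + \frac{\sigma_E^2}{n_C\, n_E}
    + \frac{\sigma_M^2}{n_C\, n_E\, n_M},
\]

so the optimum dimensions \((n_C, n_E, n_M)\) are those that minimize this variance subject to a time budget such as \(n_C\,(t_C + n_E\,(t_E + n_M\, t_M)) \le T\), with \(t_C, t_E, t_M\) the per-compilation, per-execution, and per-measurement costs.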

We show that the model significantly decreases the number of erroneously detected performance changes in regression benchmarking.
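To see why ignoring compilation-level fluctuations inflates the number of falsely detected changes, consider the following small simulation. It is a hedged illustration only: the nested Gaussian model, the variance values, the experiment dimensions, and the use of a t-test are all assumptions made for this sketch, not the paper's actual procedure.

import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)

def experiment(n_c=5, n_e=4, n_m=10, s_c=1.0, s_e=0.5, s_m=0.2):
    # One benchmark experiment: n_c compilations, n_e executions per
    # compilation, n_m measurements per execution (nested Gaussian noise;
    # all parameter values are illustrative assumptions).
    c = rng.normal(0.0, s_c, size=n_c)                # compilation effects
    e = rng.normal(0.0, s_e, size=(n_c, n_e))         # execution effects
    m = rng.normal(0.0, s_m, size=(n_c, n_e, n_m))    # measurement noise
    return c[:, None, None] + e[:, :, None] + m       # observed times

trials = 2000
naive_fp = hierarchical_fp = 0
for _ in range(trials):
    old, new = experiment(), experiment()             # identical true performance
    # Naive test: pool every measurement, ignoring the nesting.
    if stats.ttest_ind(old.ravel(), new.ravel()).pvalue < 0.05:
        naive_fp += 1
    # Random-effects-aware test: one mean per compilation, the unit
    # that is actually independent across the experiment.
    if stats.ttest_ind(old.mean(axis=(1, 2)), new.mean(axis=(1, 2))).pvalue < 0.05:
        hierarchical_fp += 1

print("false positives, pooled measurements:   %.3f" % (naive_fp / trials))
print("false positives, per-compilation means: %.3f" % (hierarchical_fp / trials))

Under the compilation-level variance assumed here, the pooled test flags a "change" far more often than the nominal 5% even though both versions are identical, while the test on per-compilation means stays near the nominal rate; respecting the random effects is what removes the spurious detections.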