Instance selection for configuration performance comparison
ECMLPKDD Workshop on Meta-Knowledge Transfer, PMLR 191:11-23, 2022.
Abstract
Comparing the performance of two configurations of a given algorithm plays a critical role in algorithm configuration and performance optimisation, be it automated or manual, and requires substantial computational resources. Time is often wasted not only on less promising configurations but also on instances that require a long running time regardless of the configuration. Prior work has shown that by running an algorithm on carefully selected instances, the time required to accurately decide which of two given algorithms is better can be significantly reduced. In this work, we explore ways to apply a similar selection process to comparing two configurations of the same algorithm. We adapt four selection methods from the literature to work with the performance model used in model-based configurators and evaluate them on six benchmarks. Our experimental evaluation shows that, depending on the problem instances and their running-time distribution, a decision can be reached 5 to 3000 times faster than with random sampling, the method used in current state-of-the-art configurators.
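To make the core idea concrete, the following is a minimal illustrative sketch (not the paper's actual method) of comparing two configurations by running the most discriminative instances first and stopping as soon as a paired statistical test reaches a decision, instead of sampling instances at random. The names `run_config` and `discriminativeness`, and the use of a Wilcoxon signed-rank test with a fixed significance level, are assumptions made for this example only.

```python
# Illustrative sketch only: adaptive instance selection for deciding which of
# two configurations performs better, stopping early once the evidence is clear.
import numpy as np
from scipy.stats import wilcoxon


def compare_configs(run_config, config_a, config_b, instances,
                    discriminativeness, alpha=0.05, min_runs=5):
    """Return 'A', 'B', or None (undecided), using as few instance runs as possible.

    run_config(config, instance) -> running time (assumed callable)
    discriminativeness(instance) -> score; higher means the instance is expected
        to separate the two configurations quickly (assumed callable, e.g. based
        on a performance model's predictions)
    """
    # Run the most informative instances first rather than a random sample.
    order = sorted(instances, key=discriminativeness, reverse=True)

    times_a, times_b = [], []
    for inst in order:
        times_a.append(run_config(config_a, inst))
        times_b.append(run_config(config_b, inst))

        diffs = np.array(times_a) - np.array(times_b)
        # Wait for a minimum sample and at least one non-zero difference
        # before testing, then stop early on a significant result.
        if len(diffs) >= min_runs and np.any(diffs != 0):
            _, p = wilcoxon(times_a, times_b)
            if p < alpha:
                return 'A' if np.mean(times_a) < np.mean(times_b) else 'B'

    return None  # no significant difference on the available instances
```

In this sketch, the ordering function stands in for the instance-selection methods discussed in the paper; with random sampling, `order` would simply be a shuffled list of instances, which typically requires many more runs before the test can decide.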