Instance selection for configuration performance comparison

Marie Anastacio, Théo Matricon, Holger Hoos
ECMLPKDD Workshop on Meta-Knowledge Transfer, PMLR 191:11-23, 2022.

Abstract

Comparing the performance of two configurations of a given algorithm plays a critical role in algorithm configuration and performance optimisation, be it automated or manual, and requires substantial computational resources. Time is often wasted not only on less promising configurations, but also on instances that require a long running time regardless of the configuration. Prior work has shown that, by running an algorithm on carefully selected instances, the time required to accurately decide the better of two given algorithms can be significantly reduced. In this work, we explore ways to apply a similar selection process to compare two configurations of the same algorithm. We adapted four selection methods from the literature to work with the performance model used in model-based configurators and evaluated them on six benchmarks. Our experimental evaluation shows that, depending on the problem instances and their running time distribution, a decision can be reached 5 to 3000 times faster than with random sampling, the method used in current state-of-the-art configurators.
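The four selection methods evaluated in the paper are described in the full text; purely as an illustration of the general idea, the sketch below shows a racing-style comparison of two configurations in Python. Everything here is hypothetical rather than taken from the paper: the run callback, the optional score function standing in for a model-derived instance ranking, and the use of a Wilcoxon signed-rank test as the stopping rule.

    import random
    from scipy import stats

    def compare_configs(run, instances, config_a, config_b,
                        score=None, alpha=0.05, min_runs=5):
        """Decide which configuration is faster, stopping as early as possible.

        run(config, instance) returns the running time of config on instance
        (hypothetical user-supplied callback). score(instance) estimates how
        discriminative an instance is, e.g. derived from a performance model;
        if None, instances are drawn at random, mirroring the baseline used
        by current state-of-the-art configurators.
        """
        pool = list(instances)
        # Evaluate the most informative instances first when a score is given.
        order = (sorted(pool, key=score, reverse=True) if score
                 else random.sample(pool, len(pool)))
        diffs = []
        for inst in order:
            diffs.append(run(config_a, inst) - run(config_b, inst))
            # The signed-rank test needs at least one non-zero paired difference.
            if len(diffs) >= min_runs and any(d != 0 for d in diffs):
                if stats.wilcoxon(diffs).pvalue < alpha:
                    # Early decision: negative differences favour config_a.
                    return ("A" if sum(diffs) < 0 else "B"), len(diffs)
        return None, len(diffs)  # no significant difference within the pool

The paper's contribution lies in how the instance ranking is obtained from the performance model of a model-based configurator; in this sketch, that corresponds only to the choice of score.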

Cite this Paper


BibTeX
@InProceedings{pmlr-v191-anastacio22a,
  title     = {Instance selection for configuration performance comparison},
  author    = {Anastacio, Marie and Matricon, Th\'eo and Hoos, Holger},
  booktitle = {ECMLPKDD Workshop on Meta-Knowledge Transfer},
  pages     = {11--23},
  year      = {2022},
  editor    = {Brazdil, Pavel and van Rijn, Jan N. and Gouk, Henry and Mohr, Felix},
  volume    = {191},
  series    = {Proceedings of Machine Learning Research},
  month     = {23 Sep},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v191/anastacio22a/anastacio22a.pdf},
  url       = {https://proceedings.mlr.press/v191/anastacio22a.html},
  abstract  = {Comparing the performance of two configurations of a given algorithm plays a critical role in algorithm configuration and performance optimisation, be it automated or manual, and requires substantial computational resources. Time is often wasted not only on less promising configurations, but also on instances that require a long running time regardless of the configuration. Prior work has shown that, by running an algorithm on carefully selected instances, the time required to accurately decide the better of two given algorithms can be significantly reduced. In this work, we explore ways to apply a similar selection process to compare two configurations of the same algorithm. We adapted four selection methods from the literature to work with the performance model used in model-based configurators and evaluated them on six benchmarks. Our experimental evaluation shows that, depending on the problem instances and their running time distribution, a decision can be reached 5 to 3000 times faster than with random sampling, the method used in current state-of-the-art configurators.}
}
Endnote
%0 Conference Paper
%T Instance selection for configuration performance comparison
%A Marie Anastacio
%A Théo Matricon
%A Holger Hoos
%B ECMLPKDD Workshop on Meta-Knowledge Transfer
%C Proceedings of Machine Learning Research
%D 2022
%E Pavel Brazdil
%E Jan N. van Rijn
%E Henry Gouk
%E Felix Mohr
%F pmlr-v191-anastacio22a
%I PMLR
%P 11--23
%U https://proceedings.mlr.press/v191/anastacio22a.html
%V 191
%X Comparing the performance of two configurations of a given algorithm plays a critical role in algorithm configuration and performance optimisation, be it automated or manual, and requires substantial computational resources. Time is often wasted not only on less promising configurations, but also on instances that require a long running time regardless of the configuration. Prior work has shown that, by running an algorithm on carefully selected instances, the time required to accurately decide the better of two given algorithms can be significantly reduced. In this work, we explore ways to apply a similar selection process to compare two configurations of the same algorithm. We adapted four selection methods from the literature to work with the performance model used in model-based configurators and evaluated them on six benchmarks. Our experimental evaluation shows that, depending on the problem instances and their running time distribution, a decision can be reached 5 to 3000 times faster than with random sampling, the method used in current state-of-the-art configurators.
APA
Anastacio, M., Matricon, T., & Hoos, H. (2022). Instance selection for configuration performance comparison. ECMLPKDD Workshop on Meta-Knowledge Transfer, in Proceedings of Machine Learning Research 191:11-23. Available from https://proceedings.mlr.press/v191/anastacio22a.html.
