Multi-fidelity Bayesian Optimisation with Continuous Approximations

Kirthevasan Kandasamy, Gautam Dasarathy, Jeff Schneider, Barnabás Póczos
Proceedings of the 34th International Conference on Machine Learning, PMLR 70:1799-1808, 2017.

Abstract

Bandit methods for black-box optimisation, such as Bayesian optimisation, are used in a variety of applications including hyper-parameter tuning and experiment design. Recently, multi-fidelity methods have garnered considerable attention since function evaluations have become increasingly expensive in such applications. Multi-fidelity methods use cheap approximations to the function of interest to speed up the overall optimisation process. However, most multi-fidelity methods assume only a finite number of approximations. On the other hand, in many practical applications, a continuous spectrum of approximations might be available. For instance, when tuning an expensive neural network, one might choose to approximate the cross validation performance using less data $N$ and/or fewer training iterations $T$. Here, the approximations are best viewed as arising out of a continuous two dimensional space $(N,T)$. In this work, we develop a Bayesian optimisation method, BOCA, for this setting. We characterise its theoretical properties and show that it achieves better regret than strategies which ignore the approximations. BOCA outperforms several other baselines in synthetic and real experiments.
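The continuous-fidelity setting described above can be illustrated with a toy sketch. This is not BOCA itself; the functions `f`, `g`, and `cost` below are hypothetical stand-ins for an expensive objective, its continuous family of approximations, and their evaluation cost, and the two-stage strategy is only a naive baseline showing why cheap approximations help.

```python
import numpy as np

# Toy illustration of continuous-fidelity optimisation (assumed setup, not BOCA):
# f(x) is the expensive objective; g(x, z) approximates it at fidelity
# z in (0, 1], with bias shrinking and cost growing as z -> 1.

def f(x):
    return -(x - 0.3) ** 2          # true objective, maximised at x = 0.3

def g(x, z):
    bias = (1.0 - z) * np.sin(8.0 * x)   # approximation error vanishes at z = 1
    return f(x) + 0.5 * bias

def cost(z):
    return z ** 2                   # low-fidelity evaluations are cheap

# Naive two-stage strategy: screen many candidates cheaply at low fidelity,
# then evaluate only a shortlist at full fidelity (z = 1).
xs = np.linspace(0.0, 1.0, 101)
cheap_scores = np.array([g(x, 0.2) for x in xs])
spent = len(xs) * cost(0.2)

top = xs[np.argsort(cheap_scores)[-5:]]  # shortlist from cheap screening
best_x = max(top, key=f)                 # full-fidelity evaluations
spent += len(top) * cost(1.0)

print(best_x, spent)
```

Even this crude strategy finds a near-optimal point at a fraction of the cost of evaluating every candidate at full fidelity; BOCA's contribution is to choose the query fidelity adaptively within the continuous space rather than fixing two stages in advance.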

Cite this Paper
BibTeX
@InProceedings{pmlr-v70-kandasamy17a,
  title     = {Multi-fidelity {B}ayesian Optimisation with Continuous Approximations},
  author    = {Kirthevasan Kandasamy and Gautam Dasarathy and Jeff Schneider and Barnab{\'a}s P{\'o}czos},
  booktitle = {Proceedings of the 34th International Conference on Machine Learning},
  pages     = {1799--1808},
  year      = {2017},
  editor    = {Precup, Doina and Teh, Yee Whye},
  volume    = {70},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--11 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v70/kandasamy17a/kandasamy17a.pdf},
  url       = {https://proceedings.mlr.press/v70/kandasamy17a.html},
  abstract  = {Bandit methods for black-box optimisation, such as Bayesian optimisation, are used in a variety of applications including hyper-parameter tuning and experiment design. Recently, multi-fidelity methods have garnered considerable attention since function evaluations have become increasingly expensive in such applications. Multi-fidelity methods use cheap approximations to the function of interest to speed up the overall optimisation process. However, most multi-fidelity methods assume only a finite number of approximations. On the other hand, in many practical applications, a continuous spectrum of approximations might be available. For instance, when tuning an expensive neural network, one might choose to approximate the cross validation performance using less data $N$ and/or fewer training iterations $T$. Here, the approximations are best viewed as arising out of a continuous two dimensional space $(N,T)$. In this work, we develop a Bayesian optimisation method, BOCA, for this setting. We characterise its theoretical properties and show that it achieves better regret than strategies which ignore the approximations. BOCA outperforms several other baselines in synthetic and real experiments.}
}
Endnote
%0 Conference Paper
%T Multi-fidelity Bayesian Optimisation with Continuous Approximations
%A Kirthevasan Kandasamy
%A Gautam Dasarathy
%A Jeff Schneider
%A Barnabás Póczos
%B Proceedings of the 34th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2017
%E Doina Precup
%E Yee Whye Teh
%F pmlr-v70-kandasamy17a
%I PMLR
%P 1799--1808
%U https://proceedings.mlr.press/v70/kandasamy17a.html
%V 70
%X Bandit methods for black-box optimisation, such as Bayesian optimisation, are used in a variety of applications including hyper-parameter tuning and experiment design. Recently, multi-fidelity methods have garnered considerable attention since function evaluations have become increasingly expensive in such applications. Multi-fidelity methods use cheap approximations to the function of interest to speed up the overall optimisation process. However, most multi-fidelity methods assume only a finite number of approximations. On the other hand, in many practical applications, a continuous spectrum of approximations might be available. For instance, when tuning an expensive neural network, one might choose to approximate the cross validation performance using less data $N$ and/or fewer training iterations $T$. Here, the approximations are best viewed as arising out of a continuous two dimensional space $(N,T)$. In this work, we develop a Bayesian optimisation method, BOCA, for this setting. We characterise its theoretical properties and show that it achieves better regret than strategies which ignore the approximations. BOCA outperforms several other baselines in synthetic and real experiments.
APA
Kandasamy, K., Dasarathy, G., Schneider, J. & Póczos, B. (2017). Multi-fidelity Bayesian Optimisation with Continuous Approximations. Proceedings of the 34th International Conference on Machine Learning, in Proceedings of Machine Learning Research 70:1799-1808. Available from https://proceedings.mlr.press/v70/kandasamy17a.html.