Regret Bounds for Transfer Learning in Bayesian Optimisation

Alistair Shilton, Sunil Gupta, Santu Rana, Svetha Venkatesh
Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, PMLR 54:307-315, 2017.

Abstract

This paper studies the regret bound of two transfer learning algorithms in Bayesian optimisation. The first algorithm models any difference between the source and target functions as a noise process. The second algorithm proposes a new way to model the difference between the source and target as a Gaussian process which is then used to adapt the source data. We show that in both cases the regret bounds are tighter than in the no transfer case. We also experimentally compare the performance of these algorithms relative to no transfer learning and demonstrate benefits of transfer learning.
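The abstract's second approach, modelling the source-target difference as a Gaussian process and using it to adapt the source data, can be illustrated with a minimal sketch. Everything below (the toy functions, kernel, lengthscale, and observation points) is a hypothetical assumption for illustration, not the paper's actual algorithm or experimental setup:

```python
import numpy as np

# Sketch of the "difference GP" idea: fit a GP to
# g(x) = f_target(x) - f_source(x), then shift source
# observations by the GP posterior mean of g.

def rbf(a, b, lengthscale=0.5):
    """Squared-exponential kernel between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

# Toy source and target functions (assumed, for illustration only).
f_source = lambda x: np.sin(3 * x)
f_target = lambda x: np.sin(3 * x) + 0.3 * x   # related but shifted

# Locations where both source and target have been evaluated.
x_obs = np.array([0.1, 0.4, 0.7, 1.0])
g_obs = f_target(x_obs) - f_source(x_obs)       # observed differences

# GP posterior mean of the difference at the source data locations.
x_src = np.linspace(0.0, 1.0, 5)
K = rbf(x_obs, x_obs) + 1e-6 * np.eye(len(x_obs))   # jitter for stability
k_star = rbf(x_src, x_obs)
g_mean = k_star @ np.linalg.solve(K, g_obs)

# Adapted source data: source values corrected toward the target.
adapted = f_source(x_src) + g_mean
```

The adapted values can then stand in for target observations when building the GP surrogate used by Bayesian optimisation, which is how transfer reduces uncertainty early in the search.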

Cite this Paper

BibTeX
@InProceedings{pmlr-v54-shilton17a,
  title     = {{Regret Bounds for Transfer Learning in Bayesian Optimisation}},
  author    = {Shilton, Alistair and Gupta, Sunil and Rana, Santu and Venkatesh, Svetha},
  booktitle = {Proceedings of the 20th International Conference on Artificial Intelligence and Statistics},
  pages     = {307--315},
  year      = {2017},
  editor    = {Singh, Aarti and Zhu, Jerry},
  volume    = {54},
  series    = {Proceedings of Machine Learning Research},
  month     = {20--22 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v54/shilton17a/shilton17a.pdf},
  url       = {https://proceedings.mlr.press/v54/shilton17a.html},
  abstract  = {This paper studies the regret bound of two transfer learning algorithms in Bayesian optimisation. The first algorithm models any difference between the source and target functions as a noise process. The second algorithm proposes a new way to model the difference between the source and target as a Gaussian process which is then used to adapt the source data. We show that in both cases the regret bounds are tighter than in the no transfer case. We also experimentally compare the performance of these algorithms relative to no transfer learning and demonstrate benefits of transfer learning.}
}
Endnote
%0 Conference Paper
%T Regret Bounds for Transfer Learning in Bayesian Optimisation
%A Alistair Shilton
%A Sunil Gupta
%A Santu Rana
%A Svetha Venkatesh
%B Proceedings of the 20th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2017
%E Aarti Singh
%E Jerry Zhu
%F pmlr-v54-shilton17a
%I PMLR
%P 307--315
%U https://proceedings.mlr.press/v54/shilton17a.html
%V 54
%X This paper studies the regret bound of two transfer learning algorithms in Bayesian optimisation. The first algorithm models any difference between the source and target functions as a noise process. The second algorithm proposes a new way to model the difference between the source and target as a Gaussian process which is then used to adapt the source data. We show that in both cases the regret bounds are tighter than in the no transfer case. We also experimentally compare the performance of these algorithms relative to no transfer learning and demonstrate benefits of transfer learning.
APA
Shilton, A., Gupta, S., Rana, S., &amp; Venkatesh, S. (2017). Regret Bounds for Transfer Learning in Bayesian Optimisation. Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 54:307-315. Available from https://proceedings.mlr.press/v54/shilton17a.html.