Regret Bounds for Transfer Learning in Bayesian Optimisation
Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, PMLR 54:307-315, 2017.
This paper studies the regret bounds of two transfer learning algorithms for Bayesian optimisation. The first algorithm models any difference between the source and target functions as a noise process. The second introduces a new way of modelling the difference between the source and target as a Gaussian process, which is then used to adapt the source data. We show that in both cases the regret bounds are tighter than in the no-transfer case. We also experimentally compare these algorithms against Bayesian optimisation without transfer and demonstrate the benefits of transfer learning.
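The second idea in the abstract, modelling the source-target difference as a Gaussian process and using it to adapt source data, can be illustrated with a minimal sketch. This is not the paper's algorithm, only an assumed toy setup: hypothetical `f_source` and `f_target` functions, a hand-rolled RBF-kernel GP posterior mean, and a small set of shared evaluation points at which the difference is observed.

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=0.3, variance=1.0):
    # Squared-exponential kernel between 1-D input arrays a and b.
    d2 = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior_mean(x_train, y_train, x_test, noise=1e-4):
    # Posterior mean of a zero-mean GP conditioned on (x_train, y_train).
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_star = rbf_kernel(x_test, x_train)
    return K_star @ np.linalg.solve(K, y_train)

# Hypothetical source and target objectives that differ by a smooth term.
f_source = lambda x: np.sin(3 * x)
f_target = lambda x: np.sin(3 * x) + 0.5 * x

# Points where both functions have been evaluated give observations of the
# difference g(x) = f_target(x) - f_source(x), which we model with a GP.
x_shared = np.linspace(0, 1, 5)
g_obs = f_target(x_shared) - f_source(x_shared)

# Adapt the (more plentiful) source observations toward the target by
# adding the GP's predicted difference; the adapted data could then seed
# the surrogate model used for Bayesian optimisation on the target.
x_source = np.linspace(0, 1, 20)
y_adapted = f_source(x_source) + gp_posterior_mean(x_shared, g_obs, x_source)
```

With a smooth difference and a few shared evaluations, the adapted source values track the target function closely, which is the intuition behind transferring source data rather than discarding it.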