Accelerated Bayesian Optimisation through Weight-Prior Tuning

Alistair Shilton, Sunil Gupta, Santu Rana, Pratibha Vellanki, Cheng Li, Svetha Venkatesh, Laurence Park, Alessandra Sutti, David Rubin, Thomas Dorin, Alireza Vahid, Murray Height, Teo Slezak
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:635-645, 2020.

Abstract

Bayesian optimization (BO) is a widely-used method for optimizing expensive (to evaluate) problems. At the core of most BO methods is the modeling of the objective function using a Gaussian Process (GP) whose covariance is selected from a set of standard covariance functions. From a weight-space view, this models the objective as a linear function in a feature space implied by the given covariance $K$, with an arbitrary Gaussian weight prior ${\bf w} \sim \mathcal{N}({\bf 0},{\bf I})$. In many practical applications there is data available that has a similar (covariance) structure to the objective, but which, having different form, cannot be used directly in standard transfer learning. In this paper we show how such auxiliary data may be used to construct a GP covariance corresponding to a more appropriate weight prior for the objective function. Building on this, we show that we may accelerate BO by modeling the objective function using this (learned) weight prior, which we demonstrate on both test functions and a practical application to short-polymer fibre manufacture.
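The weight-space view mentioned in the abstract can be sketched numerically: a GP with covariance $K(x,x') = \phi(x)^\top \Sigma \, \phi(x')$ corresponds to a linear model $f(x) = {\bf w}^\top \phi(x)$ with weight prior ${\bf w} \sim \mathcal{N}({\bf 0}, \Sigma)$, where standard covariances implicitly fix $\Sigma = {\bf I}$. The sketch below is illustrative only (the feature map, data, and the way $\Sigma$ is estimated from auxiliary features are placeholder assumptions, not the paper's actual construction):

```python
import numpy as np

def phi(x, centers, lengthscale=0.5):
    """Illustrative radial-basis feature map (an assumption for this sketch)."""
    return np.exp(-0.5 * ((x[:, None] - centers[None, :]) / lengthscale) ** 2)

rng = np.random.default_rng(0)
centers = np.linspace(-2.0, 2.0, 10)
X = rng.uniform(-2.0, 2.0, size=20)
F = phi(X, centers)                      # 20 x 10 feature matrix

# Standard weight prior w ~ N(0, I) gives the Gram matrix K = F F^T.
K_standard = F @ F.T

# A tuned prior w ~ N(0, Sigma), with Sigma estimated from auxiliary
# feature vectors (random here, standing in for auxiliary data with a
# covariance structure similar to the objective's).
F_aux = phi(rng.uniform(-2.0, 2.0, size=200), centers)
Sigma = F_aux.T @ F_aux / 200 + 1e-6 * np.eye(10)
K_tuned = F @ Sigma @ F.T

# Both are valid GP covariance matrices: symmetric positive semi-definite.
assert np.allclose(K_tuned, K_tuned.T)
assert np.min(np.linalg.eigvalsh(K_tuned)) > -1e-9
```

Either Gram matrix can then drive the usual GP posterior inside a BO loop; the paper's contribution is in how $\Sigma$ is learned from auxiliary data rather than fixed to the identity.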

Cite this Paper


BibTeX
@InProceedings{pmlr-v108-shilton20a,
  title     = {Accelerated Bayesian Optimisation through Weight-Prior Tuning},
  author    = {Shilton, Alistair and Gupta, Sunil and Rana, Santu and Vellanki, Pratibha and Li, Cheng and Venkatesh, Svetha and Park, Laurence and Sutti, Alessandra and Rubin, David and Dorin, Thomas and Vahid, Alireza and Height, Murray and Slezak, Teo},
  booktitle = {Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics},
  pages     = {635--645},
  year      = {2020},
  editor    = {Chiappa, Silvia and Calandra, Roberto},
  volume    = {108},
  series    = {Proceedings of Machine Learning Research},
  month     = {26--28 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v108/shilton20a/shilton20a.pdf},
  url       = {https://proceedings.mlr.press/v108/shilton20a.html},
  abstract  = {Bayesian optimization (BO) is a widely-used method for optimizing expensive (to evaluate) problems. At the core of most BO methods is the modeling of the objective function using a Gaussian Process (GP) whose covariance is selected from a set of standard covariance functions. From a weight-space view, this models the objective as a linear function in a feature space implied by the given covariance $K$, with an arbitrary Gaussian weight prior ${\bf w} \sim \mathcal{N}({\bf 0},{\bf I})$. In many practical applications there is data available that has a similar (covariance) structure to the objective, but which, having different form, cannot be used directly in standard transfer learning. In this paper we show how such auxiliary data may be used to construct a GP covariance corresponding to a more appropriate weight prior for the objective function. Building on this, we show that we may accelerate BO by modeling the objective function using this (learned) weight prior, which we demonstrate on both test functions and a practical application to short-polymer fibre manufacture.}
}
Endnote
%0 Conference Paper
%T Accelerated Bayesian Optimisation through Weight-Prior Tuning
%A Alistair Shilton
%A Sunil Gupta
%A Santu Rana
%A Pratibha Vellanki
%A Cheng Li
%A Svetha Venkatesh
%A Laurence Park
%A Alessandra Sutti
%A David Rubin
%A Thomas Dorin
%A Alireza Vahid
%A Murray Height
%A Teo Slezak
%B Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2020
%E Silvia Chiappa
%E Roberto Calandra
%F pmlr-v108-shilton20a
%I PMLR
%P 635--645
%U https://proceedings.mlr.press/v108/shilton20a.html
%V 108
%X Bayesian optimization (BO) is a widely-used method for optimizing expensive (to evaluate) problems. At the core of most BO methods is the modeling of the objective function using a Gaussian Process (GP) whose covariance is selected from a set of standard covariance functions. From a weight-space view, this models the objective as a linear function in a feature space implied by the given covariance $K$, with an arbitrary Gaussian weight prior ${\bf w} \sim \mathcal{N}({\bf 0},{\bf I})$. In many practical applications there is data available that has a similar (covariance) structure to the objective, but which, having different form, cannot be used directly in standard transfer learning. In this paper we show how such auxiliary data may be used to construct a GP covariance corresponding to a more appropriate weight prior for the objective function. Building on this, we show that we may accelerate BO by modeling the objective function using this (learned) weight prior, which we demonstrate on both test functions and a practical application to short-polymer fibre manufacture.
APA
Shilton, A., Gupta, S., Rana, S., Vellanki, P., Li, C., Venkatesh, S., Park, L., Sutti, A., Rubin, D., Dorin, T., Vahid, A., Height, M. & Slezak, T. (2020). Accelerated Bayesian Optimisation through Weight-Prior Tuning. Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 108:635-645. Available from https://proceedings.mlr.press/v108/shilton20a.html.

Related Material