Improving Hyperparameter Learning under Approximate Inference in Gaussian Process Models

Rui Li, S. T. John, Arno Solin
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:19595-19615, 2023.

Abstract

Approximate inference in Gaussian process (GP) models with non-conjugate likelihoods gets entangled with the learning of the model hyperparameters. We improve hyperparameter learning in GP models and focus on the interplay between variational inference (VI) and the learning target. While VI’s lower bound to the marginal likelihood is a suitable objective for inferring the approximate posterior, we show that a direct approximation of the marginal likelihood as in Expectation Propagation (EP) is a better learning objective for hyperparameter optimization. We design a hybrid training procedure to bring the best of both worlds: it leverages conjugate-computation VI for inference and uses an EP-like marginal likelihood approximation for hyperparameter learning. We compare VI, EP, Laplace approximation, and our proposed training procedure and empirically demonstrate the effectiveness of our proposal across a wide range of data sets.
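
For readers who want a concrete picture of the training loop the abstract describes, here is a minimal sketch in JAX. It is not the authors' code and not their exact objective: a toy GP binary classifier with a probit likelihood, where the approximate posterior is fitted by plain ELBO gradient ascent over a whitened Gaussian (a simple stand-in for conjugate-computation VI), and the kernel hyperparameters are then updated by gradient steps on the classic EP marginal-likelihood approximation (Rasmussen & Williams, 2006, Eq. 3.65) evaluated at the Gaussian sites implied by the fitted posterior, held fixed during the hyperparameter update. All names (rbf, elbo, ep_energy), the toy data, and the step sizes are our own choices for illustration.

import jax
import jax.numpy as jnp
import numpy as np
from jax.scipy.stats import norm

jax.config.update("jax_enable_x64", True)

def rbf(X, theta):
    # RBF kernel; theta = (log variance, log lengthscale); jitter keeps K invertible
    var, ell = jnp.exp(theta[0]), jnp.exp(theta[1])
    return var * jnp.exp(-0.5 * (X[:, None] - X[None, :]) ** 2 / ell**2) + 1e-6 * jnp.eye(len(X))

GH_X, GH_W = np.polynomial.hermite.hermgauss(20)  # Gauss-Hermite nodes and weights

def elbo(m_u, L_raw, theta, X, y):
    # ELBO for a whitened Gaussian posterior f = Lk u, q(u) = N(m_u, Lu Lu^T), probit likelihood
    n = len(X)
    Lk = jnp.linalg.cholesky(rbf(X, theta))
    Lu = jnp.tril(L_raw)
    mean_f, A = Lk @ m_u, Lk @ Lu
    var_f = jnp.sum(A**2, axis=1)
    # expected log-likelihood E_q[log Phi(y_i f_i)] via Gauss-Hermite quadrature
    fs = mean_f[:, None] + jnp.sqrt(2.0 * var_f)[:, None] * GH_X[None, :]
    exp_ll = jnp.sum((GH_W / jnp.sqrt(jnp.pi))[None, :] * norm.logcdf(y[:, None] * fs))
    # KL( N(m_u, Lu Lu^T) || N(0, I) )
    kl = 0.5 * (jnp.sum(Lu**2) + m_u @ m_u - n) - jnp.sum(jnp.log(jnp.abs(jnp.diag(Lu))))
    return exp_ll - kl

def ep_energy(theta, X, y, nu_t, tau_t):
    # EP marginal-likelihood approximation (GPML Eq. 3.65) for *fixed* Gaussian sites (nu_t, tau_t)
    K = rbf(X, theta)
    mu_t, s2_t = nu_t / tau_t, 1.0 / tau_t
    Lb = jnp.linalg.cholesky(K + jnp.diag(s2_t))
    term12 = -jnp.sum(jnp.log(jnp.diag(Lb))) - 0.5 * mu_t @ jax.scipy.linalg.cho_solve((Lb, True), mu_t)
    # posterior implied by prior x sites, and the corresponding cavity distributions
    S = jnp.linalg.inv(jnp.linalg.inv(K) + jnp.diag(tau_t))  # O(n^3), fine for a toy problem
    m = S @ nu_t
    tau_cav = 1.0 / jnp.diag(S) - tau_t
    mu_cav, s2_cav = (m / jnp.diag(S) - nu_t) / tau_cav, 1.0 / tau_cav
    term3 = jnp.sum(norm.logcdf(y * mu_cav / jnp.sqrt(1.0 + s2_cav)))
    term45 = jnp.sum(0.5 * jnp.log(s2_cav + s2_t) + (mu_cav - mu_t) ** 2 / (2.0 * (s2_cav + s2_t)))
    return term12 + term3 + term45

# toy 1D classification data with labels in {-1, +1}
key = jax.random.PRNGKey(0)
X = jnp.linspace(-3.0, 3.0, 30)
y = jnp.sign(jnp.sin(2.0 * X) + 0.3 * jax.random.normal(key, X.shape))

theta = jnp.zeros(2)  # (log variance, log lengthscale)
m_u, L_raw = jnp.zeros(len(X)), 0.5 * jnp.eye(len(X))
elbo_grad = jax.jit(jax.grad(elbo, argnums=(0, 1)))
energy_grad = jax.jit(jax.grad(ep_energy, argnums=0))

for outer in range(15):
    # (1) inference at fixed hyperparameters (plain ELBO ascent as a stand-in for CVI updates)
    for _ in range(200):
        gm, gL = elbo_grad(m_u, L_raw, theta, X, y)
        m_u, L_raw = m_u + 0.05 * gm, L_raw + 0.05 * gL
    # (2) read off the Gaussian sites implied by q(f) = N(m, S): S^-1 ≈ K^-1 + diag(tau), nu = S^-1 m
    K, Lk, Lu = rbf(X, theta), jnp.linalg.cholesky(rbf(X, theta)), jnp.tril(L_raw)
    A = Lk @ Lu
    Sinv, Kinv = jnp.linalg.inv(A @ A.T), jnp.linalg.inv(K)
    tau_t = jnp.clip(jnp.diag(Sinv - Kinv), 1e-6)  # clip keeps the sites proper
    nu_t = Sinv @ (Lk @ m_u)
    # (3) hyperparameter steps on the EP-style energy, with the sites held fixed
    for _ in range(10):
        theta = theta + 2e-3 * energy_grad(theta, X, y, nu_t, tau_t)

print("learned (log variance, log lengthscale):", theta)

The structural point of the sketch is the split the paper advocates: the ELBO is used only to fit q(f), while a direct, EP-style marginal-likelihood approximation built from the same Gaussian sites drives the hyperparameter updates.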

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-li23m,
  title     = {Improving Hyperparameter Learning under Approximate Inference in {G}aussian Process Models},
  author    = {Li, Rui and John, S. T. and Solin, Arno},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {19595--19615},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/li23m/li23m.pdf},
  url       = {https://proceedings.mlr.press/v202/li23m.html},
  abstract  = {Approximate inference in Gaussian process (GP) models with non-conjugate likelihoods gets entangled with the learning of the model hyperparameters. We improve hyperparameter learning in GP models and focus on the interplay between variational inference (VI) and the learning target. While VI’s lower bound to the marginal likelihood is a suitable objective for inferring the approximate posterior, we show that a direct approximation of the marginal likelihood as in Expectation Propagation (EP) is a better learning objective for hyperparameter optimization. We design a hybrid training procedure to bring the best of both worlds: it leverages conjugate-computation VI for inference and uses an EP-like marginal likelihood approximation for hyperparameter learning. We compare VI, EP, Laplace approximation, and our proposed training procedure and empirically demonstrate the effectiveness of our proposal across a wide range of data sets.}
}
Endnote
%0 Conference Paper
%T Improving Hyperparameter Learning under Approximate Inference in Gaussian Process Models
%A Rui Li
%A S. T. John
%A Arno Solin
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-li23m
%I PMLR
%P 19595--19615
%U https://proceedings.mlr.press/v202/li23m.html
%V 202
%X Approximate inference in Gaussian process (GP) models with non-conjugate likelihoods gets entangled with the learning of the model hyperparameters. We improve hyperparameter learning in GP models and focus on the interplay between variational inference (VI) and the learning target. While VI’s lower bound to the marginal likelihood is a suitable objective for inferring the approximate posterior, we show that a direct approximation of the marginal likelihood as in Expectation Propagation (EP) is a better learning objective for hyperparameter optimization. We design a hybrid training procedure to bring the best of both worlds: it leverages conjugate-computation VI for inference and uses an EP-like marginal likelihood approximation for hyperparameter learning. We compare VI, EP, Laplace approximation, and our proposed training procedure and empirically demonstrate the effectiveness of our proposal across a wide range of data sets.
APA
Li, R., John, S.T. & Solin, A. (2023). Improving Hyperparameter Learning under Approximate Inference in Gaussian Process Models. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:19595-19615. Available from https://proceedings.mlr.press/v202/li23m.html.
