Prediction Performance After Learning in Gaussian Process Regression

Johan Wågberg, Dave Zachariah, Thomas Schön, Petre Stoica
Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, PMLR 54:1264-1272, 2017.

Abstract

This paper considers the quantification of the prediction performance in Gaussian process regression. The standard approach is to base the prediction error bars on the theoretical predictive variance, which is a lower bound on the mean square-error (MSE). This approach, however, does not take into account that the statistical model is learned from the data. We show that this omission leads to a systematic underestimation of the prediction errors. Starting from a generalization of the Cramér-Rao bound, we derive a more accurate MSE bound which provides a measure of uncertainty for prediction of Gaussian processes. The improved bound is easily computed and we illustrate it using synthetic and real data examples.
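The abstract's central claim can be illustrated with a small Monte Carlo experiment. The sketch below is not the paper's bound; it is a toy numpy-only simulation (all function names are ours) of the phenomenon described: a GP with a squared-exponential kernel whose lengthscale is learned from each data set by maximizing the marginal likelihood over a grid, so the nominal (plug-in) predictive variance does not account for hyperparameter learning and tends to understate the realized squared prediction error.

```python
import numpy as np

rng = np.random.default_rng(0)

def kernel(x1, x2, ell, sf2=1.0):
    # Squared-exponential covariance between two 1-D input arrays.
    d = x1[:, None] - x2[None, :]
    return sf2 * np.exp(-0.5 * (d / ell) ** 2)

def gp_predict(x, y, xs, ell, sn2):
    # Standard GP posterior mean and (nominal) predictive variance,
    # treating the plugged-in lengthscale as if it were known.
    K = kernel(x, x, ell) + sn2 * np.eye(len(x))
    ks = kernel(x, xs, ell)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = ks.T @ alpha
    v = np.linalg.solve(L, ks)
    var = kernel(xs, xs, ell).diagonal() - np.sum(v * v, axis=0) + sn2
    return mu, var

def neg_log_marglik(x, y, ell, sn2):
    # Negative log marginal likelihood (constant term dropped).
    K = kernel(x, x, ell) + sn2 * np.eye(len(x))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ alpha + np.log(np.diag(L)).sum()

ell_true, sn2 = 0.5, 0.1
xs = np.array([0.5])                 # single test input
grid = np.linspace(0.1, 2.0, 40)     # lengthscale candidates

sq_err, nominal = [], []
for _ in range(200):
    x = rng.uniform(-2.0, 2.0, 10)
    # Draw training and test outputs jointly from the true GP.
    xa = np.concatenate([x, xs])
    Ka = kernel(xa, xa, ell_true) + 1e-6 * np.eye(len(xa))  # jitter
    fa = np.linalg.cholesky(Ka) @ rng.standard_normal(len(xa))
    y = fa[:10] + np.sqrt(sn2) * rng.standard_normal(10)
    ys = fa[10] + np.sqrt(sn2) * rng.standard_normal()
    # Learn the lengthscale from this data set (empirical Bayes).
    ell_hat = grid[np.argmin([neg_log_marglik(x, y, e, sn2) for e in grid])]
    mu, var = gp_predict(x, y, xs, ell_hat, sn2)
    sq_err.append((ys - mu[0]) ** 2)
    nominal.append(var[0])

print("empirical MSE:", np.mean(sq_err))
print("mean nominal predictive variance:", np.mean(nominal))
```

On typical runs the averaged squared error exceeds the averaged nominal variance, consistent with the systematic underestimation the paper analyzes; the paper's contribution is a computable MSE bound that corrects for this learning step.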

Cite this Paper


BibTeX
@InProceedings{pmlr-v54-wagberg17a,
  title     = {{Prediction Performance After Learning in Gaussian Process Regression}},
  author    = {Wagberg, Johan and Zachariah, Dave and Schon, Thomas and Stoica, Petre},
  booktitle = {Proceedings of the 20th International Conference on Artificial Intelligence and Statistics},
  pages     = {1264--1272},
  year      = {2017},
  editor    = {Singh, Aarti and Zhu, Jerry},
  volume    = {54},
  series    = {Proceedings of Machine Learning Research},
  month     = {20--22 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v54/wagberg17a/wagberg17a.pdf},
  url       = {https://proceedings.mlr.press/v54/wagberg17a.html},
  abstract  = {This paper considers the quantification of the prediction performance in Gaussian process regression. The standard approach is to base the prediction error bars on the theoretical predictive variance, which is a lower bound on the mean square-error (MSE). This approach, however, does not take into account that the statistical model is learned from the data. We show that this omission leads to a systematic underestimation of the prediction errors. Starting from a generalization of the Cramér-Rao bound, we derive a more accurate MSE bound which provides a measure of uncertainty for prediction of Gaussian processes. The improved bound is easily computed and we illustrate it using synthetic and real data examples.}
}
Endnote
%0 Conference Paper
%T Prediction Performance After Learning in Gaussian Process Regression
%A Johan Wagberg
%A Dave Zachariah
%A Thomas Schon
%A Petre Stoica
%B Proceedings of the 20th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2017
%E Aarti Singh
%E Jerry Zhu
%F pmlr-v54-wagberg17a
%I PMLR
%P 1264--1272
%U https://proceedings.mlr.press/v54/wagberg17a.html
%V 54
%X This paper considers the quantification of the prediction performance in Gaussian process regression. The standard approach is to base the prediction error bars on the theoretical predictive variance, which is a lower bound on the mean square-error (MSE). This approach, however, does not take into account that the statistical model is learned from the data. We show that this omission leads to a systematic underestimation of the prediction errors. Starting from a generalization of the Cramér-Rao bound, we derive a more accurate MSE bound which provides a measure of uncertainty for prediction of Gaussian processes. The improved bound is easily computed and we illustrate it using synthetic and real data examples.
APA
Wagberg, J., Zachariah, D., Schon, T. &amp; Stoica, P. (2017). Prediction Performance After Learning in Gaussian Process Regression. Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 54:1264-1272. Available from https://proceedings.mlr.press/v54/wagberg17a.html.