Transformed Gaussian Processes for Characterizing a Model’s Discrepancy

Aurélien Nioche, Ville Tanskanen, Marcelo Hartmann, Arto Klami
Proceedings of the 15th Asian Conference on Machine Learning, PMLR 222:991-1006, 2024.

Abstract

Mathematical models of observational phenomena are at the core of experimental sciences. By learning the parameters of such models from typically noisy observations, we can interpret and predict the phenomena under investigation. This process, however, assumes that the model itself is correct and that we are only uncertain of its parameters. In practice, this is rarely true, but rather the model is a simplification of the actual generative process. One proposed remedy is a post hoc investigation of how the model differs from reality, by explicitly modeling the discrepancy between the two. In this paper, we use transformed Gaussian processes as flexible models for this. Our formulation relaxes the assumption on the correctness of the model by assuming it is only correct in expectation, and it directly supports both additive and multiplicative corrections, treated separately in the literature, using suitable transformations. We demonstrate the approach in two example cases: modeling human growth (relation age-height) and modeling the risk attitude (relation reward-utility). The former provides a simple example, while the second case highlights the importance of the transformations in obtaining meaningful information about the discrepancy.
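The additive-versus-multiplicative distinction in the abstract can be sketched numerically. Below is a minimal illustration (not the paper's exact method, which uses transformed Gaussian processes within a full probabilistic treatment): a GP posterior mean is fit to the residuals of an assumed model for an additive correction, and to the log-ratio for a multiplicative one, where the log transform plays the role of mapping the multiplicative discrepancy onto the GP's unconstrained scale. The toy model `f(x) = 2x` and the true process `2x + sin(x)` are invented for illustration.

```python
import numpy as np

# Hedged sketch: characterize a model's discrepancy by fitting a GP to
# residuals. An additive correction models y - f(x) directly; a
# multiplicative one models log(y / f(x)) and exponentiates the result,
# which requires y and f(x) to be positive.

def rbf(a, b, ell=1.0, var=1.0):
    """Squared-exponential kernel on 1-D inputs."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / ell) ** 2)

def gp_posterior_mean(x, r, x_star, noise=1e-2):
    """Posterior mean of a zero-mean GP observed at (x, r), queried at x_star."""
    K = rbf(x, x) + noise * np.eye(len(x))
    return rbf(x_star, x) @ np.linalg.solve(K, r)

# Toy data: assumed model f(x) = 2x, true process 2x + sin(x)
x = np.linspace(0.0, 6.0, 50)
y = 2 * x + np.sin(x)
f = 2 * x

# Additive discrepancy: GP on raw residuals
delta_add = gp_posterior_mean(x, y - f, x)

# Multiplicative discrepancy: GP on log-ratio (positivity required)
mask = (y > 0) & (f > 0)
delta_mul = np.exp(gp_posterior_mean(x[mask], np.log(y[mask] / f[mask]), x[mask]))
```

Here `delta_add` recovers the sin-shaped additive gap, while `delta_mul` gives the factor by which the assumed model is off; which parameterization is meaningful depends on the domain, which is the point the paper's second case study makes.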

Cite this Paper


BibTeX
@InProceedings{pmlr-v222-nioche24a,
  title     = {Transformed Gaussian Processes for Characterizing a Model’s Discrepancy},
  author    = {Nioche, Aur\'elien and Tanskanen, Ville and Hartmann, Marcelo and Klami, Arto},
  booktitle = {Proceedings of the 15th Asian Conference on Machine Learning},
  pages     = {991--1006},
  year      = {2024},
  editor    = {Yanıkoğlu, Berrin and Buntine, Wray},
  volume    = {222},
  series    = {Proceedings of Machine Learning Research},
  month     = {11--14 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v222/nioche24a/nioche24a.pdf},
  url       = {https://proceedings.mlr.press/v222/nioche24a.html},
  abstract  = {Mathematical models of observational phenomena are at the core of experimental sciences. By learning the parameters of such models from typically noisy observations, we can interpret and predict the phenomena under investigation. This process, however, assumes that the model itself is correct and that we are only uncertain of its parameters. In practice, this is rarely true, but rather the model is a simplification of the actual generative process. One proposed remedy is a post hoc investigation of how the model differs from reality, by explicitly modeling the discrepancy between the two. In this paper, we use transformed Gaussian processes as flexible models for this. Our formulation relaxes the assumption on the correctness of the model by assuming it is only correct in expectation, and it directly supports both additive and multiplicative corrections, treated separately in the literature, using suitable transformations. We demonstrate the approach in two example cases: modeling human growth (relation age-height) and modeling the risk attitude (relation reward-utility). The former provides a simple example, while the second case highlights the importance of the transformations in obtaining meaningful information about the discrepancy.}
}
Endnote
%0 Conference Paper
%T Transformed Gaussian Processes for Characterizing a Model’s Discrepancy
%A Aurélien Nioche
%A Ville Tanskanen
%A Marcelo Hartmann
%A Arto Klami
%B Proceedings of the 15th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Berrin Yanıkoğlu
%E Wray Buntine
%F pmlr-v222-nioche24a
%I PMLR
%P 991--1006
%U https://proceedings.mlr.press/v222/nioche24a.html
%V 222
%X Mathematical models of observational phenomena are at the core of experimental sciences. By learning the parameters of such models from typically noisy observations, we can interpret and predict the phenomena under investigation. This process, however, assumes that the model itself is correct and that we are only uncertain of its parameters. In practice, this is rarely true, but rather the model is a simplification of the actual generative process. One proposed remedy is a post hoc investigation of how the model differs from reality, by explicitly modeling the discrepancy between the two. In this paper, we use transformed Gaussian processes as flexible models for this. Our formulation relaxes the assumption on the correctness of the model by assuming it is only correct in expectation, and it directly supports both additive and multiplicative corrections, treated separately in the literature, using suitable transformations. We demonstrate the approach in two example cases: modeling human growth (relation age-height) and modeling the risk attitude (relation reward-utility). The former provides a simple example, while the second case highlights the importance of the transformations in obtaining meaningful information about the discrepancy.
APA
Nioche, A., Tanskanen, V., Hartmann, M. & Klami, A. (2024). Transformed Gaussian Processes for Characterizing a Model’s Discrepancy. Proceedings of the 15th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 222:991-1006. Available from https://proceedings.mlr.press/v222/nioche24a.html.