Transformed Gaussian Processes for Characterizing a Model’s Discrepancy
Proceedings of the 15th Asian Conference on Machine Learning, PMLR 222:991-1006, 2024.
Abstract
Mathematical models of observational phenomena are at the core of the experimental sciences. By learning the parameters of such models from typically noisy observations, we can interpret and predict the phenomena under investigation. This process, however, assumes that the model itself is correct and that we are only uncertain about its parameters. In practice, this is rarely true; rather, the model is a simplification of the actual generative process. One proposed remedy is a post hoc investigation of how the model differs from reality, by explicitly modeling the discrepancy between the two. In this paper, we use transformed Gaussian processes as flexible models of this discrepancy. Our formulation relaxes the assumption that the model is correct by assuming it is correct only in expectation, and it directly supports both additive and multiplicative corrections, which have been treated separately in the literature, via suitable transformations. We demonstrate the approach in two example cases: modeling human growth (the relation between age and height) and modeling risk attitude (the relation between reward and utility). The former provides a simple example, while the latter highlights the importance of the transformations in obtaining meaningful information about the discrepancy.