Uncertainty Quantification for Metamodels
Proceedings of the Thirteenth Symposium on Conformal and Probabilistic Prediction with Applications, PMLR 230:315-344, 2024.
Abstract
In the realm of computational science, metamodels serve as indispensable tools for approximating complex systems, facilitating the exploration of scenarios where traditional modelling may prove computationally infeasible. However, the inherent uncertainties within these metamodels, particularly those driven by Machine Learning (ML), necessitate rigorous quantification to ensure reliability and robustness in decision-making processes. One way to obtain uncertainty estimates is to use ML models with a native notion of uncertainty, such as Bayesian Neural Networks (BNNs); however, the repeated sampling needed to approximate the output distribution is computationally demanding and may defeat the purpose of building metamodels in the first place. Moreover, on datasets with a multidimensional input space and a limited number of training examples, the error estimates provided by BNNs are often of poor quality. This study explores alternative empirical approaches to uncertainty quantification based on knowledge extracted from the output space rather than the input space. By leveraging patterns in the magnitude of the error committed by the metamodel across the output space, we obtain a significant improvement in the adaptivity of prediction intervals over both pure Conformal Prediction (CP) and BNNs. Our findings underscore the potential of integrating diverse uncertainty quantification methods to strengthen the reliability of metamodels, providing robust and quantifiable confidence in their predictions.
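To illustrate the general idea of adapting conformal prediction intervals to patterns of error magnitude, the sketch below implements locally adaptive split conformal prediction in which nonconformity scores are normalized by a secondary model of the metamodel's absolute error, fitted as a function of the metamodel's own output. This is a minimal sketch of the generic technique, not the paper's exact algorithm; the names `metamodel`, `error_model`, the synthetic data, and the choice of scikit-learn regressors are assumptions made for illustration.

```python
# Minimal sketch (not the authors' exact method): split conformal prediction with
# nonconformity scores normalized by a model of error magnitude learned in output
# space, so intervals widen where the metamodel tends to err more.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(2000, 4))
y = np.sin(X[:, 0]) + 0.1 * X[:, 1] ** 2 + rng.normal(0, 0.1 + 0.2 * np.abs(X[:, 0]), size=2000)

X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

# 1. Fit the metamodel (here a plain gradient-boosted regressor) on the training split.
metamodel = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# 2. Fit a secondary model of the absolute error as a function of the metamodel's
#    output (output space), rather than of the input features.
yhat_train = metamodel.predict(X_train)
abs_resid_train = np.abs(y_train - yhat_train)
error_model = GradientBoostingRegressor(random_state=0).fit(yhat_train.reshape(-1, 1), abs_resid_train)

# 3. Normalized nonconformity scores on the calibration split.
eps = 1e-8
yhat_cal = metamodel.predict(X_cal)
sigma_cal = error_model.predict(yhat_cal.reshape(-1, 1)) + eps
scores = np.abs(y_cal - yhat_cal) / sigma_cal

# 4. Conformal quantile at miscoverage level alpha.
alpha = 0.1
n = len(scores)
q = np.quantile(scores, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n), method="higher")

# 5. Adaptive prediction intervals: wider where the predicted error magnitude is larger.
yhat_test = metamodel.predict(X_test)
sigma_test = error_model.predict(yhat_test.reshape(-1, 1)) + eps
lower, upper = yhat_test - q * sigma_test, yhat_test + q * sigma_test

coverage = np.mean((y_test >= lower) & (y_test <= upper))
print(f"Empirical coverage: {coverage:.3f}, mean interval width: {np.mean(upper - lower):.3f}")
```

Under this construction the intervals retain the marginal coverage guarantee of split conformal prediction while their width varies with the predicted error magnitude, which is the kind of adaptivity the abstract contrasts with intervals of constant width produced by pure CP.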