On Valid Uncertainty Quantification About a Model

Ryan Martin
Proceedings of the Eleventh International Symposium on Imprecise Probabilities: Theories and Applications, PMLR 103:295-303, 2019.

Abstract

Inference on parameters within a given model is familiar, as is ranking different models for the purpose of selection. Less familiar, however, is the quantification of uncertainty about the models themselves. A Bayesian approach provides a posterior distribution over models, but it comes with no validity guarantees and is therefore suited only for ranking and selection. In this paper, I present an alternative view of this model-uncertainty problem, through the lens of a valid inferential model based on random sets and non-additive beliefs. Specifically, I show that valid uncertainty quantification about a model is attainable within this framework in general, and highlight the benefits in a classical signal detection problem.