On Valid Uncertainty Quantification About a Model

Ryan Martin
Proceedings of the Eleventh International Symposium on Imprecise Probabilities: Theories and Applications, PMLR 103:295-303, 2019.

Abstract

Inference on parameters within a given model is familiar, as is ranking different models for the purpose of selection. Less familiar, however, is the quantification of uncertainty about the models themselves. A Bayesian approach provides a posterior distribution for the model but it comes with no validity guarantees, and, therefore, is only suited for ranking and selection. In this paper, I will present an alternative way to view this model uncertainty problem, through the lens of a valid inferential model based on random sets and non-additive beliefs. Specifically, I will show that valid uncertainty quantification about a model is attainable within this framework in general, and highlight the benefits in a classical signal detection problem.

Cite this Paper


BibTeX
@InProceedings{pmlr-v103-martin19b,
  title     = {On Valid Uncertainty Quantification About a Model},
  author    = {Martin, Ryan},
  booktitle = {Proceedings of the Eleventh International Symposium on Imprecise Probabilities: Theories and Applications},
  pages     = {295--303},
  year      = {2019},
  editor    = {De Bock, Jasper and de Campos, Cassio P. and de Cooman, Gert and Quaeghebeur, Erik and Wheeler, Gregory},
  volume    = {103},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--06 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v103/martin19b/martin19b.pdf},
  url       = {https://proceedings.mlr.press/v103/martin19b.html},
  abstract  = {Inference on parameters within a given model is familiar, as is ranking different models for the purpose of selection. Less familiar, however, is the quantification of uncertainty about the models themselves. A Bayesian approach provides a posterior distribution for the model but it comes with no validity guarantees, and, therefore, is only suited for ranking and selection. In this paper, I will present an alternative way to view this model uncertainty problem, through the lens of a valid inferential model based on random sets and non-additive beliefs. Specifically, I will show that valid uncertainty quantification about a model is attainable within this framework in general, and highlight the benefits in a classical signal detection problem.}
}
Endnote
%0 Conference Paper
%T On Valid Uncertainty Quantification About a Model
%A Ryan Martin
%B Proceedings of the Eleventh International Symposium on Imprecise Probabilities: Theories and Applications
%C Proceedings of Machine Learning Research
%D 2019
%E Jasper De Bock
%E Cassio P. de Campos
%E Gert de Cooman
%E Erik Quaeghebeur
%E Gregory Wheeler
%F pmlr-v103-martin19b
%I PMLR
%P 295--303
%U https://proceedings.mlr.press/v103/martin19b.html
%V 103
%X Inference on parameters within a given model is familiar, as is ranking different models for the purpose of selection. Less familiar, however, is the quantification of uncertainty about the models themselves. A Bayesian approach provides a posterior distribution for the model but it comes with no validity guarantees, and, therefore, is only suited for ranking and selection. In this paper, I will present an alternative way to view this model uncertainty problem, through the lens of a valid inferential model based on random sets and non-additive beliefs. Specifically, I will show that valid uncertainty quantification about a model is attainable within this framework in general, and highlight the benefits in a classical signal detection problem.
APA
Martin, R. (2019). On Valid Uncertainty Quantification About a Model. Proceedings of the Eleventh International Symposium on Imprecise Probabilities: Theories and Applications, in Proceedings of Machine Learning Research 103:295-303. Available from https://proceedings.mlr.press/v103/martin19b.html.