Fiducial inference viewed through a possibility-theoretic inferential model lens

Ryan Martin
Proceedings of the Thirteenth International Symposium on Imprecise Probability: Theories and Applications, PMLR 215:299-310, 2023.

Abstract

Fisher’s fiducial argument is widely viewed as a failed version of Neyman’s theory of confidence limits. But Fisher’s goal—Bayesian-like probabilistic uncertainty quantification without priors—was more ambitious than Neyman’s, and it’s not out of reach. I’ve recently shown that reliable, prior-free probabilistic uncertainty quantification must be grounded in the theory of imprecise probability, and I’ve put forward a possibility-theoretic solution that achieves it. This has been met with resistance, however, in part due to the statistical community’s singular focus on confidence limits. Indeed, if imprecision isn’t needed to answer confidence-limit-related questions, then what’s the point? In this paper, for a class of practically useful models, I explain specifically why the fiducial argument gives valid confidence limits, i.e., it’s the “best probabilistic approximation” of the possibilistic solution I recently advanced. This sheds new light on what the fiducial argument is doing and on what’s lost in terms of reliability when imprecision is ignored and the fiducial argument is pushed for more than just confidence limits.

Cite this Paper

BibTeX
@InProceedings{pmlr-v215-martin23a,
  title     = {Fiducial inference viewed through a possibility-theoretic inferential model lens},
  author    = {Martin, Ryan},
  booktitle = {Proceedings of the Thirteenth International Symposium on Imprecise Probability: Theories and Applications},
  pages     = {299--310},
  year      = {2023},
  editor    = {Miranda, Enrique and Montes, Ignacio and Quaeghebeur, Erik and Vantaggi, Barbara},
  volume    = {215},
  series    = {Proceedings of Machine Learning Research},
  month     = {11--14 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v215/martin23a/martin23a.pdf},
  url       = {https://proceedings.mlr.press/v215/martin23a.html}
}
Endnote
%0 Conference Paper
%T Fiducial inference viewed through a possibility-theoretic inferential model lens
%A Ryan Martin
%B Proceedings of the Thirteenth International Symposium on Imprecise Probability: Theories and Applications
%C Proceedings of Machine Learning Research
%D 2023
%E Enrique Miranda
%E Ignacio Montes
%E Erik Quaeghebeur
%E Barbara Vantaggi
%F pmlr-v215-martin23a
%I PMLR
%P 299--310
%U https://proceedings.mlr.press/v215/martin23a.html
%V 215
APA
Martin, R. (2023). Fiducial inference viewed through a possibility-theoretic inferential model lens. Proceedings of the Thirteenth International Symposium on Imprecise Probability: Theories and Applications, in Proceedings of Machine Learning Research 215:299-310. Available from https://proceedings.mlr.press/v215/martin23a.html.