# Fiducial inference viewed through a possibility-theoretic inferential model lens

*Proceedings of the Thirteenth International Symposium on Imprecise Probability: Theories and Applications*, PMLR 215:299-310, 2023.

#### Abstract

Fisher’s fiducial argument is widely viewed as a failed version of Neyman’s theory of confidence limits. But Fisher’s goal—Bayesian-like probabilistic uncertainty quantification without priors—was more ambitious than Neyman’s, and it’s not out of reach. I’ve recently shown that reliable, prior-free probabilistic uncertainty quantification must be grounded in the theory of imprecise probability, and I’ve put forward a possibility-theoretic solution that achieves it. This has been met with resistance, however, in part due to the statistical community’s singular focus on confidence limits. Indeed, if imprecision isn’t needed to answer confidence-limit-related questions, then what’s the point? In this paper, for a class of practically useful models, I explain specifically why the fiducial argument gives valid confidence limits, i.e., it’s the “best probabilistic approximation” of the possibilistic solution I recently advanced. This sheds new light on what the fiducial argument is doing and on what’s lost in terms of reliability when imprecision is ignored and the fiducial argument is pushed for more than just confidence limits.