Total Evidence and Learning with Imprecise Probabilities

Ruobin Gong, Joseph B. Kadane, Mark J. Schervish, Teddy Seidenfeld
Proceedings of the Twelfth International Symposium on Imprecise Probability: Theories and Applications, PMLR 147:161-168, 2021.

Abstract

In dynamic learning, a rational agent must revise their credence about a question of interest in accordance with the total evidence available between the earlier and later times. We discuss situations in which an observable event $F$ that is sufficient for the total evidence can be identified, yet its probabilistic modeling cannot be performed in a precise manner. The agent may employ imprecise (IP) models of reasoning to account for the identified sufficient event, and perform change of credence or sequential decisions accordingly. Our proposal is illustrated with three case studies: the classic Monty Hall problem, statistical inference with non-ignorable missing data, and the use of forward induction in a two-person sequential game.
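As a small illustration of the abstract's first case study (not code from the paper itself): in the Monty Hall problem, if the host's tie-breaking behavior is not precisely known, the probability $q$ that he opens door 3 when the car is behind the contestant's chosen door 1 can be left imprecise, say $q \in [0, 1]$. The posterior probability that staying wins then ranges over an interval rather than being a single number. A minimal Python sketch, under that assumed credal set:

```python
def posterior_stay(q):
    """P(car behind door 1 | host opens door 3), given host bias q.

    Prior: P(car = i) = 1/3 for each door i.
    Host opens door 3 with prob. q if car = 1 (he has a choice),
    with prob. 1 if car = 2, and with prob. 0 if car = 3.
    """
    return (q / 3) / (q / 3 + 1 / 3)  # simplifies to q / (1 + q)

# Lower and upper posteriors over the credal set q in [0, 1]:
lower = posterior_stay(0.0)  # staying can be as bad as probability 0
upper = posterior_stay(1.0)  # and never better than probability 1/2
```

Since the posterior for staying lies in $[0, 1/2]$ for every $q$, switching (probability $1/(1+q) \in [1/2, 1]$) is never worse, which is the classical conclusion recovered under imprecision.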

Cite this Paper


BibTeX
@InProceedings{pmlr-v147-gong21a,
  title = {Total Evidence and Learning with Imprecise Probabilities},
  author = {Gong, Ruobin and Kadane, Joseph B. and Schervish, Mark J. and Seidenfeld, Teddy},
  booktitle = {Proceedings of the Twelfth International Symposium on Imprecise Probability: Theories and Applications},
  pages = {161--168},
  year = {2021},
  editor = {Cano, Andrés and De Bock, Jasper and Miranda, Enrique and Moral, Serafín},
  volume = {147},
  series = {Proceedings of Machine Learning Research},
  month = {06--09 Jul},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v147/gong21a/gong21a.pdf},
  url = {https://proceedings.mlr.press/v147/gong21a.html},
  abstract = {In dynamic learning, a rational agent must revise their credence about a question of interest in accordance with the total evidence available between the earlier and later times. We discuss situations in which an observable event $F$ that is sufficient for the total evidence can be identified, yet its probabilistic modeling cannot be performed in a precise manner. The agent may employ imprecise (IP) models of reasoning to account for the identified sufficient event, and perform change of credence or sequential decisions accordingly. Our proposal is illustrated with three case studies: the classic Monty Hall problem, statistical inference with non-ignorable missing data, and the use of forward induction in a two-person sequential game.}
}
Endnote
%0 Conference Paper
%T Total Evidence and Learning with Imprecise Probabilities
%A Ruobin Gong
%A Joseph B. Kadane
%A Mark J. Schervish
%A Teddy Seidenfeld
%B Proceedings of the Twelfth International Symposium on Imprecise Probability: Theories and Applications
%C Proceedings of Machine Learning Research
%D 2021
%E Andrés Cano
%E Jasper De Bock
%E Enrique Miranda
%E Serafín Moral
%F pmlr-v147-gong21a
%I PMLR
%P 161--168
%U https://proceedings.mlr.press/v147/gong21a.html
%V 147
%X In dynamic learning, a rational agent must revise their credence about a question of interest in accordance with the total evidence available between the earlier and later times. We discuss situations in which an observable event $F$ that is sufficient for the total evidence can be identified, yet its probabilistic modeling cannot be performed in a precise manner. The agent may employ imprecise (IP) models of reasoning to account for the identified sufficient event, and perform change of credence or sequential decisions accordingly. Our proposal is illustrated with three case studies: the classic Monty Hall problem, statistical inference with non-ignorable missing data, and the use of forward induction in a two-person sequential game.
APA
Gong, R., Kadane, J.B., Schervish, M.J. & Seidenfeld, T. (2021). Total Evidence and Learning with Imprecise Probabilities. Proceedings of the Twelfth International Symposium on Imprecise Probability: Theories and Applications, in Proceedings of Machine Learning Research 147:161-168. Available from https://proceedings.mlr.press/v147/gong21a.html.