Beyond Bernoulli: Generating Random Outcomes that cannot be Distinguished from Nature

Cynthia Dwork, Michael P. Kim, Omer Reingold, Guy N. Rothblum, Gal Yona
Proceedings of The 33rd International Conference on Algorithmic Learning Theory, PMLR 167:342-380, 2022.

Abstract

Recently, Dwork et al. (STOC 2021) introduced Outcome Indistinguishability as a new desideratum for binary prediction tasks. Outcome Indistinguishability (OI) articulates the goals of prediction in the language of computational indistinguishability: a predictor is Outcome Indistinguishable if no computationally-bounded observer can distinguish Nature’s outcomes from outcomes that are generated based on the predictions. In this sense, OI suggests a generative model for binary outcomes that cannot be refuted given the empirical evidence and computational resources at hand. In this work, we extend Outcome Indistinguishability beyond Bernoulli, to outcomes that live in a large discrete or continuous domain. While the idea of OI for non-binary outcomes is natural for many applications, defining OI in generality is not simply a syntactic exercise. We introduce and study multiple definitions of OI—each with its own semantics—for predictors that completely specify each individual’s outcome distribution, as well as predictors that only partially specify the outcome distributions through statistics, such as moments. With the definitions in place, we provide learning algorithms for producing OI generative outcome models for general random outcomes. Finally, we study the relation of Outcome Indistinguishability and Multicalibration of statistics (beyond the mean) and relate our findings to the recent work of Jung et al. (COLT 2021) on Moment Multicalibration. We find an equivalence between Outcome Indistinguishability and Multicalibration that is more subtle than in the binary case and sheds light on the techniques employed by Jung et al. to obtain Moment Multicalibration.
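To make the distinguishing game concrete, here is a minimal illustrative sketch (not taken from the paper) of the binary, sample-access version of OI. A distinguisher sees $(x, y)$ pairs where $y$ is drawn either from Nature's conditional distribution or from the predictor's, and tries to tell which; the predictor is (approximately) Outcome Indistinguishable against this distinguisher if the acceptance rates are close. The functions `nature`, `flat_model`, and the distinguisher below are hypothetical placeholders for the quantities in the definition.

```python
import random

def oi_advantage(xs, nature_p, model_p, distinguisher, trials=10000, seed=0):
    """Estimate a distinguisher's advantage in the sample-access OI game:
    it receives (x, y) pairs with y drawn either from Nature's Bernoulli
    rate nature_p(x) or from the predictor's rate model_p(x), and its
    advantage is the gap between its acceptance rates on the two sources."""
    rng = random.Random(seed)
    accept_nature = accept_model = 0
    for _ in range(trials):
        x = rng.choice(xs)
        y_nat = 1 if rng.random() < nature_p(x) else 0  # Nature's outcome
        y_mod = 1 if rng.random() < model_p(x) else 0   # predictor's outcome
        accept_nature += distinguisher(x, y_nat)
        accept_model += distinguisher(x, y_mod)
    return abs(accept_nature - accept_model) / trials

# Hypothetical setup: Nature's rate depends on x; a flat model ignores it.
xs = list(range(10))
nature = lambda x: x / 10.0
flat_model = lambda x: 0.5
# A distinguisher that checks whether the outcome agrees with x being large.
dist = lambda x, y: int(y == (x >= 5))
adv_bad = oi_advantage(xs, nature, flat_model, dist)   # noticeably positive
adv_good = oi_advantage(xs, nature, nature, dist)      # near zero
```

A predictor that exactly matches Nature's conditional distribution fools every distinguisher up to sampling noise, while the flat model is refuted by even this simple test; the non-binary definitions in the paper generalize this game to richer outcome spaces and to partially specified predictors.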

Cite this Paper


BibTeX
@InProceedings{pmlr-v167-dwork22a,
  title = {Beyond Bernoulli: Generating Random Outcomes that cannot be Distinguished from Nature},
  author = {Dwork, Cynthia and Kim, {Michael P.} and Reingold, Omer and Rothblum, {Guy N.} and Yona, Gal},
  booktitle = {Proceedings of The 33rd International Conference on Algorithmic Learning Theory},
  pages = {342--380},
  year = {2022},
  editor = {Dasgupta, Sanjoy and Haghtalab, Nika},
  volume = {167},
  series = {Proceedings of Machine Learning Research},
  month = {29 Mar--01 Apr},
  publisher = {PMLR},
  pdf = {https://proceedings.mlr.press/v167/dwork22a/dwork22a.pdf},
  url = {https://proceedings.mlr.press/v167/dwork22a.html},
  abstract = {Recently, Dwork et al. (STOC 2021) introduced Outcome Indistinguishability as a new desideratum for binary prediction tasks. Outcome Indistinguishability (OI) articulates the goals of prediction in the language of computational indistinguishability: a predictor is Outcome Indistinguishable if no computationally-bounded observer can distinguish Nature’s outcomes from outcomes that are generated based on the predictions. In this sense, OI suggests a generative model for binary outcomes that cannot be refuted given the empirical evidence and computational resources at hand. In this work, we extend Outcome Indistinguishability beyond Bernoulli, to outcomes that live in a large discrete or continuous domain. While the idea of OI for non-binary outcomes is natural for many applications, defining OI in generality is not simply a syntactic exercise. We introduce and study multiple definitions of OI—each with its own semantics—for predictors that completely specify each individual’s outcome distribution, as well as predictors that only partially specify the outcome distributions through statistics, such as moments. With the definitions in place, we provide learning algorithms for producing OI generative outcome models for general random outcomes. Finally, we study the relation of Outcome Indistinguishability and Multicalibration of statistics (beyond the mean) and relate our findings to the recent work of Jung et al. (COLT 2021) on Moment Multicalibration. We find an equivalence between Outcome Indistinguishability and Multicalibration that is more subtle than in the binary case and sheds light on the techniques employed by Jung et al. to obtain Moment Multicalibration.}
}
Endnote
%0 Conference Paper
%T Beyond Bernoulli: Generating Random Outcomes that cannot be Distinguished from Nature
%A Cynthia Dwork
%A Michael P. Kim
%A Omer Reingold
%A Guy N. Rothblum
%A Gal Yona
%B Proceedings of The 33rd International Conference on Algorithmic Learning Theory
%C Proceedings of Machine Learning Research
%D 2022
%E Sanjoy Dasgupta
%E Nika Haghtalab
%F pmlr-v167-dwork22a
%I PMLR
%P 342--380
%U https://proceedings.mlr.press/v167/dwork22a.html
%V 167
%X Recently, Dwork et al. (STOC 2021) introduced Outcome Indistinguishability as a new desideratum for binary prediction tasks. Outcome Indistinguishability (OI) articulates the goals of prediction in the language of computational indistinguishability: a predictor is Outcome Indistinguishable if no computationally-bounded observer can distinguish Nature’s outcomes from outcomes that are generated based on the predictions. In this sense, OI suggests a generative model for binary outcomes that cannot be refuted given the empirical evidence and computational resources at hand. In this work, we extend Outcome Indistinguishability beyond Bernoulli, to outcomes that live in a large discrete or continuous domain. While the idea of OI for non-binary outcomes is natural for many applications, defining OI in generality is not simply a syntactic exercise. We introduce and study multiple definitions of OI—each with its own semantics—for predictors that completely specify each individual’s outcome distribution, as well as predictors that only partially specify the outcome distributions through statistics, such as moments. With the definitions in place, we provide learning algorithms for producing OI generative outcome models for general random outcomes. Finally, we study the relation of Outcome Indistinguishability and Multicalibration of statistics (beyond the mean) and relate our findings to the recent work of Jung et al. (COLT 2021) on Moment Multicalibration. We find an equivalence between Outcome Indistinguishability and Multicalibration that is more subtle than in the binary case and sheds light on the techniques employed by Jung et al. to obtain Moment Multicalibration.
APA
Dwork, C., Kim, M.P., Reingold, O., Rothblum, G.N. & Yona, G. (2022). Beyond Bernoulli: Generating Random Outcomes that cannot be Distinguished from Nature. Proceedings of The 33rd International Conference on Algorithmic Learning Theory, in Proceedings of Machine Learning Research 167:342-380. Available from https://proceedings.mlr.press/v167/dwork22a.html.
