Incentivizing honest performative predictions with proper scoring rules

Caspar Oesterheld, Johannes Treutlein, Emery Cooper, Rubi Hudson
Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence, PMLR 216:1564-1574, 2023.

Abstract

Proper scoring rules incentivize experts to accurately report beliefs, assuming predictions cannot influence outcomes. We relax this assumption and investigate incentives when predictions are performative, i.e., when they can influence the outcome of the prediction, such as when making public predictions about the stock market. We say a prediction is a fixed point if it accurately reflects the expert’s beliefs after that prediction has been made. We show that in this setting, reports maximizing expected score generally do not reflect an expert’s beliefs, and we give bounds on the inaccuracy of such reports. We show that, for binary predictions, if the influence of the expert’s prediction on outcomes is bounded, it is possible to define scoring rules under which optimal reports are arbitrarily close to fixed points. However, this is impossible for predictions over more than two outcomes. We also perform numerical simulations in a toy setting, showing that our bounds are tight in some situations and that prediction error is often substantial (greater than 5-10%). Lastly, we discuss alternative notions of optimality, including performative stability, and show that they incentivize reporting fixed points.
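
As a concrete illustration of the setting described above, here is a minimal numerical sketch (not from the paper; the outcome model p_given_report and its base/influence parameters are invented for illustration). It uses the quadratic (Brier) scoring rule, which is proper when predictions cannot influence outcomes, together with a hypothetical outcome probability that depends linearly on the report, and compares the score-maximizing report against the fixed point.

import numpy as np

# Toy sketch (not from the paper): compare the score-maximizing report
# with the fixed-point report when the prediction influences the outcome.
# Assumptions: binary outcome, quadratic (Brier) scoring rule, and an
# invented outcome probability p(r) that depends linearly on the report r.

def brier_score(r, outcome):
    # Quadratic scoring rule for a binary prediction r in [0, 1];
    # proper when the outcome distribution does not depend on r.
    return 1.0 - (outcome - r) ** 2

def p_given_report(r, base=0.6, influence=0.2):
    # Hypothetical performative outcome model: the expert's public
    # report shifts the true probability of the event.
    return base + influence * r

def expected_score(r):
    # Expected score of report r under the expert's post-report belief.
    p = p_given_report(r)
    return p * brier_score(r, 1.0) + (1.0 - p) * brier_score(r, 0.0)

grid = np.linspace(0.0, 1.0, 100001)
optimal_report = grid[np.argmax(expected_score(grid))]

# Fixed point: a report r* with p(r*) = r*, i.e., a prediction that is
# accurate given that it has been made.
fixed_point = grid[np.argmin(np.abs(p_given_report(grid) - grid))]

print(f"score-maximizing report: {optimal_report:.3f}")  # ~0.833
print(f"fixed point:             {fixed_point:.3f}")     # 0.750

In this instance the score-maximizing report (about 0.833) overshoots the fixed point (0.750): the expert is rewarded for exaggerating in the direction in which the report pushes the outcome, illustrating the abstract's claim that optimal reports generally do not reflect the expert's beliefs after the prediction has been made.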

Cite this Paper


BibTeX
@InProceedings{pmlr-v216-oesterheld23a,
  title     = {Incentivizing honest performative predictions with proper scoring rules},
  author    = {Oesterheld, Caspar and Treutlein, Johannes and Cooper, Emery and Hudson, Rubi},
  booktitle = {Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence},
  pages     = {1564--1574},
  year      = {2023},
  editor    = {Evans, Robin J. and Shpitser, Ilya},
  volume    = {216},
  series    = {Proceedings of Machine Learning Research},
  month     = {31 Jul--04 Aug},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v216/oesterheld23a/oesterheld23a.pdf},
  url       = {https://proceedings.mlr.press/v216/oesterheld23a.html},
}
APA
Oesterheld, C., Treutlein, J., Cooper, E. & Hudson, R. (2023). Incentivizing honest performative predictions with proper scoring rules. Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 216:1564-1574. Available from https://proceedings.mlr.press/v216/oesterheld23a.html.
