Learning in Stochastic Monotone Games with Decision-Dependent Data

Adhyyan Narang, Evan Faulkner, Dmitriy Drusvyatskiy, Maryam Fazel, Lillian Ratliff
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:5891-5912, 2022.

Abstract

Learning problems commonly exhibit an interesting feedback mechanism wherein the population data reacts to competing decision makers’ actions. This paper formulates a new game theoretic framework for this phenomenon, called multi-player performative prediction. We establish transparent sufficient conditions for strong monotonicity of the game and use them to develop algorithms for finding Nash equilibria. We investigate derivative free methods and adaptive gradient algorithms wherein each player alternates between learning a parametric description of their distribution and gradient steps on the empirical risk. Synthetic and semi-synthetic numerical experiments illustrate the results.
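To make the setting concrete, the sketch below shows repeated stochastic gradient play in a simple two-player game with decision-dependent data. This is an illustrative assumption-laden toy, not the paper's exact algorithm: the quadratic losses, the linear "location family" distribution map, and all matrices, dimensions, and step sizes are made up for the example.

    # Hypothetical sketch: repeated stochastic gradient play with
    # decision-dependent data (not the paper's exact method).
    # Each player's data is drawn from a distribution whose mean shifts
    # linearly with BOTH players' current decisions.
    import numpy as np

    rng = np.random.default_rng(0)
    d = 3                                   # dimension of each player's decision
    b = [rng.normal(size=d), rng.normal(size=d)]   # base means of the data
    # Distribution-map sensitivities; small norms keep the game well conditioned.
    A = [[0.2 * np.eye(d), 0.1 * np.eye(d)],
         [0.1 * np.eye(d), 0.2 * np.eye(d)]]

    def sample(i, theta):
        """Draw one data point for player i from the decision-dependent distribution."""
        mean = b[i] + A[i][0] @ theta[0] + A[i][1] @ theta[1]
        return mean + 0.1 * rng.normal(size=d)

    theta = [np.zeros(d), np.zeros(d)]
    for t in range(2000):
        step = 0.1 / np.sqrt(t + 1)         # decaying step size
        new_theta = []
        for i in range(2):
            z = sample(i, theta)            # data reacts to both players' decisions
            grad = theta[i] - z             # gradient of 0.5 * ||theta_i - z||^2
            new_theta.append(theta[i] - step * grad)
        theta = new_theta

    print("approximate equilibrium decisions:", theta[0], theta[1])

Under strong-monotonicity-type conditions such as those established in the paper, this kind of repeated gradient play settles at a stable joint decision of the game; the paper's adaptive variant additionally has each player fit a parametric model of its distribution between gradient steps.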

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-narang22a,
  title     = {Learning in Stochastic Monotone Games with Decision-Dependent Data},
  author    = {Narang, Adhyyan and Faulkner, Evan and Drusvyatskiy, Dmitriy and Fazel, Maryam and Ratliff, Lillian},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {5891--5912},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/narang22a/narang22a.pdf},
  url       = {https://proceedings.mlr.press/v151/narang22a.html},
  abstract  = {Learning problems commonly exhibit an interesting feedback mechanism wherein the population data reacts to competing decision makers’ actions. This paper formulates a new game theoretic framework for this phenomenon, called multi-player performative prediction. We establish transparent sufficient conditions for strong monotonicity of the game and use them to develop algorithms for finding Nash equilibria. We investigate derivative free methods and adaptive gradient algorithms wherein each player alternates between learning a parametric description of their distribution and gradient steps on the empirical risk. Synthetic and semi-synthetic numerical experiments illustrate the results.}
}
Endnote
%0 Conference Paper
%T Learning in Stochastic Monotone Games with Decision-Dependent Data
%A Adhyyan Narang
%A Evan Faulkner
%A Dmitriy Drusvyatskiy
%A Maryam Fazel
%A Lillian Ratliff
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-narang22a
%I PMLR
%P 5891--5912
%U https://proceedings.mlr.press/v151/narang22a.html
%V 151
%X Learning problems commonly exhibit an interesting feedback mechanism wherein the population data reacts to competing decision makers’ actions. This paper formulates a new game theoretic framework for this phenomenon, called multi-player performative prediction. We establish transparent sufficient conditions for strong monotonicity of the game and use them to develop algorithms for finding Nash equilibria. We investigate derivative free methods and adaptive gradient algorithms wherein each player alternates between learning a parametric description of their distribution and gradient steps on the empirical risk. Synthetic and semi-synthetic numerical experiments illustrate the results.
APA
Narang, A., Faulkner, E., Drusvyatskiy, D., Fazel, M. & Ratliff, L. (2022). Learning in Stochastic Monotone Games with Decision-Dependent Data. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:5891-5912. Available from https://proceedings.mlr.press/v151/narang22a.html.