How to Learn when Data Reacts to Your Model: Performative Gradient Descent

Zachary Izzo, Lexing Ying, James Zou
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:4641-4650, 2021.

Abstract

Performative distribution shift captures the setting where the choice of which ML model is deployed changes the data distribution. For example, a bank which uses the number of open credit lines to determine a customer’s risk of default on a loan may induce customers to open more credit lines in order to improve their chances of being approved. Because of the interactions between the model and the data distribution, finding the optimal model parameters is challenging. Prior work in this area has focused on finding stable points, which can be far from optimal. Here we introduce \emph{performative gradient descent} (PerfGD), an algorithm for computing performatively optimal points. Under regularity assumptions on the performative loss, PerfGD is the first algorithm which provably converges to an optimal point. PerfGD explicitly captures how changes in the model affect the data distribution and is simple to use. We support our findings with theory and experiments.
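
As a quick sketch of the quantity at stake (our notation, not taken verbatim from the paper, and assuming the model-induced distribution D(θ) has a density p_θ): the performative loss is the expected loss under the distribution the deployed model induces, and its gradient splits into two terms. Methods that seek stable points follow only the first term; PerfGD also estimates the second, which accounts for how the distribution moves with the model.

L(\theta) = \mathbb{E}_{z \sim \mathcal{D}(\theta)}[\ell(z; \theta)],
\qquad
\nabla L(\theta) = \mathbb{E}_{z \sim \mathcal{D}(\theta)}[\nabla_\theta \ell(z; \theta)] + \int \ell(z; \theta)\, \nabla_\theta p_\theta(z)\, dz.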

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-izzo21a,
  title     = {How to Learn when Data Reacts to Your Model: Performative Gradient Descent},
  author    = {Izzo, Zachary and Ying, Lexing and Zou, James},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {4641--4650},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/izzo21a/izzo21a.pdf},
  url       = {https://proceedings.mlr.press/v139/izzo21a.html},
  abstract  = {Performative distribution shift captures the setting where the choice of which ML model is deployed changes the data distribution. For example, a bank which uses the number of open credit lines to determine a customer’s risk of default on a loan may induce customers to open more credit lines in order to improve their chances of being approved. Because of the interactions between the model and the data distribution, finding the optimal model parameters is challenging. Prior work in this area has focused on finding stable points, which can be far from optimal. Here we introduce \emph{performative gradient descent} (PerfGD), an algorithm for computing performatively optimal points. Under regularity assumptions on the performative loss, PerfGD is the first algorithm which provably converges to an optimal point. PerfGD explicitly captures how changes in the model affect the data distribution and is simple to use. We support our findings with theory and experiments.}
}
Endnote
%0 Conference Paper
%T How to Learn when Data Reacts to Your Model: Performative Gradient Descent
%A Zachary Izzo
%A Lexing Ying
%A James Zou
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-izzo21a
%I PMLR
%P 4641--4650
%U https://proceedings.mlr.press/v139/izzo21a.html
%V 139
%X Performative distribution shift captures the setting where the choice of which ML model is deployed changes the data distribution. For example, a bank which uses the number of open credit lines to determine a customer’s risk of default on a loan may induce customers to open more credit lines in order to improve their chances of being approved. Because of the interactions between the model and the data distribution, finding the optimal model parameters is challenging. Prior work in this area has focused on finding stable points, which can be far from optimal. Here we introduce \emph{performative gradient descent} (PerfGD), an algorithm for computing performatively optimal points. Under regularity assumptions on the performative loss, PerfGD is the first algorithm which provably converges to an optimal point. PerfGD explicitly captures how changes in the model affect the data distribution and is simple to use. We support our findings with theory and experiments.
APA
Izzo, Z., Ying, L. & Zou, J. (2021). How to Learn when Data Reacts to Your Model: Performative Gradient Descent. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:4641-4650. Available from https://proceedings.mlr.press/v139/izzo21a.html.