Strategic Classification in the Dark

Ganesh Ghalme, Vineet Nair, Itay Eilat, Inbal Talgam-Cohen, Nir Rosenfeld
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:3672-3681, 2021.

Abstract

Strategic classification studies the interaction between a classification rule and the strategic agents it governs. Agents respond by manipulating their features, under the assumption that the classifier is known. However, in many real-life scenarios of high-stakes classification (e.g., credit scoring), the classifier is not revealed to the agents, which leads agents to attempt to learn the classifier and game it too. In this paper we generalize the strategic classification model to such scenarios and analyze the effect of an unknown classifier. We define the "price of opacity" as the difference between the prediction error under the opaque and transparent policies, characterize it, and give a sufficient condition for it to be strictly positive, in which case transparency is the recommended policy. Our experiments show how Hardt et al.'s robust classifier is affected by keeping agents in the dark.

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-ghalme21a,
  title     = {Strategic Classification in the Dark},
  author    = {Ghalme, Ganesh and Nair, Vineet and Eilat, Itay and Talgam-Cohen, Inbal and Rosenfeld, Nir},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {3672--3681},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/ghalme21a/ghalme21a.pdf},
  url       = {https://proceedings.mlr.press/v139/ghalme21a.html},
  abstract  = {Strategic classification studies the interaction between a classification rule and the strategic agents it governs. Agents respond by manipulating their features, under the assumption that the classifier is known. However, in many real-life scenarios of high-stakes classification (e.g., credit scoring), the classifier is not revealed to the agents, which leads agents to attempt to learn the classifier and game it too. In this paper we generalize the strategic classification model to such scenarios and analyze the effect of an unknown classifier. We define the ``price of opacity'' as the difference between the prediction error under the opaque and transparent policies, characterize it, and give a sufficient condition for it to be strictly positive, in which case transparency is the recommended policy. Our experiments show how Hardt et al.'s robust classifier is affected by keeping agents in the dark.}
}
Endnote
%0 Conference Paper %T Strategic Classification in the Dark %A Ganesh Ghalme %A Vineet Nair %A Itay Eilat %A Inbal Talgam-Cohen %A Nir Rosenfeld %B Proceedings of the 38th International Conference on Machine Learning %C Proceedings of Machine Learning Research %D 2021 %E Marina Meila %E Tong Zhang %F pmlr-v139-ghalme21a %I PMLR %P 3672--3681 %U https://proceedings.mlr.press/v139/ghalme21a.html %V 139 %X Strategic classification studies the interaction between a classification rule and the strategic agents it governs. Agents respond by manipulating their features, under the assumption that the classifier is known. However, in many real-life scenarios of high-stakes classification (e.g., credit scoring), the classifier is not revealed to the agents, which leads agents to attempt to learn the classifier and game it too. In this paper we generalize the strategic classification model to such scenarios and analyze the effect of an unknown classifier. We define the "price of opacity" as the difference between the prediction error under the opaque and transparent policies, characterize it, and give a sufficient condition for it to be strictly positive, in which case transparency is the recommended policy. Our experiments show how Hardt et al.'s robust classifier is affected by keeping agents in the dark.
APA
Ghalme, G., Nair, V., Eilat, I., Talgam-Cohen, I., & Rosenfeld, N. (2021). Strategic Classification in the Dark. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:3672-3681. Available from https://proceedings.mlr.press/v139/ghalme21a.html.