PAC-Learning for Strategic Classification

Ravi Sundaram, Anil Vullikanti, Haifeng Xu, Fan Yao
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:9978-9988, 2021.

Abstract

The study of strategic or adversarial manipulation of testing data to fool a classifier has attracted much recent attention. Most previous works have focused on two extreme situations where any testing data point either is completely adversarial or always equally prefers the positive label. In this paper, we generalize both of these through a unified framework for strategic classification and introduce the notion of strategic VC-dimension (SVC) to capture the PAC-learnability in our general strategic setup. SVC provably generalizes the recent concept of adversarial VC-dimension (AVC) introduced by Cullina et al. (2018). We instantiate our framework for the fundamental strategic linear classification problem. We fully characterize: (1) the statistical learnability of linear classifiers by pinning down its SVC; (2) its computational tractability by pinning down the complexity of the empirical risk minimization problem. Interestingly, the SVC of linear classifiers is always upper bounded by its standard VC-dimension. This characterization also strictly generalizes the AVC bound for linear classifiers in (Cullina et al., 2018).

Cite this Paper
BibTeX
@InProceedings{pmlr-v139-sundaram21a,
  title     = {PAC-Learning for Strategic Classification},
  author    = {Sundaram, Ravi and Vullikanti, Anil and Xu, Haifeng and Yao, Fan},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {9978--9988},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/sundaram21a/sundaram21a.pdf},
  url       = {https://proceedings.mlr.press/v139/sundaram21a.html},
  abstract  = {The study of strategic or adversarial manipulation of testing data to fool a classifier has attracted much recent attention. Most previous works have focused on two extreme situations where any testing data point either is completely adversarial or always equally prefers the positive label. In this paper, we generalize both of these through a unified framework for strategic classification and introduce the notion of strategic VC-dimension (SVC) to capture the PAC-learnability in our general strategic setup. SVC provably generalizes the recent concept of adversarial VC-dimension (AVC) introduced by Cullina et al. (2018). We instantiate our framework for the fundamental strategic linear classification problem. We fully characterize: (1) the statistical learnability of linear classifiers by pinning down its SVC; (2) its computational tractability by pinning down the complexity of the empirical risk minimization problem. Interestingly, the SVC of linear classifiers is always upper bounded by its standard VC-dimension. This characterization also strictly generalizes the AVC bound for linear classifiers in (Cullina et al., 2018).}
}
Endnote
%0 Conference Paper
%T PAC-Learning for Strategic Classification
%A Ravi Sundaram
%A Anil Vullikanti
%A Haifeng Xu
%A Fan Yao
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-sundaram21a
%I PMLR
%P 9978--9988
%U https://proceedings.mlr.press/v139/sundaram21a.html
%V 139
%X The study of strategic or adversarial manipulation of testing data to fool a classifier has attracted much recent attention. Most previous works have focused on two extreme situations where any testing data point either is completely adversarial or always equally prefers the positive label. In this paper, we generalize both of these through a unified framework for strategic classification and introduce the notion of strategic VC-dimension (SVC) to capture the PAC-learnability in our general strategic setup. SVC provably generalizes the recent concept of adversarial VC-dimension (AVC) introduced by Cullina et al. (2018). We instantiate our framework for the fundamental strategic linear classification problem. We fully characterize: (1) the statistical learnability of linear classifiers by pinning down its SVC; (2) its computational tractability by pinning down the complexity of the empirical risk minimization problem. Interestingly, the SVC of linear classifiers is always upper bounded by its standard VC-dimension. This characterization also strictly generalizes the AVC bound for linear classifiers in (Cullina et al., 2018).
APA
Sundaram, R., Vullikanti, A., Xu, H. &amp; Yao, F. (2021). PAC-Learning for Strategic Classification. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:9978-9988. Available from https://proceedings.mlr.press/v139/sundaram21a.html.

Related Material