Robust and Private Learning of Halfspaces

Badih Ghazi, Ravi Kumar, Pasin Manurangsi, Thao Nguyen
Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, PMLR 130:1603-1611, 2021.

Abstract

In this work, we study the trade-off between differential privacy and adversarial robustness under $L_2$-perturbations in the context of learning halfspaces. We prove nearly tight bounds on the sample complexity of robust private learning of halfspaces for a large regime of parameters. A highlight of our results is that robust and private learning is harder than robust or private learning alone. We complement our theoretical analysis with experimental results on the MNIST and USPS datasets, for a learning algorithm that is both differentially private and adversarially robust.
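To make the setting concrete, below is a minimal illustrative sketch (not the paper's algorithm) of how the two requirements in the abstract interact for a halfspace sign(<w, x>): robustness to L2 perturbations of norm at most `radius` is encouraged with a hinge loss that demands a margin of at least `radius`, while privacy is approximated with DP-SGD-style per-example gradient clipping and Gaussian noise. The function name, hyperparameters, and loss choice are assumptions made for illustration; the paper's actual learner and its sample-complexity bounds are in the PDF linked below.

import numpy as np

def dp_robust_halfspace(X, y, radius=0.3, epochs=50, lr=0.5,
                        clip=1.0, noise_mult=1.0, seed=0):
    """Hypothetical sketch of a halfspace learner targeting both goals at once:
    robustness to L2 perturbations of norm <= radius (hinge loss at margin
    `radius`) and differential privacy (per-example gradient clipping plus
    Gaussian noise). The exact (epsilon, delta) guarantee would come from a
    separate privacy accountant, which is omitted here."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        # hinge loss max(0, radius - y <w, x>); subgradient is -y*x on active points
        active = y * (X @ w) < radius
        grads = -(y[:, None] * X) * active[:, None]          # per-example grads, shape (n, d)
        # clip each per-example gradient to L2 norm <= clip
        norms = np.maximum(np.linalg.norm(grads, axis=1, keepdims=True), 1e-12)
        grads *= np.minimum(1.0, clip / norms)
        # Gaussian mechanism on the summed gradient
        noisy_grad = grads.sum(axis=0) + rng.normal(0.0, noise_mult * clip, size=d)
        w -= lr * noisy_grad / n
        # keep ||w|| <= 1 so that a margin y <w, x> > radius certifies that no
        # perturbation delta with ||delta||_2 <= radius can flip the prediction
        w /= max(1.0, np.linalg.norm(w))
    return w

# toy usage on synthetic separable data (not the MNIST/USPS experiments of the paper)
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))
w_true = rng.normal(size=5)
y = np.where(X @ w_true >= 0, 1.0, -1.0)
w = dp_robust_halfspace(X, y, radius=0.2)
print("clean accuracy:", np.mean(np.sign(X @ w) == y))

The projection onto the unit ball is what ties the hinge margin to robustness: with ||w|| <= 1, any perturbation of norm at most `radius` can change <w, x> by at most `radius`, so points classified with margin above `radius` stay correctly classified.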

Cite this Paper


BibTeX
@InProceedings{pmlr-v130-ghazi21a,
  title     = {Robust and Private Learning of Halfspaces},
  author    = {Ghazi, Badih and Kumar, Ravi and Manurangsi, Pasin and Nguyen, Thao},
  booktitle = {Proceedings of The 24th International Conference on Artificial Intelligence and Statistics},
  pages     = {1603--1611},
  year      = {2021},
  editor    = {Banerjee, Arindam and Fukumizu, Kenji},
  volume    = {130},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--15 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v130/ghazi21a/ghazi21a.pdf},
  url       = {https://proceedings.mlr.press/v130/ghazi21a.html},
  abstract  = {In this work, we study the trade-off between differential privacy and adversarial robustness under $L_2$-perturbations in the context of learning halfspaces. We prove nearly tight bounds on the sample complexity of robust private learning of halfspaces for a large regime of parameters. A highlight of our results is that robust and private learning is harder than robust or private learning alone. We complement our theoretical analysis with experimental results on the MNIST and USPS datasets, for a learning algorithm that is both differentially private and adversarially robust.}
}
Endnote
%0 Conference Paper
%T Robust and Private Learning of Halfspaces
%A Badih Ghazi
%A Ravi Kumar
%A Pasin Manurangsi
%A Thao Nguyen
%B Proceedings of The 24th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2021
%E Arindam Banerjee
%E Kenji Fukumizu
%F pmlr-v130-ghazi21a
%I PMLR
%P 1603--1611
%U https://proceedings.mlr.press/v130/ghazi21a.html
%V 130
%X In this work, we study the trade-off between differential privacy and adversarial robustness under $L_2$-perturbations in the context of learning halfspaces. We prove nearly tight bounds on the sample complexity of robust private learning of halfspaces for a large regime of parameters. A highlight of our results is that robust and private learning is harder than robust or private learning alone. We complement our theoretical analysis with experimental results on the MNIST and USPS datasets, for a learning algorithm that is both differentially private and adversarially robust.
APA
Ghazi, B., Kumar, R., Manurangsi, P. &amp; Nguyen, T. (2021). Robust and Private Learning of Halfspaces. Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 130:1603-1611. Available from https://proceedings.mlr.press/v130/ghazi21a.html.
