Certified Robustness to Label-Flipping Attacks via Randomized Smoothing

Elan Rosenfeld, Ezra Winston, Pradeep Ravikumar, Zico Kolter
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:8230-8241, 2020.

Abstract

Machine learning algorithms are known to be susceptible to data poisoning attacks, where an adversary manipulates the training data to degrade performance of the resulting classifier. In this work, we present a unifying view of randomized smoothing over arbitrary functions, and we leverage this novel characterization to propose a new strategy for building classifiers that are pointwise-certifiably robust to general data poisoning attacks. As a specific instantiation, we utilize our framework to build linear classifiers that are robust to a strong variant of label flipping, where each test example is targeted independently. In other words, for each test point, our classifier includes a certification that its prediction would be the same had some number of training labels been changed adversarially. Randomized smoothing has previously been used to guarantee—with high probability—test-time robustness to adversarial manipulation of the input to a classifier; we derive a variant which provides a deterministic, analytical bound, sidestepping the probabilistic certificates that traditionally result from the sampling subprocedure. Further, we obtain these certified bounds with minimal additional runtime complexity over standard classification and no assumptions on the train or test distributions. We generalize our results to the multi-class case, providing the first multi-class classification algorithm that is certifiably robust to label-flipping attacks.
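To make the smoothing idea concrete, below is a minimal Monte Carlo sketch for the binary case: refit a base classifier many times on copies of the training set whose labels are independently flipped with probability q, then take a majority vote at the test point. The helper name (smoothed_predict) and the scikit-learn base learner are illustrative assumptions, not the paper's implementation; the paper's actual contribution replaces this sampling with a deterministic, analytical certificate for linear classifiers, which this sketch does not reproduce.

# Illustrative sketch only; the function name and scikit-learn base learner
# are assumptions. The paper derives a deterministic bound for linear
# classifiers rather than relying on sampling as done here.
import numpy as np
from sklearn.linear_model import LogisticRegression

def smoothed_predict(X_train, y_train, x_test, q=0.2, n_samples=500, seed=0):
    """Majority-vote prediction under random label flipping (labels in {0, 1}).

    Each round flips every training label independently with probability q,
    refits the base classifier, and records its vote on x_test. The vote
    fraction plays the role of the margin from which a robustness certificate
    would be derived; here it is only a Monte Carlo estimate.
    """
    rng = np.random.default_rng(seed)
    votes, rounds = 0, 0
    for _ in range(n_samples):
        flips = rng.random(len(y_train)) < q            # which labels to flip
        y_noisy = np.where(flips, 1 - y_train, y_train)
        if len(np.unique(y_noisy)) < 2:                 # degenerate draw: skip
            continue
        clf = LogisticRegression().fit(X_train, y_noisy)
        votes += int(clf.predict(x_test.reshape(1, -1))[0] == 1)
        rounds += 1
    p_hat = votes / rounds
    return int(p_hat >= 0.5), max(p_hat, 1 - p_hat)     # prediction, vote margin

A vote margin near 1 indicates the smoothed prediction is stable under label noise; this is the quantity the paper's analysis converts into a guaranteed number of adversarial label flips the prediction can tolerate.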

Cite this Paper
BibTeX
@InProceedings{pmlr-v119-rosenfeld20b,
  title     = {Certified Robustness to Label-Flipping Attacks via Randomized Smoothing},
  author    = {Rosenfeld, Elan and Winston, Ezra and Ravikumar, Pradeep and Kolter, Zico},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {8230--8241},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/rosenfeld20b/rosenfeld20b.pdf},
  url       = {https://proceedings.mlr.press/v119/rosenfeld20b.html}
}
Endnote
%0 Conference Paper
%T Certified Robustness to Label-Flipping Attacks via Randomized Smoothing
%A Elan Rosenfeld
%A Ezra Winston
%A Pradeep Ravikumar
%A Zico Kolter
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-rosenfeld20b
%I PMLR
%P 8230--8241
%U https://proceedings.mlr.press/v119/rosenfeld20b.html
%V 119
APA
Rosenfeld, E., Winston, E., Ravikumar, P. & Kolter, Z. (2020). Certified Robustness to Label-Flipping Attacks via Randomized Smoothing. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:8230-8241. Available from https://proceedings.mlr.press/v119/rosenfeld20b.html.