Run-off Election: Improved Provable Defense against Data Poisoning Attacks

Keivan Rezaei, Kiarash Banihashem, Atoosa Chegini, Soheil Feizi
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:29030-29050, 2023.

Abstract

In data poisoning attacks, an adversary tries to change a model’s prediction by adding, modifying, or removing samples in the training data. Recently, ensemble-based approaches for obtaining provable defenses against data poisoning have been proposed, where predictions are made by taking a majority vote across multiple base models. In this work, we show that merely taking the majority vote in ensemble defenses is wasteful, as it does not effectively use the information available in the logits layers of the base models. Instead, we propose Run-Off Election (ROE), a novel aggregation method based on a two-round election across the base models: in the first round, models vote for their preferred class, and a second, run-off election is then held between the top two classes from the first round. Based on this approach, we propose the DPA+ROE and FA+ROE defense methods, built on the Deep Partition Aggregation (DPA) and Finite Aggregation (FA) approaches from prior work. We evaluate our methods on MNIST, CIFAR-10, and GTSRB and obtain improvements in certified accuracy of up to 3%-4%. Moreover, by applying ROE to a boosted version of DPA, we gain improvements of around 12%-27% compared to the current state-of-the-art, establishing a new state-of-the-art in (pointwise) certified robustness against data poisoning. In many cases, our approach outperforms the state-of-the-art even when using 32 times less computational power.
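
The two-round aggregation described above is simple enough to sketch directly. Below is a minimal, illustrative Python sketch of the ROE prediction rule, assuming each base model exposes a logits vector for a given test input; the function and variable names are our own, and the certification machinery of DPA+ROE and FA+ROE (computing the provable robustness radius) is not shown.

import numpy as np

def run_off_election(logits):
    # logits: array of shape (num_models, num_classes) holding each
    # base model's logits for a single test input.
    num_models, num_classes = logits.shape
    # Round 1: each base model casts a vote for its top class.
    first_choices = logits.argmax(axis=1)
    votes = np.bincount(first_choices, minlength=num_classes)
    # The two classes with the most first-round votes advance.
    top_two = np.argsort(-votes)[:2]
    a, b = int(top_two[0]), int(top_two[1])
    # Round 2 (run-off): every model votes for whichever finalist it
    # assigns the higher logit, so models whose top choice was
    # eliminated in round one still contribute information.
    votes_a = int((logits[:, a] > logits[:, b]).sum())
    votes_b = num_models - votes_a
    return a if votes_a >= votes_b else b

For example, given a hypothetical list base_models of trained base classifiers that each return logits for an input x, the ensemble prediction would be run_off_election(np.stack([m(x) for m in base_models])).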

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-rezaei23a,
  title     = {Run-off Election: Improved Provable Defense against Data Poisoning Attacks},
  author    = {Rezaei, Keivan and Banihashem, Kiarash and Chegini, Atoosa and Feizi, Soheil},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {29030--29050},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/rezaei23a/rezaei23a.pdf},
  url       = {https://proceedings.mlr.press/v202/rezaei23a.html}
}
Endnote
%0 Conference Paper
%T Run-off Election: Improved Provable Defense against Data Poisoning Attacks
%A Keivan Rezaei
%A Kiarash Banihashem
%A Atoosa Chegini
%A Soheil Feizi
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-rezaei23a
%I PMLR
%P 29030--29050
%U https://proceedings.mlr.press/v202/rezaei23a.html
%V 202
APA
Rezaei, K., Banihashem, K., Chegini, A. & Feizi, S. (2023). Run-off Election: Improved Provable Defense against Data Poisoning Attacks. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:29030-29050. Available from https://proceedings.mlr.press/v202/rezaei23a.html.