Non-Uniform Adversarially Robust Pruning

Qi Zhao, Tim Königl, Christian Wressnegger
Proceedings of the First International Conference on Automated Machine Learning, PMLR 188:1/1-16, 2022.

Abstract

Neural networks are often highly redundant and can thus be effectively compressed to a fraction of their initial size using model pruning techniques without harming the overall prediction accuracy. Additionally, pruned networks need to maintain robustness against attacks such as adversarial examples. Recent research on combining all these objectives has shown significant advances using uniform compression strategies, that is, strategies in which all weights or channels are compressed equally according to a preset compression ratio. In this paper, we show that employing non-uniform compression strategies significantly improves clean data accuracy as well as adversarial robustness under high overall compression. We leverage reinforcement learning to find an optimal trade-off and demonstrate that the resulting compression strategy can be used as a plug-in replacement for the uniform compression ratios of existing state-of-the-art approaches.
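To make the distinction concrete, below is a minimal sketch of layer-wise (non-uniform) magnitude pruning in PyTorch. The toy model, the per-layer ratios, and the apply_strategy helper are hypothetical placeholders for illustration only; in the paper, the per-layer compression strategy is produced by a reinforcement-learning search rather than set by hand.

import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy model standing in for an arbitrary network to be compressed.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32 * 32 * 32, 10),
)

# Uniform strategy: every prunable layer gets the same preset ratio.
uniform_ratios = {0: 0.8, 2: 0.8, 5: 0.8}

# Non-uniform strategy: one ratio per layer (hypothetical values standing in
# for the output of a reinforcement-learning controller).
non_uniform_ratios = {0: 0.5, 2: 0.7, 5: 0.9}

def apply_strategy(net, ratios):
    """Prune each listed layer's weights by its own ratio (L1 magnitude pruning)."""
    for idx, ratio in ratios.items():
        layer = net[idx]
        prune.l1_unstructured(layer, name="weight", amount=ratio)
        prune.remove(layer, "weight")  # bake the sparsity into the weight tensor

apply_strategy(model, non_uniform_ratios)

# Report the resulting overall sparsity of the compressed model.
total = sum(p.numel() for p in model.parameters())
zeros = sum((p == 0).sum().item() for p in model.parameters())
print(f"overall sparsity: {zeros / total:.2%}")

In this reading of the abstract, the non-uniform ratio dictionary is what the reinforcement-learning search would output, and the pruning and fine-tuning machinery of existing (adversarially trained) pruning pipelines could consume such a per-layer strategy in place of a single uniform ratio.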

Cite this Paper


BibTeX
@InProceedings{pmlr-v188-zhao22a,
  title     = {Non-Uniform Adversarially Robust Pruning},
  author    = {Zhao, Qi and K\"onigl, Tim and Wressnegger, Christian},
  booktitle = {Proceedings of the First International Conference on Automated Machine Learning},
  pages     = {1/1--16},
  year      = {2022},
  editor    = {Guyon, Isabelle and Lindauer, Marius and van der Schaar, Mihaela and Hutter, Frank and Garnett, Roman},
  volume    = {188},
  series    = {Proceedings of Machine Learning Research},
  month     = {25--27 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v188/zhao22a/zhao22a.pdf},
  url       = {https://proceedings.mlr.press/v188/zhao22a.html},
  abstract  = {Neural networks often are highly redundant and can thus be effectively compressed to a fraction of their initial size using model pruning techniques without harming the overall prediction accuracy. Additionally, pruned networks need to maintain robustness against attacks such as adversarial examples. Recent research on combining all these objectives has shown significant advances using uniform compression strategies, that is, all weights or channels are compressed equally according to a preset compression ratio. In this paper, we show that employing non-uniform compression strategies allows to significantly improve clean data accuracy as well as adversarial robustness under high overall compression. We leverage reinforcement learning for finding an optimal trade-off and demonstrate that the resulting compression strategy can be used as a plug-in replacement for uniform compression ratios of existing state-of-the-art approaches.}
}
Endnote
%0 Conference Paper
%T Non-Uniform Adversarially Robust Pruning
%A Qi Zhao
%A Tim Königl
%A Christian Wressnegger
%B Proceedings of the First International Conference on Automated Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Isabelle Guyon
%E Marius Lindauer
%E Mihaela van der Schaar
%E Frank Hutter
%E Roman Garnett
%F pmlr-v188-zhao22a
%I PMLR
%P 1/1--16
%U https://proceedings.mlr.press/v188/zhao22a.html
%V 188
%X Neural networks often are highly redundant and can thus be effectively compressed to a fraction of their initial size using model pruning techniques without harming the overall prediction accuracy. Additionally, pruned networks need to maintain robustness against attacks such as adversarial examples. Recent research on combining all these objectives has shown significant advances using uniform compression strategies, that is, all weights or channels are compressed equally according to a preset compression ratio. In this paper, we show that employing non-uniform compression strategies allows to significantly improve clean data accuracy as well as adversarial robustness under high overall compression. We leverage reinforcement learning for finding an optimal trade-off and demonstrate that the resulting compression strategy can be used as a plug-in replacement for uniform compression ratios of existing state-of-the-art approaches.
APA
Zhao, Q., Königl, T. & Wressnegger, C. (2022). Non-Uniform Adversarially Robust Pruning. Proceedings of the First International Conference on Automated Machine Learning, in Proceedings of Machine Learning Research 188:1/1-16. Available from https://proceedings.mlr.press/v188/zhao22a.html.
