Poisoning Generative Replay in Continual Learning to Promote Forgetting

Siteng Kang, Zhan Shi, Xinhua Zhang
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:15769-15785, 2023.

Abstract

Generative models have grown into the workhorse of many state-of-the-art machine learning methods. However, their vulnerability to poisoning attacks remains largely understudied. In this work, we investigate this issue in the context of continual learning, where generative replayers are used to combat catastrophic forgetting. By developing a novel customization of dirty-label input-aware backdoors to the online setting, our attacker stealthily promotes forgetting while retaining high accuracy on the current task and withstanding strong defenses. Our approach exploits an intriguing property of generative models: they cannot capture input-dependent triggers well. Experiments on four standard datasets corroborate the poisoner's effectiveness.
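To make the setting concrete, the sketch below illustrates the two ingredients the abstract names: an input-aware (sample-dependent) trigger and dirty-label poisoning, injected into a training step that mixes current-task data with generatively replayed data. It is a toy illustration under assumed names only; TriggerNet, poison_batch, the random stand-in data, and all hyperparameters are hypothetical, not the authors' implementation.

# Toy sketch of dirty-label, input-aware poisoning in a generative-replay
# step. All helper names and hyperparameters are hypothetical illustrations.
import torch
import torch.nn as nn

class TriggerNet(nn.Module):
    """Tiny network producing a sample-dependent perturbation (the trigger)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 3, padding=1),
        )
    def forward(self, x):
        return 0.1 * torch.tanh(self.body(x))  # small, input-aware pattern

def poison_batch(x, y, trigger, target_label, rate=0.2):
    """Dirty-label poisoning: trigger a fraction of samples, relabel them."""
    n = max(1, int(rate * x.size(0)))
    idx = torch.randperm(x.size(0))[:n]
    x, y = x.clone(), y.clone()
    with torch.no_grad():  # the attacker's trigger is fixed at poisoning time
        x[idx] = (x[idx] + trigger(x[idx])).clamp(0, 1)
    y[idx] = target_label  # mismatched on purpose: the "dirty" label
    return x, y

torch.manual_seed(0)
trigger = TriggerNet()
clf = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
opt = torch.optim.SGD(clf.parameters(), lr=0.1)

# Stand-ins: a current-task batch plus pseudo-samples that a real system
# would draw from a generator (e.g. a VAE/GAN) trained on earlier tasks.
x_cur, y_cur = torch.rand(32, 1, 28, 28), torch.randint(0, 5, (32,))
x_replay, y_replay = torch.rand(32, 1, 28, 28), torch.randint(5, 10, (32,))

x_cur, y_cur = poison_batch(x_cur, y_cur, trigger, target_label=9)
x, y = torch.cat([x_cur, x_replay]), torch.cat([y_cur, y_replay])

loss = nn.functional.cross_entropy(clf(x), y)
opt.zero_grad(); loss.backward(); opt.step()
print(f"one poisoned replay step, loss = {loss.item():.3f}")

The design choice the abstract hints at: because the trigger depends on each input, a generative replayer trained on the poisoned stream tends not to reproduce it, so the corruption it absorbs resurfaces in later replay and promotes forgetting rather than surviving as a recognizable backdoor pattern.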

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-kang23c,
  title     = {Poisoning Generative Replay in Continual Learning to Promote Forgetting},
  author    = {Kang, Siteng and Shi, Zhan and Zhang, Xinhua},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {15769--15785},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/kang23c/kang23c.pdf},
  url       = {https://proceedings.mlr.press/v202/kang23c.html},
  abstract  = {Generative models have grown into the workhorse of many state-of-the-art machine learning methods. However, their vulnerability under poisoning attacks has been largely understudied. In this work, we investigate this issue in the context of continual learning, where generative replayers are utilized to tackle catastrophic forgetting. By developing a novel customization of dirty-label input-aware backdoors to the online setting, our attacker manages to stealthily promote forgetting while retaining high accuracy at the current task and sustaining strong defenders. Our approach taps into an intriguing property of generative models, namely that they cannot well capture input-dependent triggers. Experiments on four standard datasets corroborate the poisoner’s effectiveness.}
}
Endnote
%0 Conference Paper
%T Poisoning Generative Replay in Continual Learning to Promote Forgetting
%A Siteng Kang
%A Zhan Shi
%A Xinhua Zhang
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-kang23c
%I PMLR
%P 15769--15785
%U https://proceedings.mlr.press/v202/kang23c.html
%V 202
%X Generative models have grown into the workhorse of many state-of-the-art machine learning methods. However, their vulnerability under poisoning attacks has been largely understudied. In this work, we investigate this issue in the context of continual learning, where generative replayers are utilized to tackle catastrophic forgetting. By developing a novel customization of dirty-label input-aware backdoors to the online setting, our attacker manages to stealthily promote forgetting while retaining high accuracy at the current task and sustaining strong defenders. Our approach taps into an intriguing property of generative models, namely that they cannot well capture input-dependent triggers. Experiments on four standard datasets corroborate the poisoner’s effectiveness.
APA
Kang, S., Shi, Z. & Zhang, X. (2023). Poisoning Generative Replay in Continual Learning to Promote Forgetting. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:15769-15785. Available from https://proceedings.mlr.press/v202/kang23c.html.
