Adversarial Policies Beat Superhuman Go AIs

Tony Tong Wang, Adam Gleave, Tom Tseng, Kellin Pelrine, Nora Belrose, Joseph Miller, Michael D Dennis, Yawen Duan, Viktor Pogrebniak, Sergey Levine, Stuart Russell
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:35655-35739, 2023.

Abstract

We attack the state-of-the-art Go-playing AI system KataGo by training adversarial policies against it, achieving a >97% win rate against KataGo running at superhuman settings. Our adversaries do not win by playing Go well. Instead, they trick KataGo into making serious blunders. Our attack transfers zero-shot to other superhuman Go-playing AIs, and is comprehensible to the extent that human experts can implement it without algorithmic assistance to consistently beat superhuman AIs. The core vulnerability uncovered by our attack persists even in KataGo agents adversarially trained to defend against our attack. Our results demonstrate that even superhuman AI systems may harbor surprising failure modes. Example games are available at https://goattack.far.ai/.
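To illustrate the core idea mentioned in the abstract, the sketch below shows an adversarial policy in its most stripped-down form: an attacker trained with reinforcement learning against a frozen victim policy. This is a toy zero-sum matrix game with plain REINFORCE, not the paper's actual pipeline (which attacks a frozen KataGo network using an AlphaZero-style MCTS training setup); the payoff matrix, victim strategy, and all names are illustrative assumptions.

    # Toy sketch (not the paper's method): train an adversarial policy with
    # REINFORCE against a FROZEN victim in a small zero-sum matrix game.
    import numpy as np

    rng = np.random.default_rng(0)

    # Adversary's payoff matrix (rows = adversary actions, cols = victim actions).
    # Zero-sum: the victim's payoff is the negative of this. Values are made up.
    payoff = np.array([[ 1.0, -1.0,  0.0],
                       [-1.0,  1.0,  0.0],
                       [ 0.5,  0.5, -1.0]])

    # Frozen victim: a fixed mixed strategy, standing in for a fixed, pretrained opponent.
    victim_probs = np.array([0.2, 0.3, 0.5])

    # Adversary: softmax policy over its own actions, trained by policy gradient.
    logits = np.zeros(3)
    lr = 0.2

    def softmax(x):
        z = np.exp(x - x.max())
        return z / z.sum()

    for step in range(5000):
        probs = softmax(logits)
        a = rng.choice(3, p=probs)          # adversary samples an action
        v = rng.choice(3, p=victim_probs)   # frozen victim samples its action
        reward = payoff[a, v]               # adversary's game outcome

        # REINFORCE update: grad of log pi(a) for a softmax policy is (one_hot(a) - probs)
        grad = -probs * reward
        grad[a] += reward
        logits += lr * grad                 # gradient ascent on expected reward

    print("learned adversary strategy:", np.round(softmax(logits), 3))
    print("expected payoff vs frozen victim:",
          round(float(softmax(logits) @ payoff @ victim_probs), 3))

The point of the sketch is only that the victim is never updated: the adversary optimizes solely for beating this one fixed opponent, which is why such a policy can win without being a strong player in general.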

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-wang23g,
  title     = {Adversarial Policies Beat Superhuman Go {AI}s},
  author    = {Wang, Tony Tong and Gleave, Adam and Tseng, Tom and Pelrine, Kellin and Belrose, Nora and Miller, Joseph and Dennis, Michael D and Duan, Yawen and Pogrebniak, Viktor and Levine, Sergey and Russell, Stuart},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {35655--35739},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/wang23g/wang23g.pdf},
  url       = {https://proceedings.mlr.press/v202/wang23g.html},
  abstract  = {We attack the state-of-the-art Go-playing AI system KataGo by training adversarial policies against it, achieving a $>$97% win rate against KataGo running at superhuman settings. Our adversaries do not win by playing Go well. Instead, they trick KataGo into making serious blunders. Our attack transfers zero-shot to other superhuman Go-playing AIs, and is comprehensible to the extent that human experts can implement it without algorithmic assistance to consistently beat superhuman AIs. The core vulnerability uncovered by our attack persists even in KataGo agents adversarially trained to defend against our attack. Our results demonstrate that even superhuman AI systems may harbor surprising failure modes. Example games are available at https://goattack.far.ai/.}
}
Endnote
%0 Conference Paper
%T Adversarial Policies Beat Superhuman Go AIs
%A Tony Tong Wang
%A Adam Gleave
%A Tom Tseng
%A Kellin Pelrine
%A Nora Belrose
%A Joseph Miller
%A Michael D Dennis
%A Yawen Duan
%A Viktor Pogrebniak
%A Sergey Levine
%A Stuart Russell
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-wang23g
%I PMLR
%P 35655--35739
%U https://proceedings.mlr.press/v202/wang23g.html
%V 202
%X We attack the state-of-the-art Go-playing AI system KataGo by training adversarial policies against it, achieving a >97% win rate against KataGo running at superhuman settings. Our adversaries do not win by playing Go well. Instead, they trick KataGo into making serious blunders. Our attack transfers zero-shot to other superhuman Go-playing AIs, and is comprehensible to the extent that human experts can implement it without algorithmic assistance to consistently beat superhuman AIs. The core vulnerability uncovered by our attack persists even in KataGo agents adversarially trained to defend against our attack. Our results demonstrate that even superhuman AI systems may harbor surprising failure modes. Example games are available at https://goattack.far.ai/.
APA
Wang, T.T., Gleave, A., Tseng, T., Pelrine, K., Belrose, N., Miller, J., Dennis, M.D., Duan, Y., Pogrebniak, V., Levine, S. & Russell, S. (2023). Adversarial Policies Beat Superhuman Go AIs. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:35655-35739. Available from https://proceedings.mlr.press/v202/wang23g.html.

Related Material