Interpreting Robust Optimization via Adversarial Influence Functions

Zhun Deng, Cynthia Dwork, Jialiang Wang, Linjun Zhang
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:2464-2473, 2020.

Abstract

Robust optimization is widely used in modern data science, especially in adversarial training. However, little research has quantified how robust optimization changes the optimizers and the prediction losses compared to standard training. In this paper, inspired by the influence function in robust statistics, we introduce the Adversarial Influence Function (AIF) as a tool to investigate the solution produced by robust optimization. The proposed AIF enjoys a closed form and can be calculated efficiently. To illustrate the use of AIF, we apply it to study model sensitivity, a quantity defined to capture the change in prediction loss on the natural data after implementing robust optimization. We use AIF to analyze how model complexity and randomized smoothing affect model sensitivity for specific models. We further derive the AIF for kernel regression, with a particular application to neural tangent kernels, and experimentally demonstrate the effectiveness of the proposed AIF. Lastly, we extend the theory of AIF to distributionally robust optimization.
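
Since only the abstract appears on this page, the following is a minimal, hypothetical sketch of the underlying idea rather than the paper's actual closed-form AIF. For l2-adversarial linear regression, whose inner maximization has a well-known closed form, it estimates by finite differences how the robust optimizer moves relative to the standard one as the adversarial budget eps grows, together with the resulting change in natural prediction loss, in the spirit of the paper's model sensitivity. The toy data and all names below are illustrative.

# Hypothetical numerical sketch of an adversarial-influence-style computation.
# This is NOT the paper's closed-form AIF; it only illustrates measuring how
# the optimizer moves as the adversarial budget eps grows.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
theta_true = rng.normal(size=d)
y = X @ theta_true + 0.1 * rng.normal(size=n)

def robust_loss(theta, eps):
    # For linear regression with an l2-bounded adversary, the inner
    # maximization has the standard closed form
    #   max_{||delta||_2 <= eps} ((x + delta)^T theta - y)^2
    #     = (|x^T theta - y| + eps * ||theta||_2)^2.
    resid = np.abs(X @ theta - y) + eps * np.linalg.norm(theta)
    return np.mean(resid ** 2)

def robust_fit(eps):
    # Minimize the robust objective, warm-started at the least-squares fit.
    x0 = np.linalg.lstsq(X, y, rcond=None)[0]
    return minimize(robust_loss, x0, args=(eps,), method="BFGS").x

theta_std = robust_fit(0.0)   # standard training (eps = 0)
eps = 1e-3
theta_rob = robust_fit(eps)   # robust training with a small budget

# Finite-difference proxy for the adversarial influence on the optimizer,
# d theta_hat(eps) / d eps at eps = 0.
aif_estimate = (theta_rob - theta_std) / eps
print("AIF estimate (optimizer direction):", aif_estimate)

# Change in the natural (eps = 0) prediction loss after robust training;
# it is second order in eps because theta_std is a stationary point of
# the natural loss.
delta_natural_loss = robust_loss(theta_rob, 0.0) - robust_loss(theta_std, 0.0)
print("change in natural loss:", delta_natural_loss)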

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-deng20a,
  title     = {Interpreting Robust Optimization via Adversarial Influence Functions},
  author    = {Deng, Zhun and Dwork, Cynthia and Wang, Jialiang and Zhang, Linjun},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {2464--2473},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/deng20a/deng20a.pdf},
  url       = {https://proceedings.mlr.press/v119/deng20a.html}
}
Endnote
%0 Conference Paper
%T Interpreting Robust Optimization via Adversarial Influence Functions
%A Zhun Deng
%A Cynthia Dwork
%A Jialiang Wang
%A Linjun Zhang
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-deng20a
%I PMLR
%P 2464--2473
%U https://proceedings.mlr.press/v119/deng20a.html
%V 119
APA
Deng, Z., Dwork, C., Wang, J., & Zhang, L. (2020). Interpreting Robust Optimization via Adversarial Influence Functions. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:2464-2473. Available from https://proceedings.mlr.press/v119/deng20a.html.