Robust Deep Learning as Optimal Control: Insights and Convergence Guarantees

Jacob H. Seidman, Mahyar Fazlyab, Victor M. Preciado, George J. Pappas
Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:884-893, 2020.

Abstract

The fragility of deep neural networks to adversarially-chosen inputs has motivated the need to revisit deep learning algorithms. Including adversarial examples during training is a popular defense mechanism against adversarial attacks. This mechanism can be formulated as a min-max optimization problem, where the adversary seeks to maximize the loss function using an iterative first-order algorithm while the learner attempts to minimize it. However, finding adversarial examples in this way causes excessive computational overhead during training. By interpreting the min-max problem as an optimal control problem, it has recently been shown that one can exploit the compositional structure of neural networks in the optimization problem to improve the training time significantly. In this paper, we provide the first convergence analysis of this adversarial training algorithm by combining techniques from robust optimal control and inexact oracle methods in optimization. Our analysis sheds light on how the hyperparameters of the algorithm affect its stability and convergence. We support our insights with experiments on a robust classification problem.
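The min-max training scheme the abstract describes can be illustrated with a minimal sketch (not the paper's algorithm): an inner loop that performs iterative first-order ascent on the input within a projected perturbation ball, nested inside an outer gradient-descent loop over the model parameters. The model here is plain logistic regression in NumPy, and all function names and hyperparameters (`eps`, `step`, `iters`) are illustrative assumptions.

```python
import numpy as np

def loss_and_grads(w, x, y):
    """Logistic loss and its gradients w.r.t. weights w and input x (y in {-1, +1})."""
    z = x @ w
    p = 1.0 / (1.0 + np.exp(-y * z))
    g = -y * (1.0 - p)            # d(loss)/dz
    return -np.log(p), g * x, g * w   # loss, grad_w, grad_x

def pgd_attack(w, x, y, eps=0.3, step=0.1, iters=10):
    """Inner maximization: iterative first-order ascent on the input,
    projected back onto an l_inf ball of radius eps around x."""
    x_adv = x.copy()
    for _ in range(iters):
        _, _, gx = loss_and_grads(w, x_adv, y)
        x_adv = x_adv + step * np.sign(gx)        # ascent step on the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # projection onto the ball
    return x_adv

def adversarial_train(X, Y, lr=0.1, epochs=50):
    """Outer minimization: gradient descent on the adversarially-perturbed loss."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=X.shape[1])
    for _ in range(epochs):
        for x, y in zip(X, Y):
            x_adv = pgd_attack(w, x, y)           # find a worst-case input
            _, gw, _ = loss_and_grads(w, x_adv, y)
            w -= lr * gw                          # descend on the robust loss
    return w
```

The nested loops make the computational overhead mentioned in the abstract concrete: each outer parameter update pays for `iters` inner gradient evaluations, which is the cost the optimal-control reformulation analyzed in the paper aims to reduce.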

Cite this Paper


BibTeX
@InProceedings{pmlr-v120-seidman20a, title = {Robust Deep Learning as Optimal Control: Insights and Convergence Guarantees}, author = {Seidman, Jacob H. and Fazlyab, Mahyar and Preciado, Victor M. and Pappas, George J.}, booktitle = {Proceedings of the 2nd Conference on Learning for Dynamics and Control}, pages = {884--893}, year = {2020}, editor = {Bayen, Alexandre M. and Jadbabaie, Ali and Pappas, George and Parrilo, Pablo A. and Recht, Benjamin and Tomlin, Claire and Zeilinger, Melanie}, volume = {120}, series = {Proceedings of Machine Learning Research}, month = {10--11 Jun}, publisher = {PMLR}, pdf = {http://proceedings.mlr.press/v120/seidman20a/seidman20a.pdf}, url = {https://proceedings.mlr.press/v120/seidman20a.html}, abstract = {The fragility of deep neural networks to adversarially-chosen inputs has motivated the need to revisit deep learning algorithms. Including adversarial examples during training is a popular defense mechanism against adversarial attacks. This mechanism can be formulated as a min-max optimization problem, where the adversary seeks to maximize the loss function using an iterative first-order algorithm while the learner attempts to minimize it. However, finding adversarial examples in this way causes excessive computational overhead during training. By interpreting the min-max problem as an optimal control problem, it has recently been shown that one can exploit the compositional structure of neural networks in the optimization problem to improve the training time significantly. In this paper, we provide the first convergence analysis of this adversarial training algorithm by combining techniques from robust optimal control and inexact oracle methods in optimization. Our analysis sheds light on how the hyperparameters of the algorithm affect its stability and convergence. We support our insights with experiments on a robust classification problem.} }
Endnote
%0 Conference Paper %T Robust Deep Learning as Optimal Control: Insights and Convergence Guarantees %A Jacob H. Seidman %A Mahyar Fazlyab %A Victor M. Preciado %A George J. Pappas %B Proceedings of the 2nd Conference on Learning for Dynamics and Control %C Proceedings of Machine Learning Research %D 2020 %E Alexandre M. Bayen %E Ali Jadbabaie %E George Pappas %E Pablo A. Parrilo %E Benjamin Recht %E Claire Tomlin %E Melanie Zeilinger %F pmlr-v120-seidman20a %I PMLR %P 884--893 %U https://proceedings.mlr.press/v120/seidman20a.html %V 120 %X The fragility of deep neural networks to adversarially-chosen inputs has motivated the need to revisit deep learning algorithms. Including adversarial examples during training is a popular defense mechanism against adversarial attacks. This mechanism can be formulated as a min-max optimization problem, where the adversary seeks to maximize the loss function using an iterative first-order algorithm while the learner attempts to minimize it. However, finding adversarial examples in this way causes excessive computational overhead during training. By interpreting the min-max problem as an optimal control problem, it has recently been shown that one can exploit the compositional structure of neural networks in the optimization problem to improve the training time significantly. In this paper, we provide the first convergence analysis of this adversarial training algorithm by combining techniques from robust optimal control and inexact oracle methods in optimization. Our analysis sheds light on how the hyperparameters of the algorithm affect its stability and convergence. We support our insights with experiments on a robust classification problem.
APA
Seidman, J.H., Fazlyab, M., Preciado, V.M. & Pappas, G.J. (2020). Robust Deep Learning as Optimal Control: Insights and Convergence Guarantees. Proceedings of the 2nd Conference on Learning for Dynamics and Control, in Proceedings of Machine Learning Research 120:884-893. Available from https://proceedings.mlr.press/v120/seidman20a.html.