Generative Adversarial Imitation Learning with Neural Network Parameterization: Global Optimality and Convergence Rate

Yufeng Zhang, Qi Cai, Zhuoran Yang, Zhaoran Wang
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:11044-11054, 2020.

Abstract

Generative adversarial imitation learning (GAIL) demonstrates tremendous success in practice, especially when combined with neural networks. Different from reinforcement learning, GAIL learns both policy and reward function from expert (human) demonstration. Despite its empirical success, it remains unclear whether GAIL with neural networks converges to the globally optimal solution. The major difficulty comes from the nonconvex-nonconcave minimax optimization structure. To bridge the gap between practice and theory, we analyze a gradient-based algorithm with alternating updates and establish its sublinear convergence to the globally optimal solution. To the best of our knowledge, our analysis establishes the global optimality and convergence rate of GAIL with neural networks for the first time.
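The abstract refers to a gradient-based algorithm with alternating updates for a minimax objective. As a purely illustrative sketch (not the paper's GAIL algorithm), the following applies alternating gradient descent-ascent to a toy saddle-point objective f(x, y) = xy + 0.1x² - 0.1y², where x plays the role of the minimizing player and y the maximizing player; all function names, step sizes, and the objective itself are assumptions chosen for demonstration.

```python
# Illustrative sketch only -- NOT the paper's algorithm. Alternating
# gradient descent-ascent on the toy minimax objective
#   f(x, y) = x*y + 0.1*x**2 - 0.1*y**2,
# whose unique saddle point is (0, 0). The min player x takes a descent
# step, then the max player y takes an ascent step using the fresh x.

def grad_x(x, y):
    # Partial derivative of f with respect to x
    return y + 0.2 * x

def grad_y(x, y):
    # Partial derivative of f with respect to y
    return x - 0.2 * y

x, y = 1.0, 1.0
eta = 0.05  # step size (an assumption; convergence depends on it)

for _ in range(2000):
    x = x - eta * grad_x(x, y)  # descent step for the min player
    y = y + eta * grad_y(x, y)  # ascent step for the max player

# The alternating iterates spiral in toward the saddle point (0, 0).
print(x, y)
```

On this strongly-convex-strongly-concave toy problem the alternating iterates contract toward the saddle point; the paper's contribution is establishing such global convergence in the much harder nonconvex-nonconcave setting induced by neural network parameterization.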

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-zhang20d,
  title     = {Generative Adversarial Imitation Learning with Neural Network Parameterization: Global Optimality and Convergence Rate},
  author    = {Zhang, Yufeng and Cai, Qi and Yang, Zhuoran and Wang, Zhaoran},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {11044--11054},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/zhang20d/zhang20d.pdf},
  url       = {https://proceedings.mlr.press/v119/zhang20d.html},
  abstract  = {Generative adversarial imitation learning (GAIL) demonstrates tremendous success in practice, especially when combined with neural networks. Different from reinforcement learning, GAIL learns both policy and reward function from expert (human) demonstration. Despite its empirical success, it remains unclear whether GAIL with neural networks converges to the globally optimal solution. The major difficulty comes from the nonconvex-nonconcave minimax optimization structure. To bridge the gap between practice and theory, we analyze a gradient-based algorithm with alternating updates and establish its sublinear convergence to the globally optimal solution. To the best of our knowledge, our analysis establishes the global optimality and convergence rate of GAIL with neural networks for the first time.}
}
Endnote
%0 Conference Paper
%T Generative Adversarial Imitation Learning with Neural Network Parameterization: Global Optimality and Convergence Rate
%A Yufeng Zhang
%A Qi Cai
%A Zhuoran Yang
%A Zhaoran Wang
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-zhang20d
%I PMLR
%P 11044--11054
%U https://proceedings.mlr.press/v119/zhang20d.html
%V 119
%X Generative adversarial imitation learning (GAIL) demonstrates tremendous success in practice, especially when combined with neural networks. Different from reinforcement learning, GAIL learns both policy and reward function from expert (human) demonstration. Despite its empirical success, it remains unclear whether GAIL with neural networks converges to the globally optimal solution. The major difficulty comes from the nonconvex-nonconcave minimax optimization structure. To bridge the gap between practice and theory, we analyze a gradient-based algorithm with alternating updates and establish its sublinear convergence to the globally optimal solution. To the best of our knowledge, our analysis establishes the global optimality and convergence rate of GAIL with neural networks for the first time.
APA
Zhang, Y., Cai, Q., Yang, Z. & Wang, Z. (2020). Generative Adversarial Imitation Learning with Neural Network Parameterization: Global Optimality and Convergence Rate. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:11044-11054. Available from https://proceedings.mlr.press/v119/zhang20d.html.