Primal and Dual Analysis of Entropic Fictitious Play for Finite-sum Problems

Atsushi Nitanda, Kazusato Oko, Denny Wu, Nobuhito Takenouchi, Taiji Suzuki
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:26266-26282, 2023.

Abstract

The entropic fictitious play (EFP) is a recently proposed algorithm that minimizes the sum of a convex functional and entropy in the space of measures — such an objective naturally arises in the optimization of a two-layer neural network in the mean-field regime. In this work, we provide a concise primal-dual analysis of EFP in the setting where the learning problem exhibits a finite-sum structure. We establish quantitative global convergence guarantees for both the continuous-time and discrete-time dynamics based on properties of a proximal Gibbs measure introduced in Nitanda et al. (2022). Furthermore, our primal-dual framework entails a memory-efficient particle-based implementation of the EFP update, and also suggests a connection to gradient boosting methods. We illustrate the efficiency of our novel implementation in experiments including neural network optimization and image synthesis.

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-nitanda23a,
  title     = {Primal and Dual Analysis of Entropic Fictitious Play for Finite-sum Problems},
  author    = {Nitanda, Atsushi and Oko, Kazusato and Wu, Denny and Takenouchi, Nobuhito and Suzuki, Taiji},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {26266--26282},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/nitanda23a/nitanda23a.pdf},
  url       = {https://proceedings.mlr.press/v202/nitanda23a.html},
  abstract  = {The entropic fictitious play (EFP) is a recently proposed algorithm that minimizes the sum of a convex functional and entropy in the space of measures — such an objective naturally arises in the optimization of a two-layer neural network in the mean-field regime. In this work, we provide a concise primal-dual analysis of EFP in the setting where the learning problem exhibits a finite-sum structure. We establish quantitative global convergence guarantees for both the continuous-time and discrete-time dynamics based on properties of a proximal Gibbs measure introduced in Nitanda et al. (2022). Furthermore, our primal-dual framework entails a memory-efficient particle-based implementation of the EFP update, and also suggests a connection to gradient boosting methods. We illustrate the efficiency of our novel implementation in experiments including neural network optimization and image synthesis.}
}
Endnote
%0 Conference Paper
%T Primal and Dual Analysis of Entropic Fictitious Play for Finite-sum Problems
%A Atsushi Nitanda
%A Kazusato Oko
%A Denny Wu
%A Nobuhito Takenouchi
%A Taiji Suzuki
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-nitanda23a
%I PMLR
%P 26266--26282
%U https://proceedings.mlr.press/v202/nitanda23a.html
%V 202
%X The entropic fictitious play (EFP) is a recently proposed algorithm that minimizes the sum of a convex functional and entropy in the space of measures — such an objective naturally arises in the optimization of a two-layer neural network in the mean-field regime. In this work, we provide a concise primal-dual analysis of EFP in the setting where the learning problem exhibits a finite-sum structure. We establish quantitative global convergence guarantees for both the continuous-time and discrete-time dynamics based on properties of a proximal Gibbs measure introduced in Nitanda et al. (2022). Furthermore, our primal-dual framework entails a memory-efficient particle-based implementation of the EFP update, and also suggests a connection to gradient boosting methods. We illustrate the efficiency of our novel implementation in experiments including neural network optimization and image synthesis.
APA
Nitanda, A., Oko, K., Wu, D., Takenouchi, N. & Suzuki, T. (2023). Primal and Dual Analysis of Entropic Fictitious Play for Finite-sum Problems. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:26266-26282. Available from https://proceedings.mlr.press/v202/nitanda23a.html.