Propagation of Chaos for Mean-Field Langevin Dynamics and its Application to Model Ensemble

Atsushi Nitanda, Anzelle Lee, Damian Tan Xing Kai, Mizuki Sakaguchi, Taiji Suzuki
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:46586-46610, 2025.

Abstract

Mean-field Langevin dynamics (MFLD) is an optimization method derived by taking the mean-field limit of noisy gradient descent for two-layer neural networks in the mean-field regime. Recently, the propagation of chaos (PoC) for MFLD has gained attention because it provides a quantitative characterization of the optimization complexity in terms of the number of particles and iterations. A remarkable result by Chen et al. (2022) showed that the approximation error due to using finitely many particles remains uniform in time and diminishes as the number of particles increases. In this paper, by refining the defective log-Sobolev inequality (a key result from that earlier work) under the neural network training setting, we establish an improved PoC result for MFLD, which removes the exponential dependence on the regularization coefficient from the particle-approximation term of the optimization complexity. As an application, we propose a PoC-based model ensemble strategy with theoretical guarantees.
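
For context, the objects named in the abstract can be written down concretely. The display below follows the standard formulation from the MFLD literature rather than notation quoted from this paper, so the symbols (loss functional F, regularization coefficient λ, particle count N) should be read as assumptions. Given a loss functional F over distributions μ of neuron parameters, MFLD minimizes the entropy-regularized objective F(μ) + λ Ent(μ) via the McKean-Vlasov diffusion

\[
\mathrm{d}X_t = -\nabla \frac{\delta F}{\delta \mu}(\mu_t)(X_t)\,\mathrm{d}t + \sqrt{2\lambda}\,\mathrm{d}W_t,
\qquad \mu_t = \mathrm{Law}(X_t),
\]

obtained as the mean-field limit of the N-particle noisy gradient system (one particle per neuron of the two-layer network),

\[
\mathrm{d}X_t^i = -\nabla \frac{\delta F}{\delta \mu}(\mu_t^N)(X_t^i)\,\mathrm{d}t + \sqrt{2\lambda}\,\mathrm{d}W_t^i,
\qquad \mu_t^N = \frac{1}{N}\sum_{j=1}^{N} \delta_{X_t^j}.
\]

A uniform-in-time PoC bound of the kind discussed above controls the gap between \mu_t^N and \mu_t by a particle-approximation term that vanishes as N grows, uniformly over t; the paper's stated contribution is to remove the exponential dependence on the regularization coefficient λ from that term.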

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-nitanda25a,
  title     = {Propagation of Chaos for Mean-Field {L}angevin Dynamics and its Application to Model Ensemble},
  author    = {Nitanda, Atsushi and Lee, Anzelle and Kai, Damian Tan Xing and Sakaguchi, Mizuki and Suzuki, Taiji},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {46586--46610},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/nitanda25a/nitanda25a.pdf},
  url       = {https://proceedings.mlr.press/v267/nitanda25a.html},
  abstract  = {Mean-field Langevin dynamics (MFLD) is an optimization method derived by taking the mean-field limit of noisy gradient descent for two-layer neural networks in the mean-field regime. Recently, the propagation of chaos (PoC) for MFLD has gained attention because it provides a quantitative characterization of the optimization complexity in terms of the number of particles and iterations. A remarkable result by Chen et al. (2022) showed that the approximation error due to using finitely many particles remains uniform in time and diminishes as the number of particles increases. In this paper, by refining the defective log-Sobolev inequality (a key result from that earlier work) under the neural network training setting, we establish an improved PoC result for MFLD, which removes the exponential dependence on the regularization coefficient from the particle-approximation term of the optimization complexity. As an application, we propose a PoC-based model ensemble strategy with theoretical guarantees.}
}
EndNote
%0 Conference Paper
%T Propagation of Chaos for Mean-Field Langevin Dynamics and its Application to Model Ensemble
%A Atsushi Nitanda
%A Anzelle Lee
%A Damian Tan Xing Kai
%A Mizuki Sakaguchi
%A Taiji Suzuki
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-nitanda25a
%I PMLR
%P 46586--46610
%U https://proceedings.mlr.press/v267/nitanda25a.html
%V 267
%X Mean-field Langevin dynamics (MFLD) is an optimization method derived by taking the mean-field limit of noisy gradient descent for two-layer neural networks in the mean-field regime. Recently, the propagation of chaos (PoC) for MFLD has gained attention because it provides a quantitative characterization of the optimization complexity in terms of the number of particles and iterations. A remarkable result by Chen et al. (2022) showed that the approximation error due to using finitely many particles remains uniform in time and diminishes as the number of particles increases. In this paper, by refining the defective log-Sobolev inequality (a key result from that earlier work) under the neural network training setting, we establish an improved PoC result for MFLD, which removes the exponential dependence on the regularization coefficient from the particle-approximation term of the optimization complexity. As an application, we propose a PoC-based model ensemble strategy with theoretical guarantees.
APA
Nitanda, A., Lee, A., Kai, D.T.X., Sakaguchi, M. & Suzuki, T. (2025). Propagation of Chaos for Mean-Field Langevin Dynamics and its Application to Model Ensemble. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:46586-46610. Available from https://proceedings.mlr.press/v267/nitanda25a.html.
