Efficient displacement convex optimization with particle gradient descent

Hadi Daneshmand, Jason D. Lee, Chi Jin
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:6836-6854, 2023.

Abstract

Particle gradient descent, which uses particles to represent a probability measure and performs gradient descent on the particles in parallel, is widely used to optimize functions of probability measures. This paper considers particle gradient descent with a finite number of particles and establishes theoretical guarantees for optimizing functions that are displacement convex in the measure. Concretely, for Lipschitz displacement convex functions defined on probability measures over $\mathbb{R}^d$, we prove that $O(1/\epsilon^2)$ particles and $O(d/\epsilon^4)$ iterations suffice to find an $\epsilon$-optimal solution. We further provide improved complexity bounds for optimizing smooth displacement convex functions. As an application of our results, we prove the conjecture of no optimization barrier up to permutation invariance, proposed by Entezari et al. (2022), for specific two-layer neural networks with two-dimensional inputs drawn uniformly from the unit circle.
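The algorithm described in the abstract is generic: represent the measure by m particles and move every particle along its gradient in parallel. Below is a minimal sketch (not the paper's code) on a toy displacement convex objective, the potential energy F(mu) = E_{x~mu}[V(x)] with a convex quadratic potential V; the function name, target point, step size, particle count, and iteration budget are illustrative assumptions.

# Minimal sketch (illustrative only): particle gradient descent on the
# potential energy F(mu) = E_{x~mu}[V(x)] with V(x) = 0.5 * ||x - b||^2,
# a standard example of a displacement convex functional. The measure is
# represented by the empirical distribution of m particles, and all
# particles take a gradient step in parallel.
import numpy as np

def particle_gradient_descent(m=256, d=2, steps=200, lr=0.1, seed=0):
    rng = np.random.default_rng(seed)
    b = rng.standard_normal(d)               # V is minimized at b, so F is minimized by mu = delta_b
    particles = rng.standard_normal((m, d))  # m particles representing the probability measure
    for _ in range(steps):
        grad = particles - b                 # grad V(x_i) for every particle, computed in parallel
        particles = particles - lr * grad    # one parallel gradient step on all particles
    return particles, b

particles, b = particle_gradient_descent()
print(0.5 * np.mean(np.sum((particles - b) ** 2, axis=1)))  # empirical F(mu_m): close to the optimum 0

For this quadratic potential every particle contracts toward b at rate (1 - lr), so the empirical objective decays geometrically; general Lipschitz displacement convex functions only admit the particle and iteration complexities stated in the abstract.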

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-daneshmand23a,
  title     = {Efficient displacement convex optimization with particle gradient descent},
  author    = {Daneshmand, Hadi and Lee, Jason D. and Jin, Chi},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {6836--6854},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/daneshmand23a/daneshmand23a.pdf},
  url       = {https://proceedings.mlr.press/v202/daneshmand23a.html},
  abstract  = {Particle gradient descent, which uses particles to represent a probability measure and performs gradient descent on particles in parallel, is widely used to optimize functions of probability measures. This paper considers particle gradient descent with a finite number of particles and establishes its theoretical guarantees to optimize functions that are displacement convex in measures. Concretely, for Lipschitz displacement convex functions defined on probability over $R^d$, we prove that $O(1/\epsilon^2)$ particles and $O(d/\epsilon^4)$ iterations are sufficient to find the $\epsilon$-optimal solutions. We further provide improved complexity bounds for optimizing smooth displacement convex functions. An application of our results proves the conjecture of no optimization-barrier up to permutation invariance, proposed by Entezari et al. (2022), for specific two-layer neural networks with two-dimensional inputs uniformly drawn from unit circle.}
}
Endnote
%0 Conference Paper
%T Efficient displacement convex optimization with particle gradient descent
%A Hadi Daneshmand
%A Jason D. Lee
%A Chi Jin
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-daneshmand23a
%I PMLR
%P 6836--6854
%U https://proceedings.mlr.press/v202/daneshmand23a.html
%V 202
%X Particle gradient descent, which uses particles to represent a probability measure and performs gradient descent on particles in parallel, is widely used to optimize functions of probability measures. This paper considers particle gradient descent with a finite number of particles and establishes its theoretical guarantees to optimize functions that are displacement convex in measures. Concretely, for Lipschitz displacement convex functions defined on probability over $R^d$, we prove that $O(1/\epsilon^2)$ particles and $O(d/\epsilon^4)$ iterations are sufficient to find the $\epsilon$-optimal solutions. We further provide improved complexity bounds for optimizing smooth displacement convex functions. An application of our results proves the conjecture of no optimization-barrier up to permutation invariance, proposed by Entezari et al. (2022), for specific two-layer neural networks with two-dimensional inputs uniformly drawn from unit circle.
APA
Daneshmand, H., Lee, J.D. & Jin, C. (2023). Efficient displacement convex optimization with particle gradient descent. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:6836-6854. Available from https://proceedings.mlr.press/v202/daneshmand23a.html.
