Stein Variational Evolution Strategies

Cornelius V. Braun, Robert Tjarko Lange, Marc Toussaint
Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, PMLR 286:398-420, 2025.

Abstract

Efficient global optimization and sampling are fundamental challenges, particularly in fields such as robotics and reinforcement learning, where gradients may be unavailable or unreliable. In this context, jointly optimizing multiple solutions is a promising approach to avoid local optima. While Stein Variational Gradient Descent (SVGD) provides a powerful framework for sampling diverse solutions, its reliance on first-order information limits its applicability to differentiable objectives. Existing gradient-free SVGD variants often suffer from slow convergence and poor scalability. To improve gradient-free sampling and optimization, we propose Stein Variational CMA-ES, a novel gradient-free SVGD-like method that combines the efficiency of evolution strategies with SVGD-based repulsion forces. We perform an extensive empirical evaluation across several domains, which shows that the integration of the ES update in SVGD significantly improves the performance on multiple challenging benchmark problems. Our findings establish SV-CMA-ES as a scalable method for zero-order sampling and blackbox optimization, bridging the gap between SVGD and evolution strategies.
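To make the SVGD framework that the paper builds on concrete, the sketch below implements vanilla gradient-based SVGD with an RBF kernel on a toy Gaussian target. This is *not* the paper's SV-CMA-ES (which replaces the score gradient with ES-based updates); it only illustrates the attraction/repulsion decomposition of the SVGD particle update that the abstract refers to. All function names and parameter values here are illustrative choices, not taken from the paper.

```python
import numpy as np

def rbf_kernel(X, h=1.0):
    """RBF kernel matrix K[j, i] = k(x_j, x_i) and its gradients w.r.t. x_j."""
    diff = X[:, None, :] - X[None, :, :]            # diff[j, i] = x_j - x_i
    K = np.exp(-np.sum(diff**2, axis=-1) / (2 * h**2))
    gradK = -diff / h**2 * K[:, :, None]            # grad_{x_j} k(x_j, x_i)
    return K, gradK

def svgd_step(X, score, eps=0.5, h=1.0):
    """One SVGD update: kernel-weighted scores (attraction) + kernel gradients (repulsion)."""
    n = X.shape[0]
    K, gradK = rbf_kernel(X, h)
    phi = (K @ score(X) + gradK.sum(axis=0)) / n
    return X + eps * phi

# Toy target: standard 2-D Gaussian, whose score is grad log p(x) = -x.
score = lambda X: -X
rng = np.random.default_rng(0)
X = rng.normal(3.0, 0.5, size=(50, 2))  # particles start far from the mode
for _ in range(500):
    X = svgd_step(X, score)
print(np.round(X.mean(axis=0), 1))  # particle mean drifts toward the origin
```

The repulsion term (the summed kernel gradients) is what keeps the particles spread out instead of collapsing onto the mode; the paper's contribution is to retain this term while replacing the score gradient with a CMA-ES-style zero-order update.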

Cite this Paper


BibTeX
@InProceedings{pmlr-v286-braun25a,
  title     = {Stein Variational Evolution Strategies},
  author    = {Braun, Cornelius V. and Lange, Robert Tjarko and Toussaint, Marc},
  booktitle = {Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence},
  pages     = {398--420},
  year      = {2025},
  editor    = {Chiappa, Silvia and Magliacane, Sara},
  volume    = {286},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--25 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v286/main/assets/braun25a/braun25a.pdf},
  url       = {https://proceedings.mlr.press/v286/braun25a.html},
  abstract  = {Efficient global optimization and sampling are fundamental challenges, particularly in fields such as robotics and reinforcement learning, where gradients may be unavailable or unreliable. In this context, jointly optimizing multiple solutions is a promising approach to avoid local optima. While Stein Variational Gradient Descent (SVGD) provides a powerful framework for sampling diverse solutions, its reliance on first-order information limits its applicability to differentiable objectives. Existing gradient-free SVGD variants often suffer from slow convergence, and poor scalability. To improve gradient-free sampling and optimization, we propose Stein Variational CMA-ES, a novel gradient-free SVGD-like method that combines the efficiency of evolution strategies with SVGD-based repulsion forces. We perform an extensive empirical evaluation across several domains, which shows that the integration of the ES update in SVGD significantly improves the performance on multiple challenging benchmark problems. Our findings establish SV-CMA-ES as a scalable method for zero-order sampling and blackbox optimization, bridging the gap between SVGD and evolution strategies.}
}
Endnote
%0 Conference Paper
%T Stein Variational Evolution Strategies
%A Cornelius V. Braun
%A Robert Tjarko Lange
%A Marc Toussaint
%B Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2025
%E Silvia Chiappa
%E Sara Magliacane
%F pmlr-v286-braun25a
%I PMLR
%P 398--420
%U https://proceedings.mlr.press/v286/braun25a.html
%V 286
%X Efficient global optimization and sampling are fundamental challenges, particularly in fields such as robotics and reinforcement learning, where gradients may be unavailable or unreliable. In this context, jointly optimizing multiple solutions is a promising approach to avoid local optima. While Stein Variational Gradient Descent (SVGD) provides a powerful framework for sampling diverse solutions, its reliance on first-order information limits its applicability to differentiable objectives. Existing gradient-free SVGD variants often suffer from slow convergence, and poor scalability. To improve gradient-free sampling and optimization, we propose Stein Variational CMA-ES, a novel gradient-free SVGD-like method that combines the efficiency of evolution strategies with SVGD-based repulsion forces. We perform an extensive empirical evaluation across several domains, which shows that the integration of the ES update in SVGD significantly improves the performance on multiple challenging benchmark problems. Our findings establish SV-CMA-ES as a scalable method for zero-order sampling and blackbox optimization, bridging the gap between SVGD and evolution strategies.
APA
Braun, C.V., Lange, R.T. & Toussaint, M. (2025). Stein Variational Evolution Strategies. Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 286:398-420. Available from https://proceedings.mlr.press/v286/braun25a.html.