Guided Zeroth-Order Methods for Stochastic Non-convex Problems with Decision-Dependent Distributions

Yuya Hikima, Hiroshi Sawada, Akinori Fujino
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:23235-23260, 2025.

Abstract

In this study, we tackle an optimization problem with a known function and an unknown decision-dependent distribution, which arises in a variety of applications and is often referred to as a performative prediction problem. To solve the problem, several zeroth-order methods have been developed because the gradient of the objective function cannot be obtained explicitly due to the unknown distribution. Although these methods have theoretical convergence, they cannot utilize the information on the known function, which limits their efficiency in reducing the objective value. To overcome this issue, we propose new zeroth-order methods that generate effective update directions by utilizing information on the known function. As theoretical results, we show the convergence of our methods to stationary points and provide the worst-case sample complexity analysis. Our simulation experiments on multiple applications show that our methods output solutions with lower objective values than the existing zeroth-order methods do.
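As background for the abstract above: zeroth-order methods estimate gradients from function evaluations alone, which is why they apply when the decision-dependent distribution (and hence the gradient) is unknown. The sketch below is a generic two-point Gaussian-smoothing gradient estimator, not the guided method proposed in this paper; the function `zo_gradient_estimate` and all parameter names are illustrative assumptions.

```python
import numpy as np

def zo_gradient_estimate(f, x, mu=1e-4, num_samples=1000, rng=None):
    """Generic two-point zeroth-order gradient estimator.

    Treats f as a black box and averages
    (f(x + mu * u) - f(x)) / mu * u
    over random Gaussian directions u, which is an unbiased
    estimate of the gradient of the Gaussian-smoothed f.
    """
    rng = np.random.default_rng(rng)
    d = x.shape[0]
    fx = f(x)
    grad = np.zeros(d)
    for _ in range(num_samples):
        u = rng.standard_normal(d)  # random search direction
        grad += (f(x + mu * u) - fx) / mu * u
    return grad / num_samples

# Example: f(x) = ||x||^2 has true gradient 2x.
x = np.array([1.0, -2.0])
g = zo_gradient_estimate(lambda z: float(z @ z), x, num_samples=5000, rng=0)
```

In a performative-prediction setting, `f` would wrap a stochastic evaluation that samples from the decision-dependent distribution at `x`; the paper's contribution is to bias such update directions using the known part of the objective rather than relying on purely random directions.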

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-hikima25a,
  title     = {Guided Zeroth-Order Methods for Stochastic Non-convex Problems with Decision-Dependent Distributions},
  author    = {Hikima, Yuya and Sawada, Hiroshi and Fujino, Akinori},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {23235--23260},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/hikima25a/hikima25a.pdf},
  url       = {https://proceedings.mlr.press/v267/hikima25a.html},
  abstract  = {In this study, we tackle an optimization problem with a known function and an unknown decision-dependent distribution, which arises in a variety of applications and is often referred to as a performative prediction problem. To solve the problem, several zeroth-order methods have been developed because the gradient of the objective function cannot be obtained explicitly due to the unknown distribution. Although these methods have theoretical convergence, they cannot utilize the information on the known function, which limits their efficiency in reducing the objective value. To overcome this issue, we propose new zeroth-order methods that generate effective update directions by utilizing information on the known function. As theoretical results, we show the convergence of our methods to stationary points and provide the worst-case sample complexity analysis. Our simulation experiments on multiple applications show that our methods output solutions with lower objective values than the existing zeroth-order methods do.}
}
Endnote
%0 Conference Paper
%T Guided Zeroth-Order Methods for Stochastic Non-convex Problems with Decision-Dependent Distributions
%A Yuya Hikima
%A Hiroshi Sawada
%A Akinori Fujino
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-hikima25a
%I PMLR
%P 23235--23260
%U https://proceedings.mlr.press/v267/hikima25a.html
%V 267
%X In this study, we tackle an optimization problem with a known function and an unknown decision-dependent distribution, which arises in a variety of applications and is often referred to as a performative prediction problem. To solve the problem, several zeroth-order methods have been developed because the gradient of the objective function cannot be obtained explicitly due to the unknown distribution. Although these methods have theoretical convergence, they cannot utilize the information on the known function, which limits their efficiency in reducing the objective value. To overcome this issue, we propose new zeroth-order methods that generate effective update directions by utilizing information on the known function. As theoretical results, we show the convergence of our methods to stationary points and provide the worst-case sample complexity analysis. Our simulation experiments on multiple applications show that our methods output solutions with lower objective values than the existing zeroth-order methods do.
APA
Hikima, Y., Sawada, H. & Fujino, A. (2025). Guided Zeroth-Order Methods for Stochastic Non-convex Problems with Decision-Dependent Distributions. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:23235-23260. Available from https://proceedings.mlr.press/v267/hikima25a.html.