Guided Zeroth-Order Methods for Stochastic Non-convex Problems with Decision-Dependent Distributions
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:23235-23260, 2025.
Abstract
In this study, we tackle an optimization problem with a known function and an unknown decision-dependent distribution, which arises in a variety of applications and is often referred to as a performative prediction problem. Several zeroth-order methods have been developed to solve this problem because the gradient of the objective function cannot be obtained explicitly due to the unknown distribution. Although these methods have theoretical convergence guarantees, they cannot exploit information about the known function, which limits their efficiency in reducing the objective value. To overcome this issue, we propose new zeroth-order methods that generate effective update directions by utilizing information about the known function. As theoretical results, we show that our methods converge to stationary points and provide a worst-case sample complexity analysis. Our simulation experiments on multiple applications show that our methods output solutions with lower objective values than existing zeroth-order methods do.
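As background for the class of methods the abstract discusses, a minimal sketch of a standard two-point zeroth-order gradient estimator is shown below. This is a generic illustration of zeroth-order estimation (not the paper's proposed guided method): it approximates the gradient of `f` using only function evaluations, by averaging finite differences along random Gaussian directions. The function name `zo_gradient` and its parameters are hypothetical choices for this sketch.

```python
import numpy as np

def zo_gradient(f, x, mu=1e-4, n_samples=1000, rng=None):
    """Two-point zeroth-order gradient estimate of f at x.

    Averages (f(x + mu*u) - f(x)) / mu * u over random Gaussian
    directions u; as mu -> 0 this is an unbiased estimator of the
    gradient of the Gaussian-smoothed version of f.
    """
    rng = np.random.default_rng(rng)
    d = x.shape[0]
    g = np.zeros(d)
    fx = f(x)  # evaluate the base point once and reuse it
    for _ in range(n_samples):
        u = rng.standard_normal(d)          # random search direction
        g += (f(x + mu * u) - fx) / mu * u  # finite-difference slope
    return g / n_samples

# Example: f(x) = ||x||^2 has true gradient 2x.
f = lambda x: float(x @ x)
x = np.array([1.0, -2.0])
g = zo_gradient(f, x, n_samples=20000, rng=0)
```

In the decision-dependent setting, each evaluation `f(x + mu*u)` would additionally require drawing samples from the distribution induced by the perturbed decision, which is why sample complexity is a central concern.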