Improved Dimensionality Dependence for Zeroth-Order Optimisation over Cross-Polytopes
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:44413-44428, 2024.
Abstract
This work proposes an algorithm that improves the dimensionality dependence of gradient-free optimisation over cross-polytopes, a setting with many applications such as adversarial attacks, explainable AI and sparse regression. For bandit convex optimisation with two-point feedback over cross-polytopes, the state-of-the-art algorithms have a dimensionality dependence of O(√(d log d)), while the known lower bound is of the form Ω(√d (log d)^{-1}). We propose a mirror descent algorithm equipped with a symmetric version of the negative 1/2-Tsallis entropy. Combined with an ℓ1-ellipsoidal smoothing-based gradient estimator, the proposed algorithm guarantees a dimensionality dependence of O(√d), improving on the state-of-the-art algorithms by a factor of √(log d). The idea can be further applied to optimising non-smooth and non-convex functions; for this setting we propose an algorithm whose convergence depends on O(d), which is the best-known dimensionality dependence.
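To make the high-level recipe in the abstract concrete, the following is a minimal, illustrative Python sketch of two-point zeroth-order mirror descent over the cross-polytope {x : ||x||_1 ≤ 1}. It is not the paper's algorithm: the lifted 2d-simplex parameterisation, the plain ℓ2-sphere smoothing in the gradient estimator (standing in for the paper's ℓ1-ellipsoidal smoothing), the exact form of the symmetric negative 1/2-Tsallis entropy, and the fixed step size and smoothing radius are all assumptions made for illustration.

```python
import numpy as np


def two_point_grad_estimate(f, x, delta, rng):
    """Two-point zeroth-order gradient estimate.

    Uses a direction drawn uniformly from the unit l2 sphere; the paper's
    l1-ellipsoidal smoothing uses a different sampling distribution, which
    is not reproduced here. Boundary handling (queries slightly outside the
    feasible set) is ignored in this sketch.
    """
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)
    return (d / (2.0 * delta)) * (f(x + delta * u) - f(x - delta * u)) * u


def tsallis_simplex_step(p, g, eta):
    """One mirror-descent step on the probability simplex with the negative
    1/2-Tsallis entropy psi(p) = -2 * sum(sqrt(p)).

    The Bregman projection back onto the simplex reduces to finding a scalar
    shift lam with sum(1 / (theta + lam)^2) = 1, located here by bisection.
    """
    p = np.maximum(p, 1e-12)                 # numerical floor
    theta = -1.0 / np.sqrt(p) - eta * g      # dual update: grad psi(p) - eta * g
    upper = -theta.max() - 1e-12             # need theta_i + lam < 0 for all i
    lower = upper - 1.0
    while np.sum(1.0 / (theta + lower) ** 2) >= 1.0:
        lower -= (upper - lower)             # expand until the sum drops below one
    for _ in range(100):                     # bisection on the normaliser lam
        mid = 0.5 * (lower + upper)
        if np.sum(1.0 / (theta + mid) ** 2) < 1.0:
            lower = mid
        else:
            upper = mid
    lam = 0.5 * (lower + upper)
    return 1.0 / (theta + lam) ** 2


def zo_mirror_descent_l1(f, d, T, eta=0.1, delta=1e-3, seed=0):
    """Zeroth-order mirror descent over the cross-polytope {||x||_1 <= 1},
    using the lifted 2d-simplex parameterisation x = p[:d] - p[d:]."""
    rng = np.random.default_rng(seed)
    p = np.full(2 * d, 1.0 / (2 * d))        # uniform start on the 2d-simplex
    iterates = []
    for _ in range(T):
        x = p[:d] - p[d:]
        g = two_point_grad_estimate(f, x, delta, rng)
        lifted = np.concatenate([g, -g])     # <g, x> = <lifted, p>
        p = tsallis_simplex_step(p, lifted, eta)
        iterates.append(x)
    return np.mean(iterates, axis=0)         # averaged iterate


if __name__ == "__main__":
    # Toy usage: minimise a smooth convex quadratic over the l1 ball.
    d = 20
    target = np.zeros(d)
    target[0] = 0.5
    f = lambda x: 0.5 * np.sum((x - target) ** 2)
    x_hat = zo_mirror_descent_l1(f, d, T=2000)
    print("f(x_hat) =", f(x_hat))
```

The lifted parameterisation is used only because mirror descent with a Tsallis-entropy regulariser has a simple closed-form projection on the simplex; how the paper handles the cross-polytope geometry and tunes eta and delta to obtain the O(√d) dependence is not captured by this sketch.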