Optimal Stochastic Non-smooth Non-convex Optimization through Online-to-Non-convex Conversion
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:6643-6670, 2023.
Abstract
We present new algorithms for optimizing non-smooth, non-convex stochastic objectives based on a novel analysis technique. This improves the current best-known complexity for finding a $(\delta,\epsilon)$-stationary point from $O(\epsilon^{-4}\delta^{-1})$ stochastic gradient queries to $O(\epsilon^{-3}\delta^{-1})$, which we also show to be optimal. Our primary technique is a reduction from non-smooth non-convex optimization to online learning, after which our results follow from standard regret bounds in online learning. For deterministic and second-order smooth objectives, applying more advanced optimistic online learning techniques enables a new complexity of $O(\epsilon^{-1.5}\delta^{-0.5})$. Our improved non-smooth analysis also immediately recovers all optimal or best-known results for finding $\epsilon$-stationary points of smooth or second-order smooth objectives in both stochastic and deterministic settings.
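To make the reduction concrete, below is a minimal sketch of an online-to-non-convex conversion loop in Python, using projected online gradient descent on a ball of radius `D` as the inner online learner: each round, the learner proposes an increment, the iterate moves by that increment, and the learner is charged the linear loss given by a stochastic gradient evaluated at a random point on the segment between consecutive iterates. The function names (`o2nc`, `stoch_grad`) and the fixed parameters `D`, `eta`, and `T` are illustrative placeholders; the paper's guarantees come from choosing these via the regret analysis and from selecting the output by averaging over windows of iterates, which this sketch omits.

```python
import numpy as np

def o2nc(stoch_grad, x0, T, D, eta, rng=None):
    """Sketch of an online-to-non-convex conversion loop.

    Inner learner: projected online gradient descent on the D-ball,
    applied to the linear losses <g_t, .>, where g_t is a stochastic
    gradient queried at a uniformly random point on the segment
    between consecutive iterates.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    delta = np.zeros_like(x)          # increment proposed by the online learner
    iterates = [x.copy()]
    for _ in range(T):
        x_prev = x
        x = x_prev + delta            # play the learner's increment
        s = rng.uniform()             # random interpolation point in [0, 1)
        g = stoch_grad(x_prev + s * delta)  # one stochastic gradient query
        delta = delta - eta * g       # OGD step on the linear loss <g, .>
        norm = np.linalg.norm(delta)
        if norm > D:
            delta *= D / norm         # project back onto the D-ball
        iterates.append(x.copy())
    # The paper outputs averaged increments over random windows; here we
    # simply return the trajectory for inspection.
    return iterates

# Toy usage: a noisy subgradient oracle for the non-smooth f(x) = ||x||_1.
rng = np.random.default_rng(0)
grad = lambda x: np.sign(x) + 0.1 * rng.standard_normal(x.shape)
traj = o2nc(grad, x0=np.ones(5), T=1000, D=0.01, eta=0.005, rng=rng)
```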