Fast Proximal Gradient Descent for A Class of Non-convex and Non-smooth Sparse Learning Problems
Proceedings of The 35th Uncertainty in Artificial Intelligence Conference, PMLR 115:1253-1262, 2020.
Abstract
Non-convex and non-smooth optimization problems are important in statistics and machine learning, but solving them remains challenging. In this paper, we propose fast methods based on proximal gradient descent for a class of non-convex and non-smooth sparse learning problems, namely $\ell^0$ regularization problems. We prove an improved convergence rate of proximal gradient descent on $\ell^0$ regularization problems, and propose two accelerated variants based on support projection. The convergence rates of the proposed accelerated proximal gradient descent methods with support projection match Nesterov's optimal convergence rate for first-order methods on smooth, convex objective functions with Lipschitz continuous gradients. Experimental results demonstrate the effectiveness of the proposed algorithms. We also propose feedforward neural networks as fast encoders that approximate the optimization results produced by the proposed accelerated algorithms.
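For concreteness, here is a minimal Python sketch of proximal gradient descent for an $\ell^0$ regularized problem, assuming a least-squares smooth term; the function names and the specific loss are illustrative assumptions, not the paper's exact formulation. It relies on a standard fact: the proximal mapping of $s\lambda\|x\|_0$ has a closed form, hard thresholding, which zeroes every coordinate whose magnitude is at most $\sqrt{2 s \lambda}$.

```python
import numpy as np

def prox_l0(u, threshold):
    """Proximal operator of s*lam*||x||_0: hard thresholding.
    Keeps entries with |u_i| > threshold = sqrt(2*s*lam), zeroes the rest."""
    x = u.copy()
    x[np.abs(u) <= threshold] = 0.0
    return x

def proximal_gradient_l0(A, b, lam, step=None, n_iters=200):
    """Proximal gradient descent (iterative hard thresholding) for
    min_x 0.5*||Ax - b||^2 + lam*||x||_0 (an illustrative choice of loss)."""
    if step is None:
        # Step size 1/L, where L = ||A||_2^2 is the Lipschitz
        # constant of the gradient of the smooth least-squares term.
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    threshold = np.sqrt(2.0 * step * lam)
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)                 # gradient of the smooth part
        x = prox_l0(x - step * grad, threshold)  # hard-thresholding prox step
    return x
```

The $1/L$ step size mirrors the Lipschitz-gradient assumption under which the convergence rates above are stated; the accelerated variants in the paper additionally apply support projection to the iterates, which this plain sketch does not attempt to reproduce.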