Fast Algorithms for Sparse Reduced-Rank Regression
Proceedings of Machine Learning Research, PMLR 89:2415-2424, 2019.
Abstract
We consider a reformulation of Reduced-Rank Regression (RRR) and Sparse Reduced-Rank Regression (SRRR) as a nonconvex, nondifferentiable function of a single one of the two matrices usually introduced to parametrize low-rank matrix learning problems. We study the behavior of proximal gradient algorithms for the minimization of this objective. In particular, based on an analysis of the geometry of the problem, we establish that a proximal Polyak-Łojasiewicz inequality is satisfied in a neighborhood of the set of optima under a condition on the regularization parameter. We can consequently derive linear convergence rates for proximal gradient descent with line search, and for related algorithms, in a neighborhood of the optima. Our experiments show that our formulation leads to much faster learning algorithms for RRR and especially for SRRR.
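To illustrate the kind of method the abstract describes, the sketch below implements proximal gradient descent with backtracking line search on one common single-matrix formulation of SRRR: the smooth part is the value function f(U) = min over orthonormal V of 0.5‖Y − XUVᵀ‖²_F (the inner problem has a closed-form Procrustes solution via an SVD), and the nonsmooth part is a row-wise group-lasso penalty. This is a minimal, assumed setup for illustration, not the paper's exact formulation or algorithm; all function names and parameter choices here are hypothetical.

```python
import numpy as np

def group_soft_threshold(U, tau):
    """Row-wise soft-thresholding: prox of tau * sum_i ||U[i, :]||_2."""
    norms = np.linalg.norm(U, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))
    return scale * U

def srrr_value(X, Y, U, lam):
    """Evaluate f(U) = min_{V: V^T V = I} 0.5||Y - X U V^T||_F^2 plus the
    group-lasso penalty. The inner minimization is an orthogonal Procrustes
    problem, solved in closed form from the SVD of Y^T X U."""
    P, _, Qt = np.linalg.svd(Y.T @ (X @ U), full_matrices=False)
    V = P @ Qt
    f = 0.5 * np.sum((Y - X @ U @ V.T) ** 2)
    pen = lam * np.sum(np.linalg.norm(U, axis=1))
    return f, pen, V

def prox_grad_srrr(X, Y, rank, lam, iters=300, step=1.0, beta=0.5, seed=0):
    """Proximal gradient with backtracking line search on U (p x rank).
    The gradient of the value function f at U (where it is differentiable)
    is X^T (X U V^T - Y) V, with V the inner Procrustes minimizer."""
    rng = np.random.default_rng(seed)
    U = 0.01 * rng.standard_normal((X.shape[1], rank))
    for _ in range(iters):
        f, _, V = srrr_value(X, Y, U, lam)
        grad = X.T @ (X @ U @ V.T - Y) @ V
        t = step
        for _ in range(50):  # backtracking until sufficient decrease holds
            U_new = group_soft_threshold(U - t * grad, t * lam)
            f_new, _, _ = srrr_value(X, Y, U_new, lam)
            D = U_new - U
            if f_new <= f + np.sum(grad * D) + np.sum(D ** 2) / (2 * t):
                break
            t *= beta
        U = U_new
    return U
```

Rows of the returned U that are exactly zero correspond to predictors excluded from the fitted low-rank coefficient matrix, which is the sparsity pattern the group penalty is meant to induce.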