Error bounds for sparse classifiers in high-dimensions
Proceedings of Machine Learning Research, PMLR 89:48-56, 2019.
Abstract
We prove an L2 recovery bound for a family of sparse estimators defined as minimizers of some empirical loss functions, a family which includes the hinge loss and the logistic loss. More precisely, we establish an upper bound for coefficient estimation scaling as $(k^*/n)\log(p/k^*)$, where $n$ and $p$ are the dimensions of the design matrix and $k^*$ the sparsity of the theoretical loss minimizer. This is done under standard assumptions, for which we derive stronger versions of a cone condition and of restricted strong convexity. Our bound holds both with high probability and in expectation, and applies to an L1-regularized estimator and to a recently introduced Slope estimator, which we generalize to classification problems. Slope presents the advantage of adapting to unknown sparsity. We therefore propose a tractable proximal algorithm to compute it and assess its empirical performance. Our results match the best existing bounds for classification and regression problems.
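The paper's proximal algorithm for the classification Slope estimator is not reproduced on this page. As an illustration only, the following is a minimal sketch of a proximal-gradient scheme for Slope-penalized logistic regression, assuming the standard sorted-L1 proximal operator computed by a stack-based pool-adjacent-violators pass; all function names, the step-size choice, and the weight sequence `lam` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def prox_sorted_l1(v, lam):
    """Prox of the sorted-L1 (Slope) penalty:
        argmin_x 0.5*||x - v||^2 + sum_i lam[i] * |x|_(i),
    where |x|_(1) >= ... >= |x|_(p) and lam is nonincreasing.
    Stack-based pool-adjacent-violators on the sorted magnitudes."""
    sign = np.sign(v)
    a = np.abs(v)
    order = np.argsort(-a)          # indices sorting |v| in decreasing order
    z = a[order] - lam              # shifted sorted magnitudes
    vals, sizes = [], []            # blocks of the nonincreasing isotonic fit
    for x in z:
        vals.append(x)
        sizes.append(1)
        # merge blocks while the nonincreasing constraint is violated
        while len(vals) > 1 and vals[-1] > vals[-2]:
            x_top, s_top = vals.pop(), sizes.pop()
            vals[-1] = (vals[-1] * sizes[-1] + x_top * s_top) / (sizes[-1] + s_top)
            sizes[-1] += s_top
    fit = np.concatenate([np.full(s, max(val, 0.0)) for val, s in zip(vals, sizes)])
    out = np.empty_like(a)
    out[order] = fit                # undo the sort
    return sign * out

def slope_logistic(X, y, lam, n_iter=500):
    """Proximal gradient for Slope-penalized logistic regression, y in {0, 1}."""
    n, p = X.shape
    w = np.zeros(p)
    step = 4.0 * n / np.linalg.norm(X, 2) ** 2   # 1/L, logistic gradient is L-Lipschitz
    for _ in range(n_iter):
        m = np.clip(X @ w, -30, 30)              # clip margins to avoid overflow in exp
        grad = X.T @ (1.0 / (1.0 + np.exp(-m)) - y) / n
        w = prox_sorted_l1(w - step * grad, step * lam)
    return w
```

With `lam = 0` the prox reduces to the identity; a decreasing weight sequence penalizes the largest coefficients most, which is the mechanism behind Slope's adaptivity to the unknown sparsity $k^*$.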