Error bounds for sparse classifiers in high-dimensions
Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, PMLR 89:48-56, 2019.
Abstract
We prove an L2 recovery bound for a family of sparse estimators defined as minimizers of empirical loss functions, including the hinge loss and the logistic loss. More precisely, we obtain an upper bound on the coefficient estimation error scaling as $(k^*/n)\log(p/k^*)$, where $n \times p$ is the size of the design matrix and $k^*$ is the sparsity of the theoretical loss minimizer. This is done under standard assumptions, for which we derive stronger versions of a cone condition and of restricted strong convexity. Our bound holds with high probability and in expectation, and it applies both to an L1-regularized estimator and to the recently introduced Slope estimator, which we generalize to classification problems. Slope has the advantage of adapting to unknown sparsity; we therefore propose a tractable proximal algorithm to compute it and assess its empirical performance. Our results match the best existing bounds for classification and regression problems.
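The abstract refers to a tractable proximal algorithm for computing the Slope estimator in classification. As a minimal sketch (the paper's exact algorithm is not reproduced here), the code below combines plain proximal gradient on the smooth logistic loss, since the hinge loss is non-smooth, with the standard stack-based proximal operator of the sorted-L1 norm from the Slope literature (Bogdan et al., 2015). The function names `prox_slope` and `slope_logistic`, the step size, the weight sequence $\lambda_j \propto \sqrt{\log(2p/j)}$, and the toy data are all illustrative assumptions, not the paper's specification.

```python
import numpy as np

def prox_slope(v, lam):
    """Proximal operator of the sorted-L1 (Slope) norm.

    lam must be non-negative and non-increasing. Uses the standard
    stack-based pool-adjacent-violators (PAVA) reduction.
    """
    sign = np.sign(v)
    u = np.abs(v)
    order = np.argsort(-u)          # indices sorting |v| in decreasing order
    z = u[order] - lam              # shifted values to project
    # Non-increasing isotonic regression of z; each block is [end, sum, count].
    blocks = []
    for i, zi in enumerate(z):
        blocks.append([i, zi, 1])
        # merge while a later block's average exceeds an earlier one's
        while len(blocks) > 1 and \
                blocks[-2][1] / blocks[-2][2] <= blocks[-1][1] / blocks[-1][2]:
            end, s, c = blocks.pop()
            blocks[-1][0] = end
            blocks[-1][1] += s
            blocks[-1][2] += c
    x_sorted = np.empty_like(z)
    start = 0
    for end, s, c in blocks:
        x_sorted[start:end + 1] = max(s / c, 0.0)  # clip at zero
        start = end + 1
    x = np.empty_like(x_sorted)
    x[order] = x_sorted             # undo the sort
    return sign * x

def slope_logistic(X, y, lam, n_iter=500):
    """Proximal gradient for Slope-penalized logistic regression.

    Minimizes (1/n) * sum_i log(1 + exp(-y_i <x_i, w>)) + sum_j lam_j |w|_(j),
    with labels y in {-1, +1}.
    """
    n, p = X.shape
    step = 4.0 * n / np.linalg.norm(X, 2) ** 2   # 1/L, since L <= ||X||^2/(4n)
    w = np.zeros(p)
    for _ in range(n_iter):
        margins = np.clip(y * (X @ w), -30, 30)  # guard against exp overflow
        grad = -X.T @ (y / (1.0 + np.exp(margins))) / n
        w = prox_slope(w - step * grad, step * lam)
    return w

# Toy usage with weights lam_j proportional to sqrt(log(2p/j)), one common
# choice for adapting to unknown sparsity.
rng = np.random.default_rng(0)
n, p, k = 200, 50, 5
X = rng.standard_normal((n, p))
w_true = np.zeros(p)
w_true[:k] = 1.0
y = np.sign(X @ w_true + 0.1 * rng.standard_normal(n))
lam = 0.05 * np.sqrt(np.log(2 * p / np.arange(1, p + 1)))
w_hat = slope_logistic(X, y, lam)
```

The decreasing weight sequence is what distinguishes Slope from a plain L1 penalty: larger coefficients receive heavier penalization, which is how the estimator adapts to an unknown sparsity level without tuning a single regularization parameter to $k^*$.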