Stochastic Mirror Descent for Large-Scale Sparse Recovery

Sasila Ilandarideva, Yannis Bekri, Anatoli Iouditski, Vianney Perchet
Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, PMLR 206:5931-5957, 2023.

Abstract

We discuss an application of Stochastic Approximation to the statistical estimation of high-dimensional sparse parameters. The proposed solution reduces to solving a penalized stochastic optimization problem at each stage of a multistage algorithm, each problem being solved to a prescribed accuracy by the non-Euclidean Composite Stochastic Mirror Descent (CSMD) algorithm. Assuming that the problem objective is smooth and quadratically minorated and that the stochastic perturbations are sub-Gaussian, our analysis prescribes the method parameters which ensure fast convergence of the estimation error (the radius of a confidence ball of a given norm around the approximate solution). This convergence is linear during the first “preliminary” phase of the routine and sublinear during the second “asymptotic” phase. We consider an application of the proposed approach to the sparse Generalized Linear Regression problem. In this setting, we show that the proposed algorithm attains the optimal convergence of the estimation error under weak assumptions on the regressor distribution. We also present a numerical study illustrating the performance of the algorithm on high-dimensional simulated data.
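As a rough illustration of the per-stage routine described above, the following is a minimal Euclidean sketch of composite stochastic mirror descent applied to a toy sparse linear regression. It is not the paper's method: the actual CSMD algorithm uses a non-Euclidean proximal setup and a multistage restart scheme with prescribed parameters, both of which this toy version omits; with the Euclidean distance-generating function, the composite prox step reduces to soft-thresholding. All function names and parameter values here are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, tau):
    # Euclidean prox of tau * ||.||_1: shrink each coordinate toward 0.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def composite_smd(grad_sample, x0, n_steps, step, lam, rng):
    """One stage of composite stochastic mirror descent (Euclidean case).

    Each iteration takes a stochastic gradient step on the smooth part
    of the objective, then applies the prox of the l1 penalty
    (soft-thresholding).  The trajectory average is returned.
    """
    x = x0.copy()
    avg = np.zeros_like(x0)
    for _ in range(n_steps):
        g = grad_sample(x, rng)                       # unbiased stochastic gradient
        x = soft_threshold(x - step * g, step * lam)  # composite prox step
        avg += x
    return avg / n_steps

# Toy sparse regression data: y = <phi, x*> + noise, s-sparse x*.
rng = np.random.default_rng(0)
d, s = 200, 5
x_star = np.zeros(d)
x_star[:s] = 1.0

def grad_sample(x, rng):
    # Stochastic gradient of 0.5 * (<phi, x> - y)^2 from one fresh sample.
    phi = rng.standard_normal(d)
    y = phi @ x_star + 0.1 * rng.standard_normal()
    return (phi @ x - y) * phi

x_hat = composite_smd(grad_sample, np.zeros(d), n_steps=4000,
                      step=0.002, lam=0.05, rng=rng)
```

The l1 prox keeps the iterates (approximately) sparse throughout, which is what makes the per-stage subproblem tractable in high dimension; the multistage scheme of the paper then restarts this routine with updated parameters to obtain the linear/sublinear two-phase convergence.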

Cite this Paper


BibTeX
@InProceedings{pmlr-v206-ilandarideva23a,
  title     = {Stochastic Mirror Descent for Large-Scale Sparse Recovery},
  author    = {Ilandarideva, Sasila and Bekri, Yannis and Iouditski, Anatoli and Perchet, Vianney},
  booktitle = {Proceedings of The 26th International Conference on Artificial Intelligence and Statistics},
  pages     = {5931--5957},
  year      = {2023},
  editor    = {Ruiz, Francisco and Dy, Jennifer and van de Meent, Jan-Willem},
  volume    = {206},
  series    = {Proceedings of Machine Learning Research},
  month     = {25--27 Apr},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v206/ilandarideva23a/ilandarideva23a.pdf},
  url       = {https://proceedings.mlr.press/v206/ilandarideva23a.html},
  abstract  = {We discuss an application of Stochastic Approximation to statistical estimation of high-dimensional sparse parameters. The proposed solution reduces to resolving a penalized stochastic optimization problem on each stage of a multistage algorithm; each problem being solved to a prescribed accuracy by the non-Euclidean Composite Stochastic Mirror Descent (CSMD) algorithm. Assuming that the problem objective is smooth and quadratically minorated and stochastic perturbations are sub-Gaussian, our analysis prescribes the method parameters which ensure fast convergence of the estimation error (the radius of a confidence ball of a given norm around the approximate solution). This convergence is linear during the first “preliminary” phase of the routine and is sublinear during the second “asymptotic” phase. We consider an application of the proposed approach to sparse Generalized Linear Regression problem. In this setting, we show that the proposed algorithm attains the optimal convergence of the estimation error under weak assumptions on the regressor distribution. We also present a numerical study illustrating the performance of the algorithm on high-dimensional simulation data.}
}
Endnote
%0 Conference Paper
%T Stochastic Mirror Descent for Large-Scale Sparse Recovery
%A Sasila Ilandarideva
%A Yannis Bekri
%A Anatoli Iouditski
%A Vianney Perchet
%B Proceedings of The 26th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2023
%E Francisco Ruiz
%E Jennifer Dy
%E Jan-Willem van de Meent
%F pmlr-v206-ilandarideva23a
%I PMLR
%P 5931--5957
%U https://proceedings.mlr.press/v206/ilandarideva23a.html
%V 206
%X We discuss an application of Stochastic Approximation to statistical estimation of high-dimensional sparse parameters. The proposed solution reduces to resolving a penalized stochastic optimization problem on each stage of a multistage algorithm; each problem being solved to a prescribed accuracy by the non-Euclidean Composite Stochastic Mirror Descent (CSMD) algorithm. Assuming that the problem objective is smooth and quadratically minorated and stochastic perturbations are sub-Gaussian, our analysis prescribes the method parameters which ensure fast convergence of the estimation error (the radius of a confidence ball of a given norm around the approximate solution). This convergence is linear during the first “preliminary” phase of the routine and is sublinear during the second “asymptotic” phase. We consider an application of the proposed approach to sparse Generalized Linear Regression problem. In this setting, we show that the proposed algorithm attains the optimal convergence of the estimation error under weak assumptions on the regressor distribution. We also present a numerical study illustrating the performance of the algorithm on high-dimensional simulation data.
APA
Ilandarideva, S., Bekri, Y., Iouditski, A., & Perchet, V. (2023). Stochastic Mirror Descent for Large-Scale Sparse Recovery. Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 206:5931-5957. Available from https://proceedings.mlr.press/v206/ilandarideva23a.html.