Subsampling for Ridge Regression via Regularized Volume Sampling

Michal Derezinski, Manfred Warmuth
Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, PMLR 84:716-725, 2018.

Abstract

Given n vectors $x_i \in \mathbb{R}^d$, we want to fit a linear regression model for noisy labels $y_i \in \mathbb{R}$. The ridge estimator is a classical solution to this problem. However, when labels are expensive, we are forced to select only a small subset of vectors $x_i$ for which we obtain the labels $y_i$. We propose a new procedure for selecting the subset of vectors, such that the ridge estimator obtained from that subset offers strong statistical guarantees in terms of the mean squared prediction error over the entire dataset of n labeled vectors. The number of labels needed is proportional to the statistical dimension of the problem, which is often much smaller than d. Our method is an extension of a joint subsampling procedure called volume sampling. A second major contribution is that we speed up volume sampling so that it is essentially as efficient as leverage score sampling, the main i.i.d. subsampling procedure for this task. Finally, we show theoretically and experimentally that volume sampling has a clear advantage over any i.i.d. sampling when labels are expensive.
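
The abstract describes a three-step pipeline: pick a label budget k on the order of the statistical dimension $d_\lambda = \sum_i s_i^2 / (s_i^2 + \lambda)$ (with $s_i$ the singular values of the data matrix), draw a size-k subset jointly via regularized volume sampling, and fit the ridge estimator on the labeled subset alone. The Python sketch below is not the authors' code; it illustrates one common reading of this recipe, using reverse iterative elimination in which row i is dropped with probability proportional to $1 - x_i^\top (X_S^\top X_S + \lambda I)^{-1} x_i$. The removal rule, the function names, and the factor of 2 in the budget are illustrative assumptions, not details taken from this page.

import numpy as np

def statistical_dimension(X, lam):
    # d_lambda = sum_i s_i^2 / (s_i^2 + lambda), s_i the singular values of X
    s = np.linalg.svd(X, compute_uv=False)
    return np.sum(s**2 / (s**2 + lam))

def regularized_volume_sample(X, k, lam, rng=None):
    # Reverse iterative elimination (assumed update rule): start from all n
    # rows and repeatedly drop row i with probability proportional to
    # 1 - x_i^T (X_S^T X_S + lam*I)^{-1} x_i, until k rows remain.
    rng = np.random.default_rng(rng)
    n, d = X.shape
    S = list(range(n))
    while len(S) > k:
        Xs = X[S]
        A_inv = np.linalg.inv(Xs.T @ Xs + lam * np.eye(d))
        # regularized leverage x_i^T A_inv x_i of each currently kept row
        h = np.einsum('ij,jk,ik->i', Xs, A_inv, Xs)
        p = np.clip(1.0 - h, 0.0, None)
        p /= p.sum()
        S.pop(rng.choice(len(S), p=p))
    return np.array(S)

def ridge_on_subset(X, y, S, lam):
    # Ridge estimator fit only on the labeled subset S
    Xs, ys = X[S], y[S]
    d = X.shape[1]
    return np.linalg.solve(Xs.T @ Xs + lam * np.eye(d), Xs.T @ ys)

# Usage on synthetic data: label only ~d_lambda points instead of all n.
rng = np.random.default_rng(0)
n, d, lam = 500, 20, 1.0
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.1 * rng.standard_normal(n)

k = int(np.ceil(2 * statistical_dimension(X, lam)))  # budget tied to d_lambda; factor 2 is arbitrary
S = regularized_volume_sample(X, k, lam, rng=rng)
w_hat = ridge_on_subset(X, y, S, lam)
print(f"labels used: {k} of {n}; mean squared prediction error:",
      np.mean((X @ w_hat - X @ w_true) ** 2))

Joint sampling matters here because each elimination step depends on the rows still retained, making the selected rows negatively correlated; this is the mechanism the abstract credits for the advantage over i.i.d. schemes such as leverage score sampling when the label budget is small, a claim the sketch can only illustrate, not prove.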

Cite this Paper


BibTeX
@InProceedings{pmlr-v84-derezinski18a,
  title     = {Subsampling for Ridge Regression via Regularized Volume Sampling},
  author    = {Derezinski, Michal and Warmuth, Manfred},
  booktitle = {Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics},
  pages     = {716--725},
  year      = {2018},
  editor    = {Storkey, Amos and Perez-Cruz, Fernando},
  volume    = {84},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--11 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v84/derezinski18a/derezinski18a.pdf},
  url       = {https://proceedings.mlr.press/v84/derezinski18a.html}
}
Endnote
%0 Conference Paper
%T Subsampling for Ridge Regression via Regularized Volume Sampling
%A Michal Derezinski
%A Manfred Warmuth
%B Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2018
%E Amos Storkey
%E Fernando Perez-Cruz
%F pmlr-v84-derezinski18a
%I PMLR
%P 716--725
%U https://proceedings.mlr.press/v84/derezinski18a.html
%V 84
APA
Derezinski, M. & Warmuth, M. (2018). Subsampling for Ridge Regression via Regularized Volume Sampling. Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 84:716-725. Available from https://www.proceedings.mlr.press/v84/derezinski18a.html.