Minimax Linear Regression under the Quantile Risk

Ayoub El Hanchi, Chris Maddison, Murat Erdogdu
Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:1516-1572, 2024.

Abstract

We study the problem of designing minimax procedures in linear regression under the quantile risk. We start by considering the realizable setting with independent Gaussian noise, where for any given noise level and distribution of inputs, we obtain the \emph{exact} minimax quantile risk for a rich family of error functions and establish the minimaxity of OLS. This improves on the lower bounds obtained by Lecué and Mendelson (2016) and Mendelson (2017) for the special case of square error, and provides us with a lower bound on the minimax quantile risk over larger sets of distributions. Under the square error and a fourth moment assumption on the distribution of inputs, we show that this lower bound is tight over a larger class of problems. Specifically, we prove a matching upper bound on the worst-case quantile risk of a variant of the procedure proposed by Lecué and Lerasle (2020), thereby establishing its minimaxity, up to absolute constants. We illustrate the usefulness of our approach by extending this result to all $p$-th power error functions for $p \in (2, \infty)$. Along the way, we develop a generic analogue to the classical Bayesian method for lower bounding the minimax risk when working with the quantile risk, as well as a tight characterization of the quantiles of the smallest eigenvalue of the sample covariance matrix.
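
As a point of reference, the following is one plausible formalization of the quantile-risk objective described above; the notation is ours and need not match the paper's. Given a class of data distributions $\mathcal{P}$, a sample $S = \{(X_i, Y_i)\}_{i=1}^n \sim P^n$, and an estimator $\hat{\theta} = \hat{\theta}(S)$ with (random) excess risk $R_P(\hat{\theta})$, the quantile risk at confidence level $1 - \delta$ is the $(1-\delta)$-quantile of this random variable,
$$ Q_{1-\delta}\big(R_P(\hat{\theta})\big) = \inf\Big\{ t \ge 0 : \Pr_{S \sim P^n}\big( R_P(\hat{\theta}(S)) \le t \big) \ge 1 - \delta \Big\}, $$
and the minimax quantile risk over $\mathcal{P}$ is
$$ \inf_{\hat{\theta}} \sup_{P \in \mathcal{P}} Q_{1-\delta}\big(R_P(\hat{\theta})\big), $$
i.e., the best high-probability error guarantee achievable uniformly over the class.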

Cite this Paper


BibTeX
@InProceedings{pmlr-v247-el-hanchi24a,
  title     = {Minimax Linear Regression under the Quantile Risk},
  author    = {El Hanchi, Ayoub and Maddison, Chris and Erdogdu, Murat},
  booktitle = {Proceedings of Thirty Seventh Conference on Learning Theory},
  pages     = {1516--1572},
  year      = {2024},
  editor    = {Agrawal, Shipra and Roth, Aaron},
  volume    = {247},
  series    = {Proceedings of Machine Learning Research},
  month     = {30 Jun--03 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v247/el-hanchi24a/el-hanchi24a.pdf},
  url       = {https://proceedings.mlr.press/v247/el-hanchi24a.html}
}
APA
El Hanchi, A., Maddison, C. & Erdogdu, M. (2024). Minimax Linear Regression under the Quantile Risk. Proceedings of Thirty Seventh Conference on Learning Theory, in Proceedings of Machine Learning Research 247:1516-1572. Available from https://proceedings.mlr.press/v247/el-hanchi24a.html.
