Rigorous Guarantees for Tyler’s M-Estimator via Quantum Expansion

William Cole Franks, Ankur Moitra
Proceedings of Thirty Third Conference on Learning Theory, PMLR 125:1601-1632, 2020.

Abstract

Estimating the shape of an elliptical distribution is a fundamental problem in statistics. One estimator for the shape matrix, Tyler’s M-estimator, has been shown to have many appealing asymptotic properties. It performs well in numerical experiments and can be quickly computed in practice by a simple iterative procedure. Despite the many years the estimator has been studied in the statistics community, there was neither a non-asymptotic bound on the rate of the estimator nor a proof that the iterative procedure converges in polynomially many steps. Here we observe a surprising connection between Tyler’s M-estimator and operator scaling, which has been intensively studied in recent years in part because of its connections to the Brascamp-Lieb inequality in analysis. We use this connection, together with novel results on quantum expanders, to show that Tyler’s M-estimator has the optimal rate up to factors logarithmic in the dimension, and that in the generative model the iterative procedure has a linear convergence rate even without regularization.
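The "simple iterative procedure" mentioned in the abstract is Tyler's well-known fixed-point iteration, which repeatedly rescales the sample second-moment matrix by the Mahalanobis norms of the data points. A minimal NumPy sketch (the function name, tolerance, and trace normalization are illustrative choices, not taken from the paper):

```python
import numpy as np

def tylers_m_estimator(X, tol=1e-8, max_iter=500):
    """Tyler's fixed-point iteration for the shape matrix.

    X: (n, d) array of samples, assumed to have nonzero rows.
    Iterates  Sigma <- (d/n) * sum_i x_i x_i^T / (x_i^T Sigma^{-1} x_i),
    renormalized here so that trace(Sigma) = d (one common convention;
    the shape matrix is only defined up to scale).
    """
    n, d = X.shape
    sigma = np.eye(d)
    for _ in range(max_iter):
        inv = np.linalg.inv(sigma)
        # w[i] = x_i^T Sigma^{-1} x_i for each sample
        w = np.einsum('ij,jk,ik->i', X, inv, X)
        # (d/n) * sum_i x_i x_i^T / w[i]
        new = (d / n) * (X.T / w) @ X
        new *= d / np.trace(new)  # fix the scale
        if np.linalg.norm(new - sigma, ord='fro') < tol:
            return new
        sigma = new
    return sigma
```

The paper's contribution is not this procedure itself but a proof, via operator scaling and quantum expanders, that in the generative model it converges linearly and the estimator achieves a near-optimal rate.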

Cite this Paper


BibTeX
@InProceedings{pmlr-v125-franks20a,
  title     = {Rigorous Guarantees for Tyler’s M-Estimator via Quantum Expansion},
  author    = {Franks, William Cole and Moitra, Ankur},
  booktitle = {Proceedings of Thirty Third Conference on Learning Theory},
  pages     = {1601--1632},
  year      = {2020},
  editor    = {Abernethy, Jacob and Agarwal, Shivani},
  volume    = {125},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--12 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v125/franks20a/franks20a.pdf},
  url       = {https://proceedings.mlr.press/v125/franks20a.html},
  abstract  = {Estimating the shape of an elliptical distribution is a fundamental problem in statistics. One estimator for the shape matrix, Tyler’s M-estimator, has been shown to have many appealing asymptotic properties. It performs well in numerical experiments and can be quickly computed in practice by a simple iterative procedure. Despite the many years the estimator has been studied in the statistics community, there was neither a non-asymptotic bound on the rate of the estimator nor a proof that the iterative procedure converges in polynomially many steps. Here we observe a surprising connection between Tyler’s M-estimator and operator scaling, which has been intensively studied in recent years in part because of its connections to the Brascamp-Lieb inequality in analysis. We use this connection, together with novel results on quantum expanders, to show that Tyler’s M-estimator has the optimal rate up to factors logarithmic in the dimension, and that in the generative model the iterative procedure has a linear convergence rate even without regularization.}
}
Endnote
%0 Conference Paper
%T Rigorous Guarantees for Tyler’s M-Estimator via Quantum Expansion
%A William Cole Franks
%A Ankur Moitra
%B Proceedings of Thirty Third Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2020
%E Jacob Abernethy
%E Shivani Agarwal
%F pmlr-v125-franks20a
%I PMLR
%P 1601--1632
%U https://proceedings.mlr.press/v125/franks20a.html
%V 125
%X Estimating the shape of an elliptical distribution is a fundamental problem in statistics. One estimator for the shape matrix, Tyler’s M-estimator, has been shown to have many appealing asymptotic properties. It performs well in numerical experiments and can be quickly computed in practice by a simple iterative procedure. Despite the many years the estimator has been studied in the statistics community, there was neither a non-asymptotic bound on the rate of the estimator nor a proof that the iterative procedure converges in polynomially many steps. Here we observe a surprising connection between Tyler’s M-estimator and operator scaling, which has been intensively studied in recent years in part because of its connections to the Brascamp-Lieb inequality in analysis. We use this connection, together with novel results on quantum expanders, to show that Tyler’s M-estimator has the optimal rate up to factors logarithmic in the dimension, and that in the generative model the iterative procedure has a linear convergence rate even without regularization.
APA
Franks, W.C. & Moitra, A. (2020). Rigorous Guarantees for Tyler’s M-Estimator via Quantum Expansion. Proceedings of Thirty Third Conference on Learning Theory, in Proceedings of Machine Learning Research 125:1601-1632. Available from https://proceedings.mlr.press/v125/franks20a.html.

Related Material