On Compressive Ensemble Induced Regularisation: How Close is the Finite Ensemble Precision Matrix to the Infinite Ensemble?

Ata Kabán
Proceedings of the 28th International Conference on Algorithmic Learning Theory, PMLR 76:617-628, 2017.

Abstract

Averaging ensembles of randomly oriented low-dimensional projections of a singular covariance represents a novel and attractive means of obtaining a well-conditioned inverse, one that only needs access to random projections of the data. However, theoretical analyses so far have only been carried out at convergence, implying good properties for "large enough" ensembles. But how large is "large enough"? Here we bound the expected difference in spectral norm between the finite ensemble precision matrix and the infinite ensemble, and based on this we estimate the ensemble size required to guarantee that the approximation error of the finite ensemble stays below a given tolerance. Under mild assumptions, we find that for any given tolerance the ensemble only needs to grow linearly in the original data dimension. A technical ingredient of our analysis is an upper bound on the spectral norm of a matrix-variate T distributed random matrix, which we then employ in conjunction with specific results from random matrix theory on estimating the covariance of random matrices.
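
To make the object of study concrete, here is a minimal NumPy sketch of a finite compressive-ensemble precision matrix of the form (1/M) sum_{i=1}^{M} R_i^T (R_i S R_i^T)^{-1} R_i, where S is a (possibly singular) d x d covariance estimate and each R_i is a k x d random projection. The Gaussian choice of R_i, the function name ensemble_precision, and the toy dimensions are illustrative assumptions, not details taken from the paper; the paper's question is how fast this finite-M average approaches its M -> infinity limit in spectral norm.

import numpy as np

def ensemble_precision(Sigma_hat, k, M, rng=None):
    # Illustrative sketch (not from the paper): average M terms of the form
    # R^T (R Sigma_hat R^T)^{-1} R over random k x d Gaussian projections R.
    # Requires k <= rank(Sigma_hat) so the k x k block is invertible (a.s.).
    rng = np.random.default_rng() if rng is None else rng
    d = Sigma_hat.shape[0]
    acc = np.zeros((d, d))
    for _ in range(M):
        R = rng.standard_normal((k, d))         # random low-dimensional projection
        inner = R @ Sigma_hat @ R.T             # k x k compressed covariance
        acc += R.T @ np.linalg.solve(inner, R)  # invert in k dims, lift back to d dims
    return acc / M

# Toy check: with d = 50 but only n = 20 samples the sample covariance is
# singular, yet the ensemble average is well conditioned; a very large M
# serves as a proxy for the infinite ensemble.
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 50))
S = X.T @ X / 20
P_50 = ensemble_precision(S, k=10, M=50, rng=rng)
P_inf = ensemble_precision(S, k=10, M=5000, rng=rng)
print(np.linalg.norm(P_50 - P_inf, 2))          # the spectral-norm gap bounded in the paper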

Cite this Paper


BibTeX
@InProceedings{pmlr-v76-kabán17a,
  title     = {On Compressive Ensemble Induced Regularisation: How Close is the Finite Ensemble Precision Matrix to the Infinite Ensemble?},
  author    = {Kabán, Ata},
  booktitle = {Proceedings of the 28th International Conference on Algorithmic Learning Theory},
  pages     = {617--628},
  year      = {2017},
  editor    = {Hanneke, Steve and Reyzin, Lev},
  volume    = {76},
  series    = {Proceedings of Machine Learning Research},
  month     = {15--17 Oct},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v76/kabán17a/kabán17a.pdf},
  url       = {https://proceedings.mlr.press/v76/kab%C3%A1n17a.html}
}
Endnote
%0 Conference Paper
%T On Compressive Ensemble Induced Regularisation: How Close is the Finite Ensemble Precision Matrix to the Infinite Ensemble?
%A Ata Kabán
%B Proceedings of the 28th International Conference on Algorithmic Learning Theory
%C Proceedings of Machine Learning Research
%D 2017
%E Steve Hanneke
%E Lev Reyzin
%F pmlr-v76-kabán17a
%I PMLR
%P 617--628
%U https://proceedings.mlr.press/v76/kab%C3%A1n17a.html
%V 76
APA
Kabán, A. (2017). On Compressive Ensemble Induced Regularisation: How Close is the Finite Ensemble Precision Matrix to the Infinite Ensemble? Proceedings of the 28th International Conference on Algorithmic Learning Theory, in Proceedings of Machine Learning Research 76:617-628. Available from https://proceedings.mlr.press/v76/kab%C3%A1n17a.html.
