Risk Bounds for Learning Multiple Components with Permutation-Invariant Losses

Fabien Lauer
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:1178-1187, 2020.

Abstract

This paper proposes a simple approach to derive efficient error bounds for learning multiple components with sparsity-inducing regularization. We show that for such regularization schemes, known decompositions of the Rademacher complexity over the components can be used in a more efficient manner to result in tighter bounds without too much effort. We give examples of application to switching regression and center-based clustering/vector quantization. Then, the complete workflow is illustrated on the problem of subspace clustering, for which decomposition results were not previously available. For all these problems, the proposed approach yields risk bounds with mild dependencies on the number of components and completely removes this dependence for nonconvex regularization schemes that could not be handled by previous methods.

Cite this Paper


BibTeX
@InProceedings{pmlr-v108-lauer20a,
  title     = {Risk Bounds for Learning Multiple Components with Permutation-Invariant Losses},
  author    = {Lauer, Fabien},
  booktitle = {Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics},
  pages     = {1178--1187},
  year      = {2020},
  editor    = {Silvia Chiappa and Roberto Calandra},
  volume    = {108},
  series    = {Proceedings of Machine Learning Research},
  month     = {26--28 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v108/lauer20a/lauer20a.pdf},
  url       = {http://proceedings.mlr.press/v108/lauer20a.html},
  abstract  = {This paper proposes a simple approach to derive efficient error bounds for learning multiple components with sparsity-inducing regularization. We show that for such regularization schemes, known decompositions of the Rademacher complexity over the components can be used in a more efficient manner to result in tighter bounds without too much effort. We give examples of application to switching regression and center-based clustering/vector quantization. Then, the complete workflow is illustrated on the problem of subspace clustering, for which decomposition results were not previously available. For all these problems, the proposed approach yields risk bounds with mild dependencies on the number of components and completely removes this dependence for nonconvex regularization schemes that could not be handled by previous methods.}
}
Endnote
%0 Conference Paper
%T Risk Bounds for Learning Multiple Components with Permutation-Invariant Losses
%A Fabien Lauer
%B Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2020
%E Silvia Chiappa
%E Roberto Calandra
%F pmlr-v108-lauer20a
%I PMLR
%P 1178--1187
%U http://proceedings.mlr.press/v108/lauer20a.html
%V 108
%X This paper proposes a simple approach to derive efficient error bounds for learning multiple components with sparsity-inducing regularization. We show that for such regularization schemes, known decompositions of the Rademacher complexity over the components can be used in a more efficient manner to result in tighter bounds without too much effort. We give examples of application to switching regression and center-based clustering/vector quantization. Then, the complete workflow is illustrated on the problem of subspace clustering, for which decomposition results were not previously available. For all these problems, the proposed approach yields risk bounds with mild dependencies on the number of components and completely removes this dependence for nonconvex regularization schemes that could not be handled by previous methods.
APA
Lauer, F. (2020). Risk Bounds for Learning Multiple Components with Permutation-Invariant Losses. Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 108:1178-1187. Available from http://proceedings.mlr.press/v108/lauer20a.html.