Approximate is Good Enough: Probabilistic Variants of Dimensional and Margin Complexity

Pritish Kamath, Omar Montasser, Nathan Srebro
Proceedings of Thirty Third Conference on Learning Theory, PMLR 125:2236-2262, 2020.

Abstract

We present and study approximate notions of dimensional and margin complexity, which correspond to the minimal dimension or norm of an embedding required to {\em approximate}, rather than exactly represent, a given hypothesis class. We show that such notions are not only sufficient for learning using linear predictors or a kernel, but unlike the exact variants, are also necessary. Thus they are better suited for discussing limitations of linear or kernel methods.

Cite this Paper


BibTeX
@InProceedings{pmlr-v125-kamath20b,
  title     = {{Approximate is Good Enough: Probabilistic Variants of Dimensional and Margin Complexity}},
  author    = {Kamath, Pritish and Montasser, Omar and Srebro, Nathan},
  pages     = {2236--2262},
  year      = {2020},
  editor    = {Jacob Abernethy and Shivani Agarwal},
  volume    = {125},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--12 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v125/kamath20b/kamath20b.pdf},
  url       = {http://proceedings.mlr.press/v125/kamath20b.html},
  abstract  = {We present and study approximate notions of dimensional and margin complexity, which correspond to the minimal dimension or norm of an embedding required to {\em approximate}, rather than exactly represent, a given hypothesis class. We show that such notions are not only sufficient for learning using linear predictors or a kernel, but unlike the exact variants, are also necessary. Thus they are better suited for discussing limitations of linear or kernel methods.}
}
Endnote
%0 Conference Paper %T Approximate is Good Enough: Probabilistic Variants of Dimensional and Margin Complexity %A Pritish Kamath %A Omar Montasser %A Nathan Srebro %B Proceedings of Thirty Third Conference on Learning Theory %C Proceedings of Machine Learning Research %D 2020 %E Jacob Abernethy %E Shivani Agarwal %F pmlr-v125-kamath20b %I PMLR %J Proceedings of Machine Learning Research %P 2236--2262 %U http://proceedings.mlr.press %V 125 %W PMLR %X We present and study approximate notions of dimensional and margin complexity, which correspond to the minimal dimension or norm of an embedding required to {\em approximate}, rather than exactly represent, a given hypothesis class. We show that such notions are not only sufficient for learning using linear predictors or a kernel, but unlike the exact variants, are also necessary. Thus they are better suited for discussing limitations of linear or kernel methods.
APA
Kamath, P., Montasser, O. &amp; Srebro, N. (2020). Approximate is Good Enough: Probabilistic Variants of Dimensional and Margin Complexity. Proceedings of Thirty Third Conference on Learning Theory, in PMLR 125:2236-2262.