Approximate is Good Enough: Probabilistic Variants of Dimensional and Margin Complexity
Proceedings of Thirty Third Conference on Learning Theory, PMLR 125:2236-2262, 2020.
Abstract
We present and study approximate notions of dimensional and margin complexity, which correspond to the minimal dimension or norm of an embedding required to *approximate*, rather than exactly represent, a given hypothesis class. We show that such notions are not only sufficient for learning using linear predictors or a kernel, but, unlike the exact variants, are also necessary. Thus they are better suited for discussing limitations of linear or kernel methods.
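For illustration only, here is a sketch of one standard way such notions are formalized (the paper's precise definitions may differ): the exact dimension complexity of a class \(\mathcal{H}\) over a domain \(\mathcal{X}\) is the smallest dimension \(d\) admitting an embedding \(\phi:\mathcal{X}\to\mathbb{R}^d\) that sign-represents every \(h\in\mathcal{H}\), while an approximate variant asks only for agreement up to an \(\epsilon\) fraction of the domain under a distribution \(\mathcal{D}\):
\[
\mathrm{dc}(\mathcal{H}) \;=\; \min\bigl\{ d :\ \exists\, \phi:\mathcal{X}\to\mathbb{R}^d,\ \forall h\in\mathcal{H}\ \exists\, w_h\in\mathbb{R}^d,\ \operatorname{sign}\langle w_h,\phi(x)\rangle = h(x)\ \ \forall x\in\mathcal{X} \bigr\},
\]
\[
\mathrm{dc}_{\epsilon,\mathcal{D}}(\mathcal{H}) \;=\; \min\bigl\{ d :\ \exists\, \phi:\mathcal{X}\to\mathbb{R}^d,\ \forall h\in\mathcal{H}\ \exists\, w_h\in\mathbb{R}^d,\ \Pr_{x\sim\mathcal{D}}\bigl[\operatorname{sign}\langle w_h,\phi(x)\rangle \neq h(x)\bigr] \le \epsilon \bigr\}.
\]
The analogous margin complexity replaces the minimal dimension with the minimal norm (margin) of such an embedding; the approximate versions relax exact representation in the same way.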