The Generalization Error of Dictionary Learning with Moreau Envelopes
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:1617-1625, 2018.
Abstract
This is a theoretical study of the sample complexity of dictionary learning with a general type of reconstruction loss. The goal is to estimate an m×d matrix D of unit-norm columns when the only available information is a set of training samples. Points x in \mathbb{R}^m are subsequently approximated by the linear combination Da after solving the problem \min_{a} \Phi(x - Da) + g(a); the function g:\mathbb{R}^d \to [0,+\infty) is either an indicator function or a sparsity-promoting regularizer. Considered here is the case where \Phi(x) = \inf_{z \in \mathbb{R}^m} { ||x-z||_2^2 + h(||z||_2) } and h is an even univariate function on the real line. Connections are drawn between \Phi and the Moreau envelope of h. A new sample complexity result for the k-sparse dictionary problem removes the spurious condition on the coherence of D that appears in previous works. Finally, comments are made on the approximation error of certain families of losses. The derived generalization bounds are of order \mathcal{O}(\sqrt{\log n / n}) and hold without any further restrictions on the set of dictionaries with unit-norm columns.
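As a minimal numerical sketch of the loss \Phi defined above, the snippet below evaluates \Phi(x) = \inf_{z} { ||x-z||_2^2 + h(||z||_2) } for the illustrative choice h(t) = \lambda |t| (an assumption for concreteness; the paper treats more general even h). For this h, \Phi reduces to a Huber-like function: by rotational symmetry the infimum over z in \mathbb{R}^m collapses to a one-dimensional problem along the ray through x, with closed form ||x||_2^2 when ||x||_2 \le \lambda/2 and \lambda ||x||_2 - \lambda^2/4 otherwise.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def phi_numeric(x, lam):
    """Phi(x) = inf_z ||x - z||_2^2 + lam * ||z||_2, computed numerically.

    By rotational symmetry the minimizer lies on the ray z = t * x / ||x||
    with t >= 0, so the m-dimensional infimum reduces to a 1-D problem in t.
    """
    r = np.linalg.norm(x)
    objective = lambda t: (r - t) ** 2 + lam * t
    res = minimize_scalar(objective, bounds=(0.0, r + lam), method="bounded")
    return res.fun

def phi_closed_form(x, lam):
    """Huber-like closed form: quadratic near the origin, linear in the tail."""
    r = np.linalg.norm(x)
    return r ** 2 if r <= lam / 2 else lam * r - lam ** 2 / 4
```

The agreement of the two routines illustrates the connection the abstract draws between \Phi and a Moreau-type envelope of h: a nonsmooth penalty on ||z||_2 induces a smoothed loss in x.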