Dictionary Learning Based on Sparse Distribution Tomography

Pedram Pad, Farnood Salehi, Elisa Celis, Patrick Thiran, Michael Unser
Proceedings of the 34th International Conference on Machine Learning, PMLR 70:2731-2740, 2017.

Abstract

We propose a new statistical dictionary learning algorithm for sparse signals that is based on an $\alpha$-stable innovation model. The parameters of the underlying model—that is, the atoms of the dictionary, the sparsity index $\alpha$ and the dispersion of the transform-domain coefficients—are recovered using a new type of probability distribution tomography. Specifically, we drive our estimator with a series of random projections of the data, which results in an efficient algorithm. Moreover, since the projections are achieved using linear combinations, we can invoke the generalized central limit theorem to justify the use of our method for sparse signals that are not necessarily $\alpha$-stable. We evaluate our algorithm by performing two types of experiments: image in-painting and image denoising. In both cases, we find that our approach is competitive with state-of-the-art dictionary learning techniques. Beyond the algorithm itself, two aspects of this study are interesting in their own right. The first is our statistical formulation of the problem, which unifies the topics of dictionary learning and independent component analysis. The second is a generalization of a classical theorem about isometries of $\ell_p$-norms that constitutes the foundation of our approach.
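The projection principle the abstract alludes to can be illustrated numerically (this is a minimal sketch of the underlying stability property, not the paper's algorithm). For an i.i.d. symmetric $\alpha$-stable vector with dispersion $\gamma$, any linear projection $u^\top x$ is again one-dimensional $\alpha$-stable with scale $\gamma\,(\sum_i |u_i|^\alpha)^{1/\alpha}$, so scale estimates along random directions carry information about the model parameters. The sketch below takes the special case $\alpha = 1$ (Cauchy), where the median of the absolute projected samples is a robust estimator of the scale; the dimension, sample size, and seed are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

d, n = 16, 200_000      # dimension, number of samples (illustrative)
gamma = 2.0             # dispersion (scale) of each coefficient

# i.i.d. symmetric alpha-stable coefficients with alpha = 1 (Cauchy)
X = gamma * rng.standard_cauchy(size=(n, d))

# one random projection direction
u = rng.standard_normal(d)

# For alpha = 1, the projection u.x is Cauchy with scale gamma * sum(|u_i|)
y = X @ u
expected_scale = gamma * np.abs(u).sum()

# The median of |Cauchy(scale)| equals the scale, giving a robust estimate
estimated_scale = np.median(np.abs(y))

print(estimated_scale, expected_scale)  # the two values agree closely
```

Repeating this for many random directions $u$ yields the "series of random projections" that drives the estimator described in the abstract.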

Cite this Paper


BibTeX
@InProceedings{pmlr-v70-pad17a,
  title     = {Dictionary Learning Based on Sparse Distribution Tomography},
  author    = {Pedram Pad and Farnood Salehi and Elisa Celis and Patrick Thiran and Michael Unser},
  booktitle = {Proceedings of the 34th International Conference on Machine Learning},
  pages     = {2731--2740},
  year      = {2017},
  editor    = {Precup, Doina and Teh, Yee Whye},
  volume    = {70},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--11 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v70/pad17a/pad17a.pdf},
  url       = {https://proceedings.mlr.press/v70/pad17a.html},
  abstract  = {We propose a new statistical dictionary learning algorithm for sparse signals that is based on an $\alpha$-stable innovation model. The parameters of the underlying model—that is, the atoms of the dictionary, the sparsity index $\alpha$ and the dispersion of the transform-domain coefficients—are recovered using a new type of probability distribution tomography. Specifically, we drive our estimator with a series of random projections of the data, which results in an efficient algorithm. Moreover, since the projections are achieved using linear combinations, we can invoke the generalized central limit theorem to justify the use of our method for sparse signals that are not necessarily $\alpha$-stable. We evaluate our algorithm by performing two types of experiments: image in-painting and image denoising. In both cases, we find that our approach is competitive with state-of-the-art dictionary learning techniques. Beyond the algorithm itself, two aspects of this study are interesting in their own right. The first is our statistical formulation of the problem, which unifies the topics of dictionary learning and independent component analysis. The second is a generalization of a classical theorem about isometries of $\ell_p$-norms that constitutes the foundation of our approach.}
}
EndNote
%0 Conference Paper
%T Dictionary Learning Based on Sparse Distribution Tomography
%A Pedram Pad
%A Farnood Salehi
%A Elisa Celis
%A Patrick Thiran
%A Michael Unser
%B Proceedings of the 34th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2017
%E Doina Precup
%E Yee Whye Teh
%F pmlr-v70-pad17a
%I PMLR
%P 2731--2740
%U https://proceedings.mlr.press/v70/pad17a.html
%V 70
%X We propose a new statistical dictionary learning algorithm for sparse signals that is based on an $\alpha$-stable innovation model. The parameters of the underlying model—that is, the atoms of the dictionary, the sparsity index $\alpha$ and the dispersion of the transform-domain coefficients—are recovered using a new type of probability distribution tomography. Specifically, we drive our estimator with a series of random projections of the data, which results in an efficient algorithm. Moreover, since the projections are achieved using linear combinations, we can invoke the generalized central limit theorem to justify the use of our method for sparse signals that are not necessarily $\alpha$-stable. We evaluate our algorithm by performing two types of experiments: image in-painting and image denoising. In both cases, we find that our approach is competitive with state-of-the-art dictionary learning techniques. Beyond the algorithm itself, two aspects of this study are interesting in their own right. The first is our statistical formulation of the problem, which unifies the topics of dictionary learning and independent component analysis. The second is a generalization of a classical theorem about isometries of $\ell_p$-norms that constitutes the foundation of our approach.
APA
Pad, P., Salehi, F., Celis, E., Thiran, P. &amp; Unser, M. (2017). Dictionary Learning Based on Sparse Distribution Tomography. Proceedings of the 34th International Conference on Machine Learning, in Proceedings of Machine Learning Research 70:2731-2740. Available from https://proceedings.mlr.press/v70/pad17a.html.