Closed-form Marginal Likelihood in Gamma-Poisson Matrix Factorization

Louis Filstroff, Alberto Lumbreras, Cédric Févotte
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:1506-1514, 2018.

Abstract

We present novel understandings of the Gamma-Poisson (GaP) model, a probabilistic matrix factorization model for count data. We show that GaP can be rewritten free of the score/activation matrix. This gives us new insights about the estimation of the topic/dictionary matrix by maximum marginal likelihood estimation. In particular, this explains the robustness of this estimator to over-specified values of the factorization rank, especially its ability to automatically prune irrelevant dictionary columns, as empirically observed in previous work. The marginalization of the activation matrix leads in turn to a new Monte Carlo Expectation-Maximization algorithm with favorable properties.
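For reference, the Gamma-Poisson model discussed in the abstract is conventionally specified as follows. This is a minimal sketch in the usual GaP notation, where V is the F-by-N count matrix, W the F-by-K nonnegative dictionary, and H the K-by-N nonnegative activation matrix; the Gamma parameterization shown (shape alpha_k, rate beta_k) is an assumption and may differ from the paper's exact conventions:

\begin{align}
  h_{kn} &\sim \mathrm{Gamma}(\alpha_k, \beta_k), \\
  v_{fn} \mid \mathbf{W}, \mathbf{H} &\sim \mathrm{Poisson}\!\left([\mathbf{W}\mathbf{H}]_{fn}\right)
    = \mathrm{Poisson}\!\Big(\sum\nolimits_{k=1}^{K} w_{fk}\, h_{kn}\Big).
\end{align}

The marginal likelihood whose closed form is the subject of the paper is obtained by integrating out the activations,

\begin{equation}
  p(\mathbf{V} \mid \mathbf{W}) = \int p(\mathbf{V} \mid \mathbf{W}, \mathbf{H})\, p(\mathbf{H})\, \mathrm{d}\mathbf{H},
\end{equation}

so that maximum marginal likelihood estimation of W maximizes p(V | W) rather than the joint likelihood p(V | W, H).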

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-filstroff18a,
  title     = {Closed-form Marginal Likelihood in Gamma-Poisson Matrix Factorization},
  author    = {Filstroff, Louis and Lumbreras, Alberto and F{\'e}votte, C{\'e}dric},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {1506--1514},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/filstroff18a/filstroff18a.pdf},
  url       = {https://proceedings.mlr.press/v80/filstroff18a.html}
}
Endnote
%0 Conference Paper
%T Closed-form Marginal Likelihood in Gamma-Poisson Matrix Factorization
%A Louis Filstroff
%A Alberto Lumbreras
%A Cédric Févotte
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-filstroff18a
%I PMLR
%P 1506--1514
%U https://proceedings.mlr.press/v80/filstroff18a.html
%V 80
APA
Filstroff, L., Lumbreras, A. & Févotte, C. (2018). Closed-form Marginal Likelihood in Gamma-Poisson Matrix Factorization. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:1506-1514. Available from https://proceedings.mlr.press/v80/filstroff18a.html.

Related Material

Download PDF: http://proceedings.mlr.press/v80/filstroff18a/filstroff18a.pdf