Topic Modeling via Full Dependence Mixtures

Dan Fisher, Mark Kozdoba, Shie Mannor
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:3188-3198, 2020.

Abstract

In this paper we introduce a new approach to topic modeling that scales to large datasets by using a compact representation of the data and by leveraging the GPU architecture. In this approach, topics are learned directly from the co-occurrence data of the corpus. In particular, we introduce a novel mixture model which we term the Full Dependence Mixture (FDM) model. FDMs model the second moment of the data under general generative assumptions. While there is previous work on topic modeling using second moments, we develop a direct stochastic optimization procedure for fitting an FDM with a single Kullback-Leibler objective. Moment methods in general have the benefit that an iteration no longer needs to scale with the size of the corpus. Our approach allows us to leverage standard optimizers and GPUs for the problem of topic modeling. In particular, we evaluate the approach on two large datasets, NeurIPS papers and a Twitter corpus, with a large number of topics, and show that the approach performs comparably to, or better than, the standard benchmarks.
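The general idea of fitting topics to a word co-occurrence (second-moment) matrix under a Kullback-Leibler objective can be illustrated with a small sketch. The code below is not the authors' FDM optimizer; it uses the classical Lee-Seung multiplicative updates for KL-divergence NMF as a stand-in moment-fitting procedure, and all function names and the toy data are illustrative.

```python
import numpy as np

def kl_div(V, WH, eps=1e-10):
    """Generalized KL divergence D(V || WH)."""
    return float(np.sum(V * np.log((V + eps) / (WH + eps)) - V + WH))

def kl_nmf(V, k, n_iter=200, seed=0):
    """Fit V ~= W @ H with nonnegative factors by minimizing KL(V || W @ H).

    Lee-Seung multiplicative updates; each update monotonically
    decreases the KL objective. V is a nonnegative (e.g. word
    co-occurrence count) matrix, k is the number of topics.
    """
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 0.1
    H = rng.random((k, m)) + 0.1
    eps = 1e-10
    for _ in range(n_iter):
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.sum(axis=0)[:, None] + eps)
        WH = W @ H + eps
        W *= ((V / WH) @ H.T) / (H.sum(axis=1)[None, :] + eps)
    return W, H

# Toy co-occurrence matrix built from two planted "topics":
# words {0,1} co-occur, words {2,3} co-occur.
V = np.array([[4.0, 3.0, 0.0, 0.0],
              [3.0, 4.0, 0.0, 0.0],
              [0.0, 0.0, 4.0, 3.0],
              [0.0, 0.0, 3.0, 4.0]])
W, H = kl_nmf(V, k=2)
# The KL objective after fitting is lower than at an early iterate.
```

Note the contrast with the paper's setting: here each multiplicative update touches the full matrix V, whereas the paper's point is that once the corpus is summarized by its co-occurrence statistics, a stochastic optimizer on a GPU can fit the model without iterations that scale with corpus size.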

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-fisher20a,
  title     = {Topic Modeling via Full Dependence Mixtures},
  author    = {Fisher, Dan and Kozdoba, Mark and Mannor, Shie},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {3188--3198},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/fisher20a/fisher20a.pdf},
  url       = {https://proceedings.mlr.press/v119/fisher20a.html},
  abstract  = {In this paper we introduce a new approach to topic modelling that scales to large datasets by using a compact representation of the data and by leveraging the GPU architecture. In this approach, topics are learned directly from the co-occurrence data of the corpus. In particular, we introduce a novel mixture model which we term the Full Dependence Mixture (FDM) model. FDMs model second moment under general generative assumptions on the data. While there is previous work on topic modeling using second moments, we develop a direct stochastic optimization procedure for fitting an FDM with a single Kullback Leibler objective. Moment methods in general have the benefit that an iteration no longer needs to scale with the size of the corpus. Our approach allows us to leverage standard optimizers and GPUs for the problem of topic modeling. In particular, we evaluate the approach on two large datasets, NeurIPS papers and a Twitter corpus, with a large number of topics, and show that the approach performs comparably or better than the standard benchmarks.}
}
Endnote
%0 Conference Paper
%T Topic Modeling via Full Dependence Mixtures
%A Dan Fisher
%A Mark Kozdoba
%A Shie Mannor
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-fisher20a
%I PMLR
%P 3188--3198
%U https://proceedings.mlr.press/v119/fisher20a.html
%V 119
%X In this paper we introduce a new approach to topic modelling that scales to large datasets by using a compact representation of the data and by leveraging the GPU architecture. In this approach, topics are learned directly from the co-occurrence data of the corpus. In particular, we introduce a novel mixture model which we term the Full Dependence Mixture (FDM) model. FDMs model second moment under general generative assumptions on the data. While there is previous work on topic modeling using second moments, we develop a direct stochastic optimization procedure for fitting an FDM with a single Kullback Leibler objective. Moment methods in general have the benefit that an iteration no longer needs to scale with the size of the corpus. Our approach allows us to leverage standard optimizers and GPUs for the problem of topic modeling. In particular, we evaluate the approach on two large datasets, NeurIPS papers and a Twitter corpus, with a large number of topics, and show that the approach performs comparably or better than the standard benchmarks.
APA
Fisher, D., Kozdoba, M. & Mannor, S. (2020). Topic Modeling via Full Dependence Mixtures. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:3188-3198. Available from https://proceedings.mlr.press/v119/fisher20a.html.