Neural Topic Model with Attention for Supervised Learning

Xinyi Wang, Yi Yang
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:1147-1156, 2020.

Abstract

Topic modeling utilizing neural variational inference has shown promising results recently. Unlike traditional Bayesian topic models, neural topic models use deep neural networks to approximate the intractable marginal distribution and thus gain strong generalisation ability. However, neural topic models are unsupervised models, and directly using the document-specific topic proportions in downstream prediction tasks can lead to sub-optimal performance. This paper presents the Topic Attention Model (TAM), a supervised neural topic model that integrates an attention-based recurrent neural network (RNN). We design a novel way to utilize the document-specific topic proportions and global topic vectors learned by the neural topic model in the attention mechanism. We also develop a backpropagation inference method that allows for joint model optimisation. Experimental results on three public datasets show that TAM not only significantly improves supervised learning tasks, including classification and regression, but also achieves lower perplexity for document modeling.
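The abstract does not give TAM's exact architecture, but the core idea it describes — using a document's topic proportions together with global topic vectors to drive attention over RNN hidden states — can be illustrated with a minimal, hypothetical NumPy sketch. All names and shapes below (`topic_guided_attention`, the mixture-query formulation) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array
    e = np.exp(x - x.max())
    return e / e.sum()

def topic_guided_attention(hidden_states, topic_proportions, topic_vectors):
    """Attend over RNN hidden states with a topic-informed query.

    hidden_states:     (T, d) RNN outputs for T tokens.
    topic_proportions: (K,)   document-specific topic mixture (sums to 1).
    topic_vectors:     (K, d) global topic embeddings.

    Returns a (d,) pooled document representation and the (T,) weights.
    """
    # One plausible reading of the abstract: form a document-level query
    # as the topic-proportion-weighted mixture of global topic vectors.
    query = topic_proportions @ topic_vectors      # (d,)
    scores = hidden_states @ query                 # (T,) dot-product scores
    weights = softmax(scores)                      # attention distribution
    context = weights @ hidden_states              # (d,) pooled representation
    return context, weights
```

Because every operation here is differentiable, the same construction written in an autodiff framework would support the joint end-to-end optimisation the abstract mentions; the pooled `context` vector would then feed a classification or regression head.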

Cite this Paper


BibTeX
@InProceedings{pmlr-v108-wang20c,
  title     = {Neural Topic Model with Attention for Supervised Learning},
  author    = {Wang, Xinyi and Yang, Yi},
  booktitle = {Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics},
  pages     = {1147--1156},
  year      = {2020},
  editor    = {Chiappa, Silvia and Calandra, Roberto},
  volume    = {108},
  series    = {Proceedings of Machine Learning Research},
  month     = {26--28 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v108/wang20c/wang20c.pdf},
  url       = {https://proceedings.mlr.press/v108/wang20c.html},
  abstract  = {Topic modeling utilizing neural variational inference has shown promising results recently. Unlike traditional Bayesian topic models, neural topic models use deep neural networks to approximate the intractable marginal distribution and thus gain strong generalisation ability. However, neural topic models are unsupervised models, and directly using the document-specific topic proportions in downstream prediction tasks can lead to sub-optimal performance. This paper presents the Topic Attention Model (TAM), a supervised neural topic model that integrates an attention-based recurrent neural network (RNN). We design a novel way to utilize the document-specific topic proportions and global topic vectors learned by the neural topic model in the attention mechanism. We also develop a backpropagation inference method that allows for joint model optimisation. Experimental results on three public datasets show that TAM not only significantly improves supervised learning tasks, including classification and regression, but also achieves lower perplexity for document modeling.}
}
EndNote
%0 Conference Paper
%T Neural Topic Model with Attention for Supervised Learning
%A Xinyi Wang
%A Yi Yang
%B Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2020
%E Silvia Chiappa
%E Roberto Calandra
%F pmlr-v108-wang20c
%I PMLR
%P 1147--1156
%U https://proceedings.mlr.press/v108/wang20c.html
%V 108
%X Topic modeling utilizing neural variational inference has shown promising results recently. Unlike traditional Bayesian topic models, neural topic models use deep neural networks to approximate the intractable marginal distribution and thus gain strong generalisation ability. However, neural topic models are unsupervised models, and directly using the document-specific topic proportions in downstream prediction tasks can lead to sub-optimal performance. This paper presents the Topic Attention Model (TAM), a supervised neural topic model that integrates an attention-based recurrent neural network (RNN). We design a novel way to utilize the document-specific topic proportions and global topic vectors learned by the neural topic model in the attention mechanism. We also develop a backpropagation inference method that allows for joint model optimisation. Experimental results on three public datasets show that TAM not only significantly improves supervised learning tasks, including classification and regression, but also achieves lower perplexity for document modeling.
APA
Wang, X. & Yang, Y. (2020). Neural Topic Model with Attention for Supervised Learning. Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 108:1147-1156. Available from https://proceedings.mlr.press/v108/wang20c.html.