Gibbs Max-Margin Topic Models with Fast Sampling Algorithms

Jun Zhu, Ning Chen, Hugh Perkins, Bo Zhang
Proceedings of the 30th International Conference on Machine Learning, PMLR 28(1):124-132, 2013.

Abstract

Existing max-margin supervised topic models rely on an iterative procedure to solve multiple latent SVM subproblems with additional mean-field assumptions on the desired posterior distributions. This paper presents Gibbs max-margin supervised topic models by minimizing an expected margin loss, an upper bound of the existing margin loss derived from an expected prediction rule. By introducing augmented variables, we develop simple and fast Gibbs sampling algorithms with no restricting assumptions and no need to solve SVM subproblems for both classification and regression. Empirical results demonstrate significant improvements on time efficiency. The classification performance is also significantly improved over competitors.
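The abstract's key algorithmic point is that, after introducing augmented variables, the max-margin classifier can be sampled with simple Gibbs conditionals instead of solving SVM subproblems. The following is a minimal sketch (not the authors' code) of that data-augmentation idea for binary classification: the hinge loss is written as a scale mixture of Gaussians in the style of Polson and Scott, so the classifier weights and per-document augmentation variables have closed-form Gaussian and inverse-Gaussian conditionals. Topic proportions are assumed fixed here; the function name, hyperparameter defaults, and the exact constants in the conditionals are illustrative assumptions and may differ in detail from the paper's updates.

```python
# Sketch of the hinge-loss data-augmentation Gibbs step (assumption: topic
# proportions zbar are held fixed; in the full model they are resampled by a
# collapsed Gibbs step in the same sweep). Not the authors' implementation.
import numpy as np

def gibbs_classifier_sweeps(zbar, y, ell=1.0, c=1.0, nu2=1.0, n_sweeps=200, seed=0):
    """zbar: (D, K) empirical topic proportions; y: (D,) labels in {-1, +1}."""
    rng = np.random.default_rng(seed)
    D, K = zbar.shape
    eta = np.zeros(K)                     # classifier weights
    lam = np.ones(D)                      # per-document augmentation variables
    for _ in range(n_sweeps):
        # 1) Augmentation variables: 1/lambda_d | rest is inverse Gaussian
        #    (numpy's Wald distribution), centered at 1 / (c * |margin slack|).
        zeta = ell - y * (zbar @ eta)
        mean_ig = 1.0 / (c * np.maximum(np.abs(zeta), 1e-8))
        lam = 1.0 / rng.wald(mean_ig, 1.0)
        # 2) Classifier weights: Gaussian full conditional combining the prior
        #    N(0, nu2 I) with the augmented-loss pseudo-likelihood terms.
        w = c**2 / lam                    # per-document precisions from the loss
        Sigma_inv = np.eye(K) / nu2 + (zbar * w[:, None]).T @ zbar
        rhs = zbar.T @ (c * y * (lam + c * ell) / lam)
        Sigma = np.linalg.inv(Sigma_inv)
        eta = rng.multivariate_normal(Sigma @ rhs, Sigma)
    return eta

# Toy usage: two well-separated clusters of "topic proportions".
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    zbar = np.vstack([rng.dirichlet([5, 1, 1], 50), rng.dirichlet([1, 1, 5], 50)])
    y = np.array([1] * 50 + [-1] * 50)
    eta = gibbs_classifier_sweeps(zbar, y)
    print("training accuracy:", np.mean(np.sign(zbar @ eta) == y))
```

Each sweep costs only a batch of inverse-Gaussian draws and one K-dimensional Gaussian draw, which is the source of the time-efficiency gains the abstract reports relative to solving a latent SVM at every iteration.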

Cite this Paper


BibTeX
@InProceedings{pmlr-v28-zhu13,
  title     = {{G}ibbs Max-Margin Topic Models with Fast Sampling Algorithms},
  author    = {Zhu, Jun and Chen, Ning and Perkins, Hugh and Zhang, Bo},
  booktitle = {Proceedings of the 30th International Conference on Machine Learning},
  pages     = {124--132},
  year      = {2013},
  editor    = {Dasgupta, Sanjoy and McAllester, David},
  volume    = {28},
  number    = {1},
  series    = {Proceedings of Machine Learning Research},
  address   = {Atlanta, Georgia, USA},
  month     = {17--19 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v28/zhu13.pdf},
  url       = {https://proceedings.mlr.press/v28/zhu13.html},
  abstract  = {Existing max-margin supervised topic models rely on an iterative procedure to solve multiple latent SVM subproblems with additional mean-field assumptions on the desired posterior distributions. This paper presents Gibbs max-margin supervised topic models by minimizing an expected margin loss, an upper bound of the existing margin loss derived from an expected prediction rule. By introducing augmented variables, we develop simple and fast Gibbs sampling algorithms with no restricting assumptions and no need to solve SVM subproblems for both classification and regression. Empirical results demonstrate significant improvements on time efficiency. The classification performance is also significantly improved over competitors.}
}
Endnote
%0 Conference Paper
%T Gibbs Max-Margin Topic Models with Fast Sampling Algorithms
%A Jun Zhu
%A Ning Chen
%A Hugh Perkins
%A Bo Zhang
%B Proceedings of the 30th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2013
%E Sanjoy Dasgupta
%E David McAllester
%F pmlr-v28-zhu13
%I PMLR
%P 124--132
%U https://proceedings.mlr.press/v28/zhu13.html
%V 28
%N 1
%X Existing max-margin supervised topic models rely on an iterative procedure to solve multiple latent SVM subproblems with additional mean-field assumptions on the desired posterior distributions. This paper presents Gibbs max-margin supervised topic models by minimizing an expected margin loss, an upper bound of the existing margin loss derived from an expected prediction rule. By introducing augmented variables, we develop simple and fast Gibbs sampling algorithms with no restricting assumptions and no need to solve SVM subproblems for both classification and regression. Empirical results demonstrate significant improvements on time efficiency. The classification performance is also significantly improved over competitors.
RIS
TY  - CPAPER
TI  - Gibbs Max-Margin Topic Models with Fast Sampling Algorithms
AU  - Jun Zhu
AU  - Ning Chen
AU  - Hugh Perkins
AU  - Bo Zhang
BT  - Proceedings of the 30th International Conference on Machine Learning
DA  - 2013/02/13
ED  - Sanjoy Dasgupta
ED  - David McAllester
ID  - pmlr-v28-zhu13
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 28
IS  - 1
SP  - 124
EP  - 132
L1  - http://proceedings.mlr.press/v28/zhu13.pdf
UR  - https://proceedings.mlr.press/v28/zhu13.html
AB  - Existing max-margin supervised topic models rely on an iterative procedure to solve multiple latent SVM subproblems with additional mean-field assumptions on the desired posterior distributions. This paper presents Gibbs max-margin supervised topic models by minimizing an expected margin loss, an upper bound of the existing margin loss derived from an expected prediction rule. By introducing augmented variables, we develop simple and fast Gibbs sampling algorithms with no restricting assumptions and no need to solve SVM subproblems for both classification and regression. Empirical results demonstrate significant improvements on time efficiency. The classification performance is also significantly improved over competitors.
ER  -
APA
Zhu, J., Chen, N., Perkins, H. & Zhang, B. (2013). Gibbs Max-Margin Topic Models with Fast Sampling Algorithms. Proceedings of the 30th International Conference on Machine Learning, in Proceedings of Machine Learning Research 28(1):124-132. Available from https://proceedings.mlr.press/v28/zhu13.html.
