Paired-Dual Learning for Fast Training of Latent Variable Hinge-Loss MRFs

Stephen Bach, Bert Huang, Jordan Boyd-Graber, Lise Getoor
Proceedings of the 32nd International Conference on Machine Learning, PMLR 37:381-390, 2015.

Abstract

Latent variables allow probabilistic graphical models to capture nuance and structure in important domains such as network science, natural language processing, and computer vision. Naive approaches to learning such complex models can be prohibitively expensive—because they require repeated inferences to update beliefs about latent variables—so lifting this restriction for useful classes of models is an important problem. Hinge-loss Markov random fields (HL-MRFs) are graphical models that allow highly scalable inference and learning in structured domains, in part by representing structured problems with continuous variables. However, this representation leads to challenges when learning with latent variables. We introduce paired-dual learning, a framework that greatly speeds up training by using tractable entropy surrogates and avoiding repeated inferences. Paired-dual learning optimizes an objective with a pair of dual inference problems. This allows fast, joint optimization of parameters and dual variables. We evaluate on social-group detection, trust prediction in social networks, and image reconstruction, finding that paired-dual learning trains models as accurate as those trained by traditional methods in much less time, often before traditional methods make even a single parameter update.
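
To make the training scheme described above concrete, here is a minimal, self-contained sketch in Python/NumPy of the interleaving idea: instead of solving the two inference problems to convergence before every parameter update, each is warm-started and advanced only a few steps per update. This is not the paper's algorithm as published; the inner updates here are projected-subgradient steps on the primal inference problems, a simplified stand-in for the ADMM-based dual updates and entropy surrogates the paper uses, and all model sizes, potentials, and step sizes are invented for illustration.

# Minimal sketch of the interleaving behind paired-dual learning.
# Assumptions (not from the paper): a toy HL-MRF-like model with hinge
# potentials phi_j = max(a_j . [x, y, z] + b_j, 0), nonnegative weights,
# and projected-subgradient inner steps in place of ADMM dual updates.
import numpy as np

rng = np.random.default_rng(0)

n_obs, n_y, n_z, n_pot = 4, 3, 2, 8               # observed, target, latent vars; potentials
A = rng.normal(size=(n_pot, n_obs + n_y + n_z))   # linear part of each hinge potential
b = rng.normal(size=n_pot)

def features(x, y, z):
    """Hinge-loss feature vector: phi_j = max(a_j . [x, y, z] + b_j, 0)."""
    v = np.concatenate([x, y, z])
    return np.maximum(A @ v + b, 0.0)

def inference_step(x, y, z, w, clamp_y, lr=0.05):
    """One projected-subgradient step on the inference problem
    min_{y, z in [0, 1]} w . phi(x, y, z); y is held fixed when clamp_y is True."""
    v = np.concatenate([x, y, z])
    active = (A @ v + b > 0).astype(float)            # subgradient of the hinges
    grad_v = (A * (w * active)[:, None]).sum(axis=0)  # d/dv of w . phi
    gy = grad_v[n_obs:n_obs + n_y]
    gz = grad_v[n_obs + n_y:]
    if not clamp_y:
        y = np.clip(y - lr * gy, 0.0, 1.0)
    z = np.clip(z - lr * gz, 0.0, 1.0)
    return y, z

# Toy training example: observed evidence x and labeled targets y_true.
x = rng.uniform(size=n_obs)
y_true = rng.uniform(size=n_y)

w = np.ones(n_pot)              # nonnegative weights
z_clamped = np.full(n_z, 0.5)   # latent variables, targets clamped to labels
y_free = np.full(n_y, 0.5)      # jointly inferred targets and latents
z_free = np.full(n_z, 0.5)

K, T, eta = 5, 200, 0.05        # inner steps per weight update, weight updates, step size
for t in range(T):
    # The pair of inference problems is warm-started and advanced only K steps
    # each, rather than solved to convergence before every weight update.
    for _ in range(K):
        _, z_clamped = inference_step(x, y_true, z_clamped, w, clamp_y=True)
        y_free, z_free = inference_step(x, y_free, z_free, w, clamp_y=False)
    # Approximate learning gradient: feature difference of the two partial solutions.
    grad_w = features(x, y_true, z_clamped) - features(x, y_free, z_free)
    w = np.maximum(w - eta * grad_w, 0.0)   # keep weights nonnegative

print("learned weights:", np.round(w, 3))
print("inferred targets:", np.round(y_free, 3), "labels:", np.round(y_true, 3))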

Cite this Paper


BibTeX
@InProceedings{pmlr-v37-bach15,
  title     = {Paired-Dual Learning for Fast Training of Latent Variable Hinge-Loss MRFs},
  author    = {Bach, Stephen and Huang, Bert and Boyd-Graber, Jordan and Getoor, Lise},
  booktitle = {Proceedings of the 32nd International Conference on Machine Learning},
  pages     = {381--390},
  year      = {2015},
  editor    = {Bach, Francis and Blei, David},
  volume    = {37},
  series    = {Proceedings of Machine Learning Research},
  address   = {Lille, France},
  month     = {07--09 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v37/bach15.pdf},
  url       = {https://proceedings.mlr.press/v37/bach15.html},
  abstract  = {Latent variables allow probabilistic graphical models to capture nuance and structure in important domains such as network science, natural language processing, and computer vision. Naive approaches to learning such complex models can be prohibitively expensive—because they require repeated inferences to update beliefs about latent variables—so lifting this restriction for useful classes of models is an important problem. Hinge-loss Markov random fields (HL-MRFs) are graphical models that allow highly scalable inference and learning in structured domains, in part by representing structured problems with continuous variables. However, this representation leads to challenges when learning with latent variables. We introduce paired-dual learning, a framework that greatly speeds up training by using tractable entropy surrogates and avoiding repeated inferences. Paired-dual learning optimizes an objective with a pair of dual inference problems. This allows fast, joint optimization of parameters and dual variables. We evaluate on social-group detection, trust prediction in social networks, and image reconstruction, finding that paired-dual learning trains models as accurate as those trained by traditional methods in much less time, often before traditional methods make even a single parameter update.}
}
Endnote
%0 Conference Paper
%T Paired-Dual Learning for Fast Training of Latent Variable Hinge-Loss MRFs
%A Stephen Bach
%A Bert Huang
%A Jordan Boyd-Graber
%A Lise Getoor
%B Proceedings of the 32nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2015
%E Francis Bach
%E David Blei
%F pmlr-v37-bach15
%I PMLR
%P 381--390
%U https://proceedings.mlr.press/v37/bach15.html
%V 37
%X Latent variables allow probabilistic graphical models to capture nuance and structure in important domains such as network science, natural language processing, and computer vision. Naive approaches to learning such complex models can be prohibitively expensive—because they require repeated inferences to update beliefs about latent variables—so lifting this restriction for useful classes of models is an important problem. Hinge-loss Markov random fields (HL-MRFs) are graphical models that allow highly scalable inference and learning in structured domains, in part by representing structured problems with continuous variables. However, this representation leads to challenges when learning with latent variables. We introduce paired-dual learning, a framework that greatly speeds up training by using tractable entropy surrogates and avoiding repeated inferences. Paired-dual learning optimizes an objective with a pair of dual inference problems. This allows fast, joint optimization of parameters and dual variables. We evaluate on social-group detection, trust prediction in social networks, and image reconstruction, finding that paired-dual learning trains models as accurate as those trained by traditional methods in much less time, often before traditional methods make even a single parameter update.
RIS
TY - CPAPER
TI - Paired-Dual Learning for Fast Training of Latent Variable Hinge-Loss MRFs
AU - Stephen Bach
AU - Bert Huang
AU - Jordan Boyd-Graber
AU - Lise Getoor
BT - Proceedings of the 32nd International Conference on Machine Learning
DA - 2015/06/01
ED - Francis Bach
ED - David Blei
ID - pmlr-v37-bach15
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 37
SP - 381
EP - 390
L1 - http://proceedings.mlr.press/v37/bach15.pdf
UR - https://proceedings.mlr.press/v37/bach15.html
AB - Latent variables allow probabilistic graphical models to capture nuance and structure in important domains such as network science, natural language processing, and computer vision. Naive approaches to learning such complex models can be prohibitively expensive—because they require repeated inferences to update beliefs about latent variables—so lifting this restriction for useful classes of models is an important problem. Hinge-loss Markov random fields (HL-MRFs) are graphical models that allow highly scalable inference and learning in structured domains, in part by representing structured problems with continuous variables. However, this representation leads to challenges when learning with latent variables. We introduce paired-dual learning, a framework that greatly speeds up training by using tractable entropy surrogates and avoiding repeated inferences. Paired-dual learning optimizes an objective with a pair of dual inference problems. This allows fast, joint optimization of parameters and dual variables. We evaluate on social-group detection, trust prediction in social networks, and image reconstruction, finding that paired-dual learning trains models as accurate as those trained by traditional methods in much less time, often before traditional methods make even a single parameter update.
ER -
APA
Bach, S., Huang, B., Boyd-Graber, J. & Getoor, L. (2015). Paired-Dual Learning for Fast Training of Latent Variable Hinge-Loss MRFs. Proceedings of the 32nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 37:381-390. Available from https://proceedings.mlr.press/v37/bach15.html.