Learning with Blocks: Composite Likelihood and Contrastive Divergence

Arthur Asuncion, Qiang Liu, Alexander Ihler, Padhraic Smyth
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, PMLR 9:33-40, 2010.

Abstract

Composite likelihood methods provide a wide spectrum of computationally efficient techniques for statistical tasks such as parameter estimation and model selection. In this paper, we present a formal connection between the optimization of composite likelihoods and the well-known contrastive divergence algorithm. In particular, we show that composite likelihoods can be stochastically optimized by performing a variant of contrastive divergence with random-scan blocked Gibbs sampling. By using higher-order composite likelihoods, our proposed learning framework makes it possible to trade off computation time for increased accuracy. Furthermore, one can choose composite likelihood blocks that match the model’s dependence structure, making the optimization of higher-order composite likelihoods computationally efficient. We empirically analyze the performance of blocked contrastive divergence on various models, including visible Boltzmann machines, conditional random fields, and exponential random graph models, and we demonstrate that using higher-order blocks improves both the accuracy of parameter estimates and the rate of convergence.
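
To make the abstract's central claim concrete, here is a minimal, hypothetical sketch (not the authors' code) of a composite-likelihood block update for a fully visible Boltzmann machine with parameters W and b. The gradient of a randomly chosen block's conditional likelihood is computed by enumerating the block exactly; replacing that enumeration with a single blocked Gibbs draw recovers the blocked contrastive-divergence variant the abstract describes. The function name and parameters below are illustrative assumptions, not taken from the paper.

import itertools

import numpy as np


def blocked_cl_gradient_step(W, b, x, block, lr=0.01):
    """One stochastic composite-likelihood gradient step for a fully visible
    Boltzmann machine p(x) ∝ exp(0.5·x'Wx + b'x), x ∈ {0,1}^d.

    `block` is the index set S of the randomly chosen block.  The gradient of
    log p(x_S | x_{-S}) is the data sufficient statistics minus their
    expectation under the exact block conditional (enumerated here, so the
    block must be small).  Drawing one sample from that conditional instead of
    enumerating it gives the blocked contrastive-divergence update.
    """
    block = np.asarray(block)

    # Enumerate all configurations of the block, with the remaining
    # variables clamped to their observed data values.
    configs, energies = [], []
    for vals in itertools.product([0.0, 1.0], repeat=len(block)):
        xc = x.copy()
        xc[block] = vals
        configs.append(xc)
        energies.append(0.5 * xc @ W @ xc + b @ xc)
    energies = np.array(energies)
    probs = np.exp(energies - energies.max())
    probs /= probs.sum()

    # Expected sufficient statistics under the exact block conditional.
    E_xx = sum(p * np.outer(xc, xc) for p, xc in zip(probs, configs))
    E_x = sum(p * xc for p, xc in zip(probs, configs))

    # Data statistics minus conditional statistics; entries not touching the
    # block cancel automatically because those variables are clamped.
    grad_W = np.outer(x, x) - E_xx
    grad_b = x - E_x

    W_new = W + lr * grad_W
    np.fill_diagonal(W_new, 0.0)  # keep the model free of self-couplings
    return W_new, b + lr * grad_b


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, block_size = 6, 2
    W, b = np.zeros((d, d)), np.zeros(d)
    data = rng.integers(0, 2, size=(200, d)).astype(float)
    for x in data:  # random-scan: a fresh block per data point
        block = rng.choice(d, size=block_size, replace=False)
        W, b = blocked_cl_gradient_step(W, b, x, block)

Enumerating larger blocks trades computation for accuracy, mirroring the higher-order composite likelihoods discussed in the abstract.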

Cite this Paper


BibTeX
@InProceedings{pmlr-v9-asuncion10a,
  title     = {Learning with Blocks: Composite Likelihood and Contrastive Divergence},
  author    = {Asuncion, Arthur and Liu, Qiang and Ihler, Alexander and Smyth, Padhraic},
  booktitle = {Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics},
  pages     = {33--40},
  year      = {2010},
  editor    = {Teh, Yee Whye and Titterington, Mike},
  volume    = {9},
  series    = {Proceedings of Machine Learning Research},
  address   = {Chia Laguna Resort, Sardinia, Italy},
  month     = {13--15 May},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v9/asuncion10a/asuncion10a.pdf},
  url       = {https://proceedings.mlr.press/v9/asuncion10a.html},
  abstract  = {Composite likelihood methods provide a wide spectrum of computationally efficient techniques for statistical tasks such as parameter estimation and model selection. In this paper, we present a formal connection between the optimization of composite likelihoods and the well-known contrastive divergence algorithm. In particular, we show that composite likelihoods can be stochastically optimized by performing a variant of contrastive divergence with random-scan blocked Gibbs sampling. By using higher-order composite likelihoods, our proposed learning framework makes it possible to trade off computation time for increased accuracy. Furthermore, one can choose composite likelihood blocks that match the model’s dependence structure, making the optimization of higher-order composite likelihoods computationally efficient. We empirically analyze the performance of blocked contrastive divergence on various models, including visible Boltzmann machines, conditional random fields, and exponential random graph models, and we demonstrate that using higher-order blocks improves both the accuracy of parameter estimates and the rate of convergence.}
}
Endnote
%0 Conference Paper
%T Learning with Blocks: Composite Likelihood and Contrastive Divergence
%A Arthur Asuncion
%A Qiang Liu
%A Alexander Ihler
%A Padhraic Smyth
%B Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2010
%E Yee Whye Teh
%E Mike Titterington
%F pmlr-v9-asuncion10a
%I PMLR
%P 33--40
%U https://proceedings.mlr.press/v9/asuncion10a.html
%V 9
%X Composite likelihood methods provide a wide spectrum of computationally efficient techniques for statistical tasks such as parameter estimation and model selection. In this paper, we present a formal connection between the optimization of composite likelihoods and the well-known contrastive divergence algorithm. In particular, we show that composite likelihoods can be stochastically optimized by performing a variant of contrastive divergence with random-scan blocked Gibbs sampling. By using higher-order composite likelihoods, our proposed learning framework makes it possible to trade off computation time for increased accuracy. Furthermore, one can choose composite likelihood blocks that match the model’s dependence structure, making the optimization of higher-order composite likelihoods computationally efficient. We empirically analyze the performance of blocked contrastive divergence on various models, including visible Boltzmann machines, conditional random fields, and exponential random graph models, and we demonstrate that using higher-order blocks improves both the accuracy of parameter estimates and the rate of convergence.
RIS
TY - CPAPER
TI - Learning with Blocks: Composite Likelihood and Contrastive Divergence
AU - Arthur Asuncion
AU - Qiang Liu
AU - Alexander Ihler
AU - Padhraic Smyth
BT - Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics
DA - 2010/03/31
ED - Yee Whye Teh
ED - Mike Titterington
ID - pmlr-v9-asuncion10a
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 9
SP - 33
EP - 40
L1 - http://proceedings.mlr.press/v9/asuncion10a/asuncion10a.pdf
UR - https://proceedings.mlr.press/v9/asuncion10a.html
AB - Composite likelihood methods provide a wide spectrum of computationally efficient techniques for statistical tasks such as parameter estimation and model selection. In this paper, we present a formal connection between the optimization of composite likelihoods and the well-known contrastive divergence algorithm. In particular, we show that composite likelihoods can be stochastically optimized by performing a variant of contrastive divergence with random-scan blocked Gibbs sampling. By using higher-order composite likelihoods, our proposed learning framework makes it possible to trade off computation time for increased accuracy. Furthermore, one can choose composite likelihood blocks that match the model’s dependence structure, making the optimization of higher-order composite likelihoods computationally efficient. We empirically analyze the performance of blocked contrastive divergence on various models, including visible Boltzmann machines, conditional random fields, and exponential random graph models, and we demonstrate that using higher-order blocks improves both the accuracy of parameter estimates and the rate of convergence.
ER -
APA
Asuncion, A., Liu, Q., Ihler, A. & Smyth, P. (2010). Learning with Blocks: Composite Likelihood and Contrastive Divergence. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 9:33-40. Available from https://proceedings.mlr.press/v9/asuncion10a.html.
