Age-Layered Expectation Maximization for Parameter Learning in Bayesian Networks

Avneesh Saluja, Priya Krishnan Sundararajan, Ole J Mengshoel
Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics, PMLR 22:984-992, 2012.

Abstract

The expectation maximization (EM) algorithm is a popular algorithm for parameter estimation in models with hidden variables. However, the algorithm has several non-trivial limitations, a significant one being variation in eventual solutions found, due to convergence to local optima. Several techniques have been proposed to allay this problem, for example initializing EM from multiple random starting points and selecting the highest likelihood out of all runs. In this work, we a) show that this method can be very expensive computationally for difficult Bayesian networks, and b) in response we propose an age-layered EM approach (ALEM) that efficiently discards less promising runs well before convergence. Our experiments show a significant reduction in the number of iterations, typically two- to four-fold, with minimal or no reduction in solution quality, indicating the potential for ALEM to streamline parameter estimation in Bayesian networks.
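The baseline the abstract describes, restarting EM from multiple random initializations and keeping the highest-likelihood run, can be sketched in a few lines. The sketch below uses a 1-D two-component Gaussian mixture rather than a Bayesian network, and `pruned_restarts` only loosely mimics the age-layered idea (periodic discarding of the lower-likelihood half of the runs, with no actual age layers); all function names and parameters here are illustrative, not from the paper.

```python
import math
import random

def _norm_pdf(x, mu, s):
    """Density of N(mu, s^2) at x."""
    return math.exp(-0.5 * ((x - mu) / s) ** 2) / (s * math.sqrt(2 * math.pi))

def _e_step(data, params):
    """E-step: responsibilities of component 1, plus the data log-likelihood."""
    w, mu1, s1, mu2, s2 = params
    resp, ll = [], 0.0
    for x in data:
        p1 = w * _norm_pdf(x, mu1, s1)
        p2 = (1 - w) * _norm_pdf(x, mu2, s2)
        tot = (p1 + p2) or 1e-300  # guard against underflow to zero
        resp.append(p1 / tot)
        ll += math.log(tot)
    return resp, ll

def _m_step(data, resp):
    """M-step: re-estimate mixture weight, means, and (floored) std devs."""
    n = len(data)
    n1 = max(sum(resp), 1e-9)
    n2 = max(n - sum(resp), 1e-9)
    mu1 = sum(r * x for r, x in zip(resp, data)) / n1
    mu2 = sum((1 - r) * x for r, x in zip(resp, data)) / n2
    s1 = max(math.sqrt(sum(r * (x - mu1) ** 2 for r, x in zip(resp, data)) / n1), 1e-3)
    s2 = max(math.sqrt(sum((1 - r) * (x - mu2) ** 2 for r, x in zip(resp, data)) / n2), 1e-3)
    return (n1 / n, mu1, s1, mu2, s2)

def em_gmm(data, n_iter=50, params=None, rng=None):
    """Run n_iter EM iterations; random init if params is None.
    Returns (log_likelihood, params)."""
    if params is None:
        rng = rng or random.Random()
        lo, hi = min(data), max(data)
        spread = ((hi - lo) / 2) or 1.0
        params = (0.5, rng.uniform(lo, hi), spread, rng.uniform(lo, hi), spread)
    for _ in range(n_iter):
        resp, _ = _e_step(data, params)
        params = _m_step(data, resp)
    _, ll = _e_step(data, params)  # log-likelihood of the final parameters
    return ll, params

def best_of_restarts(data, n_restarts=10, seed=0):
    """Plain multi-restart EM: run every start to the iteration budget, keep the best."""
    rng = random.Random(seed)
    runs = [em_gmm(data, rng=random.Random(rng.random())) for _ in range(n_restarts)]
    return max(runs, key=lambda r: r[0])

def pruned_restarts(data, n_restarts=16, rounds=3, iters=10, seed=0):
    """ALEM-flavored sketch (not the paper's exact layering scheme): advance all
    runs a few iterations at a time, discarding the less promising half each round."""
    rng = random.Random(seed)
    pool = [em_gmm(data, n_iter=iters, rng=random.Random(rng.random()))
            for _ in range(n_restarts)]
    for _ in range(rounds - 1):
        pool.sort(key=lambda r: r[0], reverse=True)
        pool = pool[: max(1, len(pool) // 2)]          # drop low-likelihood runs early
        pool = [em_gmm(data, n_iter=iters, params=p) for _, p in pool]
    return max(pool, key=lambda r: r[0])
```

The pruning variant spends far fewer total EM iterations than running every restart to the full budget, which is the kind of saving the abstract reports, though the paper's actual method organizes runs into age layers rather than a single tournament.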

Cite this Paper


BibTeX
@InProceedings{pmlr-v22-saluja12,
  title     = {Age-Layered Expectation Maximization for Parameter Learning in Bayesian Networks},
  author    = {Saluja, Avneesh and Sundararajan, Priya Krishnan and Mengshoel, Ole J},
  booktitle = {Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics},
  pages     = {984--992},
  year      = {2012},
  editor    = {Lawrence, Neil D. and Girolami, Mark},
  volume    = {22},
  series    = {Proceedings of Machine Learning Research},
  address   = {La Palma, Canary Islands},
  month     = {21--23 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v22/saluja12/saluja12.pdf},
  url       = {https://proceedings.mlr.press/v22/saluja12.html},
  abstract  = {The expectation maximization (EM) algorithm is a popular algorithm for parameter estimation in models with hidden variables. However, the algorithm has several non-trivial limitations, a significant one being variation in eventual solutions found, due to convergence to local optima. Several techniques have been proposed to allay this problem, for example initializing EM from multiple random starting points and selecting the highest likelihood out of all runs. In this work, we a) show that this method can be very expensive computationally for difficult Bayesian networks, and b) in response we propose an age-layered EM approach (ALEM) that efficiently discards less promising runs well before convergence. Our experiments show a significant reduction in the number of iterations, typically two- to four-fold, with minimal or no reduction in solution quality, indicating the potential for ALEM to streamline parameter estimation in Bayesian networks.}
}
Endnote
%0 Conference Paper
%T Age-Layered Expectation Maximization for Parameter Learning in Bayesian Networks
%A Avneesh Saluja
%A Priya Krishnan Sundararajan
%A Ole J Mengshoel
%B Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2012
%E Neil D. Lawrence
%E Mark Girolami
%F pmlr-v22-saluja12
%I PMLR
%P 984--992
%U https://proceedings.mlr.press/v22/saluja12.html
%V 22
%X The expectation maximization (EM) algorithm is a popular algorithm for parameter estimation in models with hidden variables. However, the algorithm has several non-trivial limitations, a significant one being variation in eventual solutions found, due to convergence to local optima. Several techniques have been proposed to allay this problem, for example initializing EM from multiple random starting points and selecting the highest likelihood out of all runs. In this work, we a) show that this method can be very expensive computationally for difficult Bayesian networks, and b) in response we propose an age-layered EM approach (ALEM) that efficiently discards less promising runs well before convergence. Our experiments show a significant reduction in the number of iterations, typically two- to four-fold, with minimal or no reduction in solution quality, indicating the potential for ALEM to streamline parameter estimation in Bayesian networks.
RIS
TY - CPAPER
TI - Age-Layered Expectation Maximization for Parameter Learning in Bayesian Networks
AU - Avneesh Saluja
AU - Priya Krishnan Sundararajan
AU - Ole J Mengshoel
BT - Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics
DA - 2012/03/21
ED - Neil D. Lawrence
ED - Mark Girolami
ID - pmlr-v22-saluja12
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 22
SP - 984
EP - 992
L1 - http://proceedings.mlr.press/v22/saluja12/saluja12.pdf
UR - https://proceedings.mlr.press/v22/saluja12.html
AB - The expectation maximization (EM) algorithm is a popular algorithm for parameter estimation in models with hidden variables. However, the algorithm has several non-trivial limitations, a significant one being variation in eventual solutions found, due to convergence to local optima. Several techniques have been proposed to allay this problem, for example initializing EM from multiple random starting points and selecting the highest likelihood out of all runs. In this work, we a) show that this method can be very expensive computationally for difficult Bayesian networks, and b) in response we propose an age-layered EM approach (ALEM) that efficiently discards less promising runs well before convergence. Our experiments show a significant reduction in the number of iterations, typically two- to four-fold, with minimal or no reduction in solution quality, indicating the potential for ALEM to streamline parameter estimation in Bayesian networks.
ER -
APA
Saluja, A., Sundararajan, P. K., &amp; Mengshoel, O. J. (2012). Age-Layered Expectation Maximization for Parameter Learning in Bayesian Networks. Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 22:984-992. Available from https://proceedings.mlr.press/v22/saluja12.html.