Age-Layered Expectation Maximization for Parameter Learning in Bayesian Networks
Avneesh Saluja, Priya Krishnan Sundararajan, Ole J Mengshoel; Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics, PMLR 22:984-992, 2012.
Abstract
The expectation maximization (EM) algorithm is a popular algorithm for parameter estimation in models with hidden variables. However, the algorithm has several non-trivial limitations, a significant one being variation in eventual solutions found, due to convergence to local optima. Several techniques have been proposed to allay this problem, for example initializing EM from multiple random starting points and selecting the highest likelihood out of all runs. In this work, we a) show that this method can be very expensive computationally for difficult Bayesian networks, and b) in response we propose an age-layered EM approach (ALEM) that efficiently discards less promising runs well before convergence. Our experiments show a significant reduction in the number of iterations, typically two- to four-fold, with minimal or no reduction in solution quality, indicating the potential for ALEM to streamline parameter estimation in Bayesian networks.
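The abstract's core idea, restarting EM from many random initializations but discarding weak runs well before convergence, can be illustrated with a minimal sketch. The code below is not the authors' ALEM algorithm (which uses age layers); it is a simplified stand-in that runs several randomly initialized EM fits of a two-component 1-D Gaussian mixture, ranks them by log-likelihood at an early checkpoint, prunes the weaker half, and only runs the survivors to convergence. All function names, the checkpoint/keep parameters, and the toy model are illustrative assumptions.

```python
import math
import random

def em_step(data, params):
    # One EM iteration for a 2-component 1-D Gaussian mixture
    # with shared variance: params = (weight, mu1, mu2, var).
    w, mu1, mu2, var = params
    # E-step: responsibility of component 1 for each point.
    r = []
    for x in data:
        p1 = w * math.exp(-(x - mu1) ** 2 / (2 * var))
        p2 = (1 - w) * math.exp(-(x - mu2) ** 2 / (2 * var))
        r.append(p1 / (p1 + p2))
    # M-step: re-estimate weight, means, and shared variance.
    n1 = sum(r)
    n2 = len(data) - n1
    w = n1 / len(data)
    mu1 = sum(ri * x for ri, x in zip(r, data)) / n1
    mu2 = sum((1 - ri) * x for ri, x in zip(r, data)) / n2
    var = sum(ri * (x - mu1) ** 2 + (1 - ri) * (x - mu2) ** 2
              for ri, x in zip(r, data)) / len(data)
    return w, mu1, mu2, max(var, 1e-6)

def log_likelihood(data, params):
    w, mu1, mu2, var = params
    z = math.sqrt(2 * math.pi * var)
    ll = 0.0
    for x in data:
        p1 = w / z * math.exp(-(x - mu1) ** 2 / (2 * var))
        p2 = (1 - w) / z * math.exp(-(x - mu2) ** 2 / (2 * var))
        ll += math.log(p1 + p2)
    return ll

def pruned_restarts(data, n_runs=8, checkpoint=5, extra=20, keep=0.5, seed=0):
    # Launch n_runs randomly initialized EM runs.
    rng = random.Random(seed)
    lo, hi = min(data), max(data)
    runs = [(0.5, rng.uniform(lo, hi), rng.uniform(lo, hi), 1.0)
            for _ in range(n_runs)]
    # Warm-up: a few iterations per run, then discard the weaker half,
    # saving the iterations those runs would have spent converging.
    for _ in range(checkpoint):
        runs = [em_step(data, p) for p in runs]
    runs.sort(key=lambda p: log_likelihood(data, p), reverse=True)
    runs = runs[:max(1, int(len(runs) * keep))]
    # Run only the survivors onward, and return the best fit found.
    for _ in range(extra):
        runs = [em_step(data, p) for p in runs]
    return max(runs, key=lambda p: log_likelihood(data, p))
```

The saving comes from the pruning step: runs headed for poor local optima are cut after `checkpoint` iterations instead of being iterated to convergence, which is the same intuition the paper develops, there organized into age layers rather than a single checkpoint.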
@InProceedings{pmlr-v22-saluja12,
title = {Age-Layered Expectation Maximization for Parameter Learning in Bayesian Networks},
author = {Avneesh Saluja and Priya Krishnan Sundararajan and Ole J Mengshoel},
booktitle = {Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics},
pages = {984--992},
year = {2012},
editor = {Neil D. Lawrence and Mark Girolami},
volume = {22},
series = {Proceedings of Machine Learning Research},
address = {La Palma, Canary Islands},
month = {21--23 Apr},
publisher = {PMLR},
pdf = {http://proceedings.mlr.press/v22/saluja12/saluja12.pdf},
url = {http://proceedings.mlr.press/v22/saluja12.html},
abstract = {The expectation maximization (EM) algorithm is a popular algorithm for parameter estimation in models with hidden variables. However, the algorithm has several non-trivial limitations, a significant one being variation in eventual solutions found, due to convergence to local optima. Several techniques have been proposed to allay this problem, for example initializing EM from multiple random starting points and selecting the highest likelihood out of all runs. In this work, we a) show that this method can be very expensive computationally for difficult Bayesian networks, and b) in response we propose an age-layered EM approach (ALEM) that efficiently discards less promising runs well before convergence. Our experiments show a significant reduction in the number of iterations, typically two- to four-fold, with minimal or no reduction in solution quality, indicating the potential for ALEM to streamline parameter estimation in Bayesian networks.}
}
%0 Conference Paper
%T Age-Layered Expectation Maximization for Parameter Learning in Bayesian Networks
%A Avneesh Saluja
%A Priya Krishnan Sundararajan
%A Ole J Mengshoel
%B Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics
%C La Palma, Canary Islands
%D 2012
%E Neil D. Lawrence
%E Mark Girolami
%F pmlr-v22-saluja12
%I PMLR
%J Proceedings of Machine Learning Research
%P 984-992
%U http://proceedings.mlr.press/v22/saluja12.html
%V 22
%W PMLR
%X The expectation maximization (EM) algorithm is a popular algorithm for parameter estimation in models with hidden variables. However, the algorithm has several non-trivial limitations, a significant one being variation in eventual solutions found, due to convergence to local optima. Several techniques have been proposed to allay this problem, for example initializing EM from multiple random starting points and selecting the highest likelihood out of all runs. In this work, we a) show that this method can be very expensive computationally for difficult Bayesian networks, and b) in response we propose an age-layered EM approach (ALEM) that efficiently discards less promising runs well before convergence. Our experiments show a significant reduction in the number of iterations, typically two- to four-fold, with minimal or no reduction in solution quality, indicating the potential for ALEM to streamline parameter estimation in Bayesian networks.
TY - CPAPER
TI - Age-Layered Expectation Maximization for Parameter Learning in Bayesian Networks
AU - Avneesh Saluja
AU - Priya Krishnan Sundararajan
AU - Ole J Mengshoel
BT - Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics
PY - 2012/04/21
DA - 2012/04/21
ED - Neil D. Lawrence
ED - Mark Girolami
ID - pmlr-v22-saluja12
PB - PMLR
SP - 984
DP - PMLR
EP - 992
L1 - http://proceedings.mlr.press/v22/saluja12/saluja12.pdf
UR - http://proceedings.mlr.press/v22/saluja12.html
AB - The expectation maximization (EM) algorithm is a popular algorithm for parameter estimation in models with hidden variables. However, the algorithm has several non-trivial limitations, a significant one being variation in eventual solutions found, due to convergence to local optima. Several techniques have been proposed to allay this problem, for example initializing EM from multiple random starting points and selecting the highest likelihood out of all runs. In this work, we a) show that this method can be very expensive computationally for difficult Bayesian networks, and b) in response we propose an age-layered EM approach (ALEM) that efficiently discards less promising runs well before convergence. Our experiments show a significant reduction in the number of iterations, typically two- to four-fold, with minimal or no reduction in solution quality, indicating the potential for ALEM to streamline parameter estimation in Bayesian networks.
ER -
Saluja, A., Sundararajan, P.K. & Mengshoel, O.J. (2012). Age-Layered Expectation Maximization for Parameter Learning in Bayesian Networks. Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics, in PMLR 22:984-992.