A Genetic Algorithm for Learning Parameters in Bayesian Networks using Expectation Maximization

Priya Krishnan Sundararajan, Ole J. Mengshoel
Proceedings of the Eighth International Conference on Probabilistic Graphical Models, PMLR 52:511-522, 2016.

Abstract

Expectation maximization (EM) is a popular algorithm for parameter estimation in situations with incomplete data. The EM algorithm has, despite its popularity, the disadvantage of often converging to local but non-global optima. Several techniques have been proposed to address this problem, for example initializing EM from multiple random starting points and then selecting the run with the highest likelihood. Unfortunately, this method is computationally expensive. In this paper, our goal is to reduce computational cost while at the same time maximizing likelihood. We propose a Genetic Algorithm for Expectation Maximization (GAEM) for learning parameters in Bayesian networks. GAEM combines the global search property of a genetic algorithm with the local search property of EM. We prove GAEM’s global convergence theoretically. Experimentally, we show that GAEM provides significant speed-ups since it tends to select more fit individuals, which converge faster, as parents for the next generation. Specifically, GAEM converges 1.5 to 7 times faster while producing better log-likelihood scores than the traditional EM algorithm.
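The abstract describes GAEM's core loop: run a few EM iterations on each individual in a population, score each by log-likelihood, and breed the fitter individuals. The paper applies this to Bayesian-network parameters; the sketch below is a simplified, hypothetical illustration of that GA-plus-EM interleaving on a two-component Gaussian mixture (not the authors' implementation, and the population size, step counts, and mutation scheme are assumptions for illustration only).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: a 1-D two-component Gaussian mixture with unit variances.
# (The paper learns Bayesian-network CPTs; a mixture keeps the sketch short.)
data = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1, 200)])

def em_step(params):
    """One EM iteration on an individual's parameters (weight, mean1, mean2)."""
    w, mu1, mu2 = params
    # E-step: responsibilities of component 1 for each data point.
    p1 = w * np.exp(-0.5 * (data - mu1) ** 2)
    p2 = (1 - w) * np.exp(-0.5 * (data - mu2) ** 2)
    r = p1 / (p1 + p2)
    # M-step: re-estimate the mixture weight and the two means.
    return np.array([r.mean(),
                     (r * data).sum() / r.sum(),
                     ((1 - r) * data).sum() / (1 - r).sum()])

def log_lik(params):
    """Fitness of an individual = log-likelihood of the data."""
    w, mu1, mu2 = params
    dens = (w * np.exp(-0.5 * (data - mu1) ** 2)
            + (1 - w) * np.exp(-0.5 * (data - mu2) ** 2)) / np.sqrt(2 * np.pi)
    return np.log(dens).sum()

def gaem(pop_size=8, generations=10, em_steps=3):
    # Random initial population, as in multi-start EM.
    pop = [np.array([rng.uniform(0.2, 0.8),
                     rng.uniform(-5, 5),
                     rng.uniform(-5, 5)]) for _ in range(pop_size)]
    for _ in range(generations):
        # Local search: a few EM iterations per individual.
        for _ in range(em_steps):
            pop = [em_step(p) for p in pop]
        # Selection: keep the fitter half as parents for the next generation.
        pop.sort(key=log_lik, reverse=True)
        parents = pop[: pop_size // 2]
        # Crossover (average two parents) plus small mutation on the means.
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.choice(len(parents), 2, replace=False)
            child = (parents[a] + parents[b]) / 2
            child[1:] += rng.normal(0, 0.1, 2)
            children.append(child)
        pop = parents + children
    return max(pop, key=log_lik)

best = gaem()  # best individual found: (weight, mean1, mean2)
```

Because selection favors individuals that EM has already moved toward high-likelihood regions, later generations spend their EM budget on promising starting points rather than on runs stuck near poor local optima, which is the intuition behind the speed-ups the abstract reports.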

Cite this Paper


BibTeX
@InProceedings{pmlr-v52-sundararajan16,
  title = {A Genetic Algorithm for Learning Parameters in {B}ayesian Networks using Expectation Maximization},
  author = {Priya Krishnan Sundararajan and Ole J. Mengshoel},
  booktitle = {Proceedings of the Eighth International Conference on Probabilistic Graphical Models},
  pages = {511--522},
  year = {2016},
  editor = {Alessandro Antonucci and Giorgio Corani and Cassio Polpo Campos},
  volume = {52},
  series = {Proceedings of Machine Learning Research},
  address = {Lugano, Switzerland},
  month = {06--09 Sep},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v52/sundararajan16.pdf},
  url = {http://proceedings.mlr.press/v52/sundararajan16.html},
  abstract = {Expectation maximization (EM) is a popular algorithm for parameter estimation in situations with incomplete data. The EM algorithm has, despite its popularity, the disadvantage of often converging to local but non-global optima. Several techniques have been proposed to address this problem, for example initializing EM from multiple random starting points and then selecting the run with the highest likelihood. Unfortunately, this method is computationally expensive. In this paper, our goal is to reduce computational cost while at the same time maximizing likelihood. We propose a Genetic Algorithm for Expectation Maximization (GAEM) for learning parameters in Bayesian networks. GAEM combines the global search property of a genetic algorithm with the local search property of EM. We prove GAEM’s global convergence theoretically. Experimentally, we show that GAEM provides significant speed-ups since it tends to select more fit individuals, which converge faster, as parents for the next generation. Specifically, GAEM converges 1.5 to 7 times faster while producing better log-likelihood scores than the traditional EM algorithm.}
}
Endnote
%0 Conference Paper
%T A Genetic Algorithm for Learning Parameters in Bayesian Networks using Expectation Maximization
%A Priya Krishnan Sundararajan
%A Ole J. Mengshoel
%B Proceedings of the Eighth International Conference on Probabilistic Graphical Models
%C Proceedings of Machine Learning Research
%D 2016
%E Alessandro Antonucci
%E Giorgio Corani
%E Cassio Polpo Campos
%F pmlr-v52-sundararajan16
%I PMLR
%J Proceedings of Machine Learning Research
%P 511--522
%U http://proceedings.mlr.press/v52/sundararajan16.html
%V 52
%W PMLR
%X Expectation maximization (EM) is a popular algorithm for parameter estimation in situations with incomplete data. The EM algorithm has, despite its popularity, the disadvantage of often converging to local but non-global optima. Several techniques have been proposed to address this problem, for example initializing EM from multiple random starting points and then selecting the run with the highest likelihood. Unfortunately, this method is computationally expensive. In this paper, our goal is to reduce computational cost while at the same time maximizing likelihood. We propose a Genetic Algorithm for Expectation Maximization (GAEM) for learning parameters in Bayesian networks. GAEM combines the global search property of a genetic algorithm with the local search property of EM. We prove GAEM’s global convergence theoretically. Experimentally, we show that GAEM provides significant speed-ups since it tends to select more fit individuals, which converge faster, as parents for the next generation. Specifically, GAEM converges 1.5 to 7 times faster while producing better log-likelihood scores than the traditional EM algorithm.
RIS
TY  - CPAPER
TI  - A Genetic Algorithm for Learning Parameters in Bayesian Networks using Expectation Maximization
AU  - Priya Krishnan Sundararajan
AU  - Ole J. Mengshoel
BT  - Proceedings of the Eighth International Conference on Probabilistic Graphical Models
PY  - 2016/08/15
DA  - 2016/08/15
ED  - Alessandro Antonucci
ED  - Giorgio Corani
ED  - Cassio Polpo Campos
ID  - pmlr-v52-sundararajan16
PB  - PMLR
DP  - PMLR
SP  - 511
EP  - 522
L1  - http://proceedings.mlr.press/v52/sundararajan16.pdf
UR  - http://proceedings.mlr.press/v52/sundararajan16.html
AB  - Expectation maximization (EM) is a popular algorithm for parameter estimation in situations with incomplete data. The EM algorithm has, despite its popularity, the disadvantage of often converging to local but non-global optima. Several techniques have been proposed to address this problem, for example initializing EM from multiple random starting points and then selecting the run with the highest likelihood. Unfortunately, this method is computationally expensive. In this paper, our goal is to reduce computational cost while at the same time maximizing likelihood. We propose a Genetic Algorithm for Expectation Maximization (GAEM) for learning parameters in Bayesian networks. GAEM combines the global search property of a genetic algorithm with the local search property of EM. We prove GAEM’s global convergence theoretically. Experimentally, we show that GAEM provides significant speed-ups since it tends to select more fit individuals, which converge faster, as parents for the next generation. Specifically, GAEM converges 1.5 to 7 times faster while producing better log-likelihood scores than the traditional EM algorithm.
ER  -
APA
Sundararajan, P.K. & Mengshoel, O.J. (2016). A Genetic Algorithm for Learning Parameters in Bayesian Networks using Expectation Maximization. Proceedings of the Eighth International Conference on Probabilistic Graphical Models, in PMLR 52:511-522.

Related Material