A Genetic Algorithm for Learning Parameters in Bayesian Networks using Expectation Maximization

Priya Krishnan Sundararajan, Ole J. Mengshoel
Proceedings of the Eighth International Conference on Probabilistic Graphical Models, PMLR 52:511-522, 2016.

Abstract

Expectation maximization (EM) is a popular algorithm for parameter estimation in situations with incomplete data. The EM algorithm has, despite its popularity, the disadvantage of often converging to local but non-global optima. Several techniques have been proposed to address this problem, for example initializing EM from multiple random starting points and then selecting the run with the highest likelihood. Unfortunately, this method is computationally expensive. In this paper, our goal is to reduce computational cost while at the same time maximizing likelihood. We propose a Genetic Algorithm for Expectation Maximization (GAEM) for learning parameters in Bayesian networks. GAEM combines the global search property of a genetic algorithm with the local search property of EM. We prove GAEM’s global convergence theoretically. Experimentally, we show that GAEM provides significant speed-ups since it tends to select more fit individuals, which converge faster, as parents for the next generation. Specifically, GAEM converges 1.5 to 7 times faster while producing better log-likelihood scores than the traditional EM algorithm.

Cite this Paper


BibTeX
@InProceedings{pmlr-v52-sundararajan16,
  title = {A Genetic Algorithm for Learning Parameters in {B}ayesian Networks using Expectation Maximization},
  author = {Sundararajan, Priya Krishnan and Mengshoel, Ole J.},
  booktitle = {Proceedings of the Eighth International Conference on Probabilistic Graphical Models},
  pages = {511--522},
  year = {2016},
  editor = {Antonucci, Alessandro and Corani, Giorgio and Campos, Cassio Polpo},
  volume = {52},
  series = {Proceedings of Machine Learning Research},
  address = {Lugano, Switzerland},
  month = {06--09 Sep},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v52/sundararajan16.pdf},
  url = {https://proceedings.mlr.press/v52/sundararajan16.html},
  abstract = {Expectation maximization (EM) is a popular algorithm for parameter estimation in situations with incomplete data. The EM algorithm has, despite its popularity, the disadvantage of often converging to local but non-global optima. Several techniques have been proposed to address this problem, for example initializing EM from multiple random starting points and then selecting the run with the highest likelihood. Unfortunately, this method is computationally expensive. In this paper, our goal is to reduce computational cost while at the same time maximizing likelihood. We propose a Genetic Algorithm for Expectation Maximization (GAEM) for learning parameters in Bayesian networks. GAEM combines the global search property of a genetic algorithm with the local search property of EM. We prove GAEM’s global convergence theoretically. Experimentally, we show that GAEM provides significant speed-ups since it tends to select more fit individuals, which converge faster, as parents for the next generation. Specifically, GAEM converges 1.5 to 7 times faster while producing better log-likelihood scores than the traditional EM algorithm.}
}
Endnote
%0 Conference Paper
%T A Genetic Algorithm for Learning Parameters in Bayesian Networks using Expectation Maximization
%A Priya Krishnan Sundararajan
%A Ole J. Mengshoel
%B Proceedings of the Eighth International Conference on Probabilistic Graphical Models
%C Proceedings of Machine Learning Research
%D 2016
%E Alessandro Antonucci
%E Giorgio Corani
%E Cassio Polpo Campos
%F pmlr-v52-sundararajan16
%I PMLR
%P 511--522
%U https://proceedings.mlr.press/v52/sundararajan16.html
%V 52
%X Expectation maximization (EM) is a popular algorithm for parameter estimation in situations with incomplete data. The EM algorithm has, despite its popularity, the disadvantage of often converging to local but non-global optima. Several techniques have been proposed to address this problem, for example initializing EM from multiple random starting points and then selecting the run with the highest likelihood. Unfortunately, this method is computationally expensive. In this paper, our goal is to reduce computational cost while at the same time maximizing likelihood. We propose a Genetic Algorithm for Expectation Maximization (GAEM) for learning parameters in Bayesian networks. GAEM combines the global search property of a genetic algorithm with the local search property of EM. We prove GAEM’s global convergence theoretically. Experimentally, we show that GAEM provides significant speed-ups since it tends to select more fit individuals, which converge faster, as parents for the next generation. Specifically, GAEM converges 1.5 to 7 times faster while producing better log-likelihood scores than the traditional EM algorithm.
RIS
TY  - CPAPER
TI  - A Genetic Algorithm for Learning Parameters in Bayesian Networks using Expectation Maximization
AU  - Priya Krishnan Sundararajan
AU  - Ole J. Mengshoel
BT  - Proceedings of the Eighth International Conference on Probabilistic Graphical Models
DA  - 2016/08/15
ED  - Alessandro Antonucci
ED  - Giorgio Corani
ED  - Cassio Polpo Campos
ID  - pmlr-v52-sundararajan16
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 52
SP  - 511
EP  - 522
L1  - http://proceedings.mlr.press/v52/sundararajan16.pdf
UR  - https://proceedings.mlr.press/v52/sundararajan16.html
AB  - Expectation maximization (EM) is a popular algorithm for parameter estimation in situations with incomplete data. The EM algorithm has, despite its popularity, the disadvantage of often converging to local but non-global optima. Several techniques have been proposed to address this problem, for example initializing EM from multiple random starting points and then selecting the run with the highest likelihood. Unfortunately, this method is computationally expensive. In this paper, our goal is to reduce computational cost while at the same time maximizing likelihood. We propose a Genetic Algorithm for Expectation Maximization (GAEM) for learning parameters in Bayesian networks. GAEM combines the global search property of a genetic algorithm with the local search property of EM. We prove GAEM’s global convergence theoretically. Experimentally, we show that GAEM provides significant speed-ups since it tends to select more fit individuals, which converge faster, as parents for the next generation. Specifically, GAEM converges 1.5 to 7 times faster while producing better log-likelihood scores than the traditional EM algorithm.
ER  -
APA
Sundararajan, P.K. & Mengshoel, O.J. (2016). A Genetic Algorithm for Learning Parameters in Bayesian Networks using Expectation Maximization. Proceedings of the Eighth International Conference on Probabilistic Graphical Models, in Proceedings of Machine Learning Research 52:511-522. Available from https://proceedings.mlr.press/v52/sundararajan16.html.