Expectation Maximization of Forward Decoding Kernel Machines

Shantanu Chakrabartty, Gert Cauwenberghs
Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics, PMLR R4:65-71, 2003.

Abstract

Forward Decoding Kernel Machines (FDKM) combine large-margin kernel classifiers with Hidden Markov Models (HMM) for Maximum a Posteriori (MAP) adaptive sequence estimation. This paper proposes a variant of FDKM training using Expectation-Maximization (EM). Parameterization of the expectation step controls the temporal extent of the context used in correcting noisy and missing labels in the training sequence. Experiments with EM-FDKM on TIMIT phone sequence data demonstrate up to 10% improvement in classification performance over FDKM trained with hard transitions between labels.
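The forward decoding the abstract refers to can be illustrated with a minimal sketch (not the authors' code): class posteriors are propagated through a transition-conditioned classifier, and the resulting soft posteriors are what an EM E-step would use in place of hard training labels. The matrix `P` here is a random stand-in for the kernel classifier's transition-conditional probability estimates.

```python
import numpy as np

# Illustrative sketch of FDKM-style forward decoding (assumptions, not the
# paper's implementation). alpha[t, i] is the forward probability of class i
# at time t; P[t, i, j] stands in for a kernel-classifier estimate of
# P(class i at t | class j at t-1, x_t), normalized over i.

rng = np.random.default_rng(0)
T, K = 6, 3                        # sequence length, number of classes

# Random placeholder for the classifier's conditional probability outputs.
P = rng.random((T, K, K))
P /= P.sum(axis=1, keepdims=True)  # normalize over the current-class axis

alpha = np.zeros((T, K))
alpha[0] = P[0] @ np.full(K, 1.0 / K)   # uniform prior over the initial class
for t in range(1, T):
    alpha[t] = P[t] @ alpha[t - 1]      # forward recursion
    alpha[t] /= alpha[t].sum()          # renormalize for numerical stability

# alpha[t] gives soft class posteriors at each time step; an EM-style E-step
# would feed such soft posteriors back as corrected training labels.
print(alpha[-1])
```

The renormalization inside the loop is redundant in exact arithmetic (each `P[t]` is column-stochastic) but guards against floating-point drift over long sequences.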

Cite this Paper


BibTeX
@InProceedings{pmlr-vR4-chakrabartty03a,
  title     = {Expectation Maximization of Forward Decoding Kernel Machines},
  author    = {Chakrabartty, Shantanu and Cauwenberghs, Gert},
  booktitle = {Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics},
  pages     = {65--71},
  year      = {2003},
  editor    = {Bishop, Christopher M. and Frey, Brendan J.},
  volume    = {R4},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--06 Jan},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/r4/chakrabartty03a/chakrabartty03a.pdf},
  url       = {https://proceedings.mlr.press/r4/chakrabartty03a.html},
  abstract  = {Forward Decoding Kernel Machines (FDKM) combine large-margin kernel classifiers with Hidden Markov Models (HMM) for Maximum a Posteriori (MAP) adaptive sequence estimation. This paper proposes a variant of FDKM training using Expectation-Maximization (EM). Parameterization of the expectation step controls the temporal extent of the context used in correcting noisy and missing labels in the training sequence. Experiments with EM-FDKM on TIMIT phone sequence data demonstrate up to 10\% improvement in classification performance over FDKM trained with hard transitions between labels.},
  note      = {Reissued by PMLR on 01 April 2021.}
}
Endnote
%0 Conference Paper %T Expectation Maximization of Forward Decoding Kernel Machines %A Shantanu Chakrabartty %A Gert Cauwenberghs %B Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics %C Proceedings of Machine Learning Research %D 2003 %E Christopher M. Bishop %E Brendan J. Frey %F pmlr-vR4-chakrabartty03a %I PMLR %P 65--71 %U https://proceedings.mlr.press/r4/chakrabartty03a.html %V R4 %X Forward Decoding Kernel Machines (FDKM) combine large-margin kernel classifiers with Hidden Markov Models (HMM) for Maximum a Posteriori (MAP) adaptive sequence estimation. This paper proposes a variant of FDKM training using Expectation-Maximization (EM). Parameterization of the expectation step controls the temporal extent of the context used in correcting noisy and missing labels in the training sequence. Experiments with EM-FDKM on TIMIT phone sequence data demonstrate up to 10% improvement in classification performance over FDKM trained with hard transitions between labels. %Z Reissued by PMLR on 01 April 2021.
APA
Chakrabartty, S. & Cauwenberghs, G. (2003). Expectation Maximization of Forward Decoding Kernel Machines. Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research R4:65-71. Available from https://proceedings.mlr.press/r4/chakrabartty03a.html. Reissued by PMLR on 01 April 2021.