Reduced-Rank Hidden Markov Models

Sajid Siddiqi, Byron Boots, Geoffrey Gordon
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, PMLR 9:741-748, 2010.

Abstract

Hsu et al. (2009) recently proposed an efficient, accurate spectral learning algorithm for Hidden Markov Models (HMMs). In this paper we relax their assumptions and prove a tighter finite-sample error bound for the case of Reduced-Rank HMMs, i.e., HMMs with low-rank transition matrices. Since rank-$k$ RR-HMMs are a larger class of models than $k$-state HMMs while being equally efficient to work with, this relaxation greatly increases the learning algorithm’s scope. In addition, we generalize the algorithm and bounds to models where multiple observations are needed to disambiguate state, and to models that emit multivariate real-valued observations. Finally we prove consistency for learning Predictive State Representations, an even larger class of models. Experiments on synthetic data and a toy video, as well as on difficult robot vision data, yield accurate models that compare favorably with alternatives in simulation quality and prediction accuracy.
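To make the spectral approach the paper builds on concrete, here is a minimal NumPy sketch (not from the paper; an illustrative reconstruction) of the observable-operator recipe of Hsu et al. (2009), run with a rank-k SVD as in the RR-HMM setting: estimate the low-order observation statistics P_1, P_{2,1}, and P_{3,x,1} from sampled triples, take the top-k left singular vectors U of P_{2,1}, and form one operator B_x = (U^T P_{3,x,1})(U^T P_{2,1})^+ per observation symbol. The function names and the triples input format are hypothetical.

import numpy as np

def learn_rr_hmm(triples, n_obs, k):
    """Sketch of spectral learning in the style of Hsu et al. (2009),
    applied with rank k (possibly smaller than the number of states),
    as in the reduced-rank HMM setting.

    triples : integer array of shape (N, 3); rows are consecutive
              observations (x1, x2, x3) sampled from the process.
    n_obs   : size of the discrete observation alphabet.
    k       : assumed rank of the transition matrix.
    """
    N = len(triples)
    # Empirical estimates of P_1[i] = Pr[x1 = i],
    # P_{2,1}[i, j] = Pr[x2 = i, x1 = j], and
    # P_{3,x,1}[i, j] = Pr[x3 = i, x2 = x, x1 = j].
    P1 = np.zeros(n_obs)
    P21 = np.zeros((n_obs, n_obs))
    P3x1 = np.zeros((n_obs, n_obs, n_obs))  # first index is the middle symbol x
    for x1, x2, x3 in triples:
        P1[x1] += 1.0 / N
        P21[x2, x1] += 1.0 / N
        P3x1[x2][x3, x1] += 1.0 / N

    # Rank-k SVD of P_{2,1}; the columns of U span the learned state space.
    U, _, _ = np.linalg.svd(P21)
    U = U[:, :k]

    # Observable representation: initial vector b_1, normalizer b_inf,
    # and one k-by-k operator B_x per observation symbol.
    UP21_pinv = np.linalg.pinv(U.T @ P21)
    b1 = U.T @ P1
    binf = np.linalg.pinv(P21.T @ U) @ P1
    Bx = [U.T @ P3x1[x] @ UP21_pinv for x in range(n_obs)]
    return b1, binf, Bx

def joint_prob(seq, b1, binf, Bx):
    # Pr[x_1, ..., x_t] is approximated by b_inf^T B_{x_t} ... B_{x_1} b_1.
    b = b1
    for x in seq:
        b = Bx[x] @ b
    return float(binf @ b)

The point of the relaxation described in the abstract is that this same recipe remains consistent, with a tighter finite-sample bound, when k is the rank of the transition matrix rather than the number of hidden states, so a rank-k RR-HMM with many states is as cheap to learn and filter with as a k-state HMM.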

Cite this Paper


BibTeX
@InProceedings{pmlr-v9-siddiqi10a,
  title     = {Reduced-Rank Hidden Markov Models},
  author    = {Siddiqi, Sajid and Boots, Byron and Gordon, Geoffrey},
  booktitle = {Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics},
  pages     = {741--748},
  year      = {2010},
  editor    = {Teh, Yee Whye and Titterington, Mike},
  volume    = {9},
  series    = {Proceedings of Machine Learning Research},
  address   = {Chia Laguna Resort, Sardinia, Italy},
  month     = {13--15 May},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v9/siddiqi10a/siddiqi10a.pdf},
  url       = {https://proceedings.mlr.press/v9/siddiqi10a.html}
}
Endnote
%0 Conference Paper
%T Reduced-Rank Hidden Markov Models
%A Sajid Siddiqi
%A Byron Boots
%A Geoffrey Gordon
%B Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2010
%E Yee Whye Teh
%E Mike Titterington
%F pmlr-v9-siddiqi10a
%I PMLR
%P 741--748
%U https://proceedings.mlr.press/v9/siddiqi10a.html
%V 9
RIS
TY - CPAPER
TI - Reduced-Rank Hidden Markov Models
AU - Sajid Siddiqi
AU - Byron Boots
AU - Geoffrey Gordon
BT - Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics
DA - 2010/03/31
ED - Yee Whye Teh
ED - Mike Titterington
ID - pmlr-v9-siddiqi10a
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 9
SP - 741
EP - 748
L1 - http://proceedings.mlr.press/v9/siddiqi10a/siddiqi10a.pdf
UR - https://proceedings.mlr.press/v9/siddiqi10a.html
ER -
APA
Siddiqi, S., Boots, B. & Gordon, G. (2010). Reduced-Rank Hidden Markov Models. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 9:741-748. Available from https://proceedings.mlr.press/v9/siddiqi10a.html.