An Infinite Hidden Markov Model With Similarity-Biased Transitions

Colin Reimer Dawson, Chaofan Huang, Clayton T. Morrison
Proceedings of the 34th International Conference on Machine Learning, PMLR 70:942-950, 2017.

Abstract

We describe a generalization of the Hierarchical Dirichlet Process Hidden Markov Model (HDP-HMM) which is able to encode prior information that state transitions are more likely between “nearby” states. This is accomplished by defining a similarity function on the state space and scaling transition probabilities by pairwise similarities, thereby inducing correlations among the transition distributions. We present an augmented data representation of the model as a Markov Jump Process in which: (1) some jump attempts fail, and (2) the probability of success is proportional to the similarity between the source and destination states. This augmentation restores conditional conjugacy and admits a simple Gibbs sampler. We evaluate the model and inference method on a speaker diarization task and a “harmonic parsing” task using four-part chorale data, as well as on several synthetic datasets, achieving favorable comparisons to existing models.
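The core idea of similarity-biased transitions can be sketched numerically: scale each row of a base transition matrix elementwise by a pairwise similarity function and renormalize. This is a minimal illustrative sketch, not the paper's full HDP-HMM construction; the 1-D state "locations" and the Gaussian kernel are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

K = 5                                   # truncated number of states
locations = np.arange(K, dtype=float)   # hypothetical 1-D state embedding

# Base transition weights: each row is an ordinary transition distribution
base = rng.dirichlet(np.ones(K), size=K)

# Pairwise similarity phi(i, j): "nearby" states are more similar
tau = 1.0
phi = np.exp(-((locations[:, None] - locations[None, :]) ** 2) / (2 * tau**2))

# Bias each row by similarity to the source state, then renormalize,
# so transitions between nearby states become more probable
biased = base * phi
biased /= biased.sum(axis=1, keepdims=True)

# Rows remain valid transition distributions
assert np.allclose(biased.sum(axis=1), 1.0)
```

Because every row is scaled by the same similarity matrix, the rows are no longer independent a priori, which is the correlation among transition distributions the abstract refers to; the paper's augmented Markov Jump Process representation is what makes posterior inference over this coupling tractable.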

Cite this Paper


BibTeX
@InProceedings{pmlr-v70-dawson17a,
  title     = {An Infinite Hidden {M}arkov Model With Similarity-Biased Transitions},
  author    = {Colin Reimer Dawson and Chaofan Huang and Clayton T. Morrison},
  booktitle = {Proceedings of the 34th International Conference on Machine Learning},
  pages     = {942--950},
  year      = {2017},
  editor    = {Precup, Doina and Teh, Yee Whye},
  volume    = {70},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--11 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v70/dawson17a/dawson17a.pdf},
  url       = {https://proceedings.mlr.press/v70/dawson17a.html},
  abstract  = {We describe a generalization of the Hierarchical Dirichlet Process Hidden Markov Model (HDP-HMM) which is able to encode prior information that state transitions are more likely between “nearby” states. This is accomplished by defining a similarity function on the state space and scaling transition probabilities by pairwise similarities, thereby inducing correlations among the transition distributions. We present an augmented data representation of the model as a Markov Jump Process in which: (1) some jump attempts fail, and (2) the probability of success is proportional to the similarity between the source and destination states. This augmentation restores conditional conjugacy and admits a simple Gibbs sampler. We evaluate the model and inference method on a speaker diarization task and a “harmonic parsing” task using four-part chorale data, as well as on several synthetic datasets, achieving favorable comparisons to existing models.}
}
Endnote
%0 Conference Paper
%T An Infinite Hidden Markov Model With Similarity-Biased Transitions
%A Colin Reimer Dawson
%A Chaofan Huang
%A Clayton T. Morrison
%B Proceedings of the 34th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2017
%E Doina Precup
%E Yee Whye Teh
%F pmlr-v70-dawson17a
%I PMLR
%P 942--950
%U https://proceedings.mlr.press/v70/dawson17a.html
%V 70
%X We describe a generalization of the Hierarchical Dirichlet Process Hidden Markov Model (HDP-HMM) which is able to encode prior information that state transitions are more likely between “nearby” states. This is accomplished by defining a similarity function on the state space and scaling transition probabilities by pairwise similarities, thereby inducing correlations among the transition distributions. We present an augmented data representation of the model as a Markov Jump Process in which: (1) some jump attempts fail, and (2) the probability of success is proportional to the similarity between the source and destination states. This augmentation restores conditional conjugacy and admits a simple Gibbs sampler. We evaluate the model and inference method on a speaker diarization task and a “harmonic parsing” task using four-part chorale data, as well as on several synthetic datasets, achieving favorable comparisons to existing models.
APA
Dawson, C.R., Huang, C. & Morrison, C.T. (2017). An Infinite Hidden Markov Model With Similarity-Biased Transitions. Proceedings of the 34th International Conference on Machine Learning, in Proceedings of Machine Learning Research 70:942-950. Available from https://proceedings.mlr.press/v70/dawson17a.html.