An Infinite Hidden Markov Model With Similarity-Biased Transitions
Proceedings of the 34th International Conference on Machine Learning, PMLR 70:942-950, 2017.
Abstract
We describe a generalization of the Hierarchical Dirichlet Process Hidden Markov Model (HDP-HMM) which is able to encode prior information that state transitions are more likely between “nearby” states. This is accomplished by defining a similarity function on the state space and scaling transition probabilities by pairwise similarities, thereby inducing correlations among the transition distributions. We present an augmented data representation of the model as a Markov Jump Process in which: (1) some jump attempts fail, and (2) the probability of success is proportional to the similarity between the source and destination states. This augmentation restores conditional conjugacy and admits a simple Gibbs sampler. We evaluate the model and inference method on a speaker diarization task and a “harmonic parsing” task using four-part chorale data, as well as on several synthetic datasets, achieving favorable comparisons to existing models.
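The two constructions described above are easy to state concretely: the biased transition probability out of state j into state k is proportional to the base HDP-HMM probability times the pairwise similarity, pi(j,k) * phi(j,k), and the failed-jump augmentation corresponds to a rejection sampler whose acceptance probability is that similarity. Below is a minimal NumPy sketch of this idea; the names pi and phi, the finite truncation of the state space, and the assumption that similarities lie in [0, 1] are ours for illustration, not taken from the paper.

```python
import numpy as np

def biased_transitions(pi, phi):
    """Scale each row of a base transition matrix `pi` by pairwise
    similarities `phi` and renormalize, making transitions between
    similar states more probable."""
    scaled = pi * phi  # elementwise: pi[j, k] * phi[j, k]
    return scaled / scaled.sum(axis=1, keepdims=True)

def sample_next_state(j, pi, phi, rng):
    """Rejection sampler targeting the similarity-biased row j.

    Propose a destination k from the base row pi[j] and accept the
    jump with probability phi[j, k] (assumed to lie in [0, 1]);
    a rejected proposal is a "failed jump attempt" and is re-drawn.
    The accepted draw has probability proportional to
    pi[j, k] * phi[j, k], i.e. the biased transition distribution.
    """
    while True:
        k = rng.choice(len(pi), p=pi[j])
        if rng.random() < phi[j, k]:
            return k

rng = np.random.default_rng(0)
pi = np.array([[0.5, 0.3, 0.2],   # toy 3-state base transition matrix
               [0.2, 0.6, 0.2],
               [0.3, 0.3, 0.4]])
phi = np.array([[1.0, 0.8, 0.1],  # toy symmetric similarity matrix
                [0.8, 1.0, 0.8],
                [0.1, 0.8, 1.0]])
print(biased_transitions(pi, phi))     # biased rows, each summing to 1
print(sample_next_state(0, pi, phi, rng))
```

The rejection step is what restores tractability: the unknown row-wise normalizing constants never need to be computed, since failed attempts can be instantiated as latent variables, which is what permits the simple Gibbs sampler described in the paper.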