Robust Generation of Dynamical Patterns in Human Motion by a Deep Belief Net


S. Sukhbaatar, T. Makino, K. Aihara, T. Chikayama;
Proceedings of the Asian Conference on Machine Learning, PMLR 20:231-246, 2011.


We propose a Deep Belief Net model for robust motion generation, which consists of two layers of Restricted Boltzmann Machines (RBMs). The lower layer has multiple RBMs that encode real-valued spatial patterns of motion frames into compact representations. The upper layer has one conditional RBM that learns temporal constraints on transitions between those compact representations. This separation of spatial and temporal learning makes it possible to reproduce many attractive dynamical behaviors, such as walking via a stable limit cycle, gait transitions via bifurcation, and synchronization of limbs via phase-locking, while also allowing easy top-down control. We trained the model on human motion capture data, and the results of motion generation are reported here.
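The two-layer architecture described above can be sketched in code. The following is a minimal, illustrative NumPy sketch, not the authors' implementation: it assumes Gaussian-Bernoulli RBMs in the lower layer (trained with one step of contrastive divergence) and a conditional RBM in the upper layer whose autoregressive weights `A` and `B` (standard CRBM notation, assumed here) let past codes condition the next code. All sizes, the history order, and the training loop are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GaussianRBM:
    """Lower layer: encodes a real-valued motion frame into a compact binary code."""
    def __init__(self, n_vis, n_hid):
        self.W = rng.normal(0.0, 0.01, (n_vis, n_hid))
        self.b_hid = np.zeros(n_hid)
        self.b_vis = np.zeros(n_vis)

    def encode(self, v):
        # hidden activation probabilities given real-valued visibles
        return sigmoid(v @ self.W + self.b_hid)

    def decode(self, h):
        # Gaussian visible units (unit variance): mean reconstruction
        return h @ self.W.T + self.b_vis

    def cd1(self, v, lr=0.01):
        # one contrastive-divergence (CD-1) update on a batch of frames
        h0 = self.encode(v)
        h0_s = (rng.random(h0.shape) < h0).astype(float)
        v1 = self.decode(h0_s)
        h1 = self.encode(v1)
        self.W += lr * (v.T @ h0 - v1.T @ h1) / len(v)
        self.b_hid += lr * (h0 - h1).mean(axis=0)
        self.b_vis += lr * (v - v1).mean(axis=0)

class ConditionalRBM:
    """Upper layer: past codes condition the biases through links A (to
    visibles) and B (to hiddens), capturing transition constraints."""
    def __init__(self, n_code, n_hid, order=2):
        self.order = order
        self.W = rng.normal(0.0, 0.01, (n_code, n_hid))
        self.A = np.zeros((order * n_code, n_code))
        self.B = np.zeros((order * n_code, n_hid))
        self.b_hid = np.zeros(n_hid)
        self.b_vis = np.zeros(n_code)

    def next_code(self, past):
        # one generation step: sample hiddens given the history,
        # then a mean-field pass back down yields the next code
        hist = past.reshape(-1)
        h = sigmoid(hist @ self.B + self.b_hid)
        h_s = (rng.random(h.shape) < h).astype(float)
        return sigmoid(h_s @ self.W.T + hist @ self.A + self.b_vis)

# Generation sketch: encode frames, roll the CRBM forward, decode.
rbm = GaussianRBM(n_vis=30, n_hid=16)
crbm = ConditionalRBM(n_code=16, n_hid=32, order=2)
frames = rng.normal(size=(100, 30))   # stand-in for motion capture data
for _ in range(5):
    rbm.cd1(frames)                   # lower-layer training steps
codes = rbm.encode(frames)
next_c = crbm.next_code(codes[:2])    # predict the code following frame 2
next_frame = rbm.decode(next_c)       # decode back to a motion frame
```

In this sketch the spatial model (`GaussianRBM`) and temporal model (`ConditionalRBM`) are trained and used independently, mirroring the separation of spatial and temporal learning that the abstract credits for the model's stable limit cycles and controllability.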
