Robust Generation of Dynamical Patterns in Human Motion by a Deep Belief Nets

Sainbaya Sukhbaatar, Takaki Makino, Kazuyuki Aihara, Takashi Chikayama
Proceedings of the Asian Conference on Machine Learning, PMLR 20:231-246, 2011.

Abstract

We propose a Deep Belief Net model for robust motion generation, which consists of two layers of Restricted Boltzmann Machines (RBMs). The lower layer has multiple RBMs for encoding real-valued spatial patterns of motion frames into compact representations. The upper layer has one conditional RBM for learning temporal constraints on transitions between those compact representations. This separation of spatial and temporal learning makes it possible to reproduce many attractive dynamical behaviors such as walking by a stable limit cycle, a gait transition by bifurcation, synchronization of limbs by phase-locking, and easy top-down control. We trained the model with human motion capture data and the results of motion generation are reported here.
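The two-layer architecture described above can be illustrated with a minimal structural sketch. This is not the authors' implementation: the class names, dimensions, random weights, and the single mean-field pass are all illustrative assumptions, and no training (contrastive divergence) is shown. It only makes the separation of roles concrete: a lower RBM that maps a real-valued frame to a binary code, and an upper conditional RBM whose biases depend on past codes.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GaussianRBM:
    """Lower layer (sketch): encodes a real-valued motion frame
    into a compact binary representation."""
    def __init__(self, n_vis, n_hid):
        self.W = rng.normal(0.0, 0.01, (n_vis, n_hid))
        self.b_h = np.zeros(n_hid)

    def encode(self, v):
        # Deterministic encoding: threshold the hidden activation
        # probabilities instead of sampling (simplification).
        p = sigmoid(v @ self.W + self.b_h)
        return (p > 0.5).astype(float)

class ConditionalRBM:
    """Upper layer (sketch): models transitions between codes.
    Autoregressive weights A turn the recent history of codes
    into dynamic biases on the current visible (code) units."""
    def __init__(self, n_code, n_hid, order=2):
        self.W = rng.normal(0.0, 0.01, (n_code, n_hid))
        self.A = rng.normal(0.0, 0.01, (order * n_code, n_code))
        self.b_v = np.zeros(n_code)
        self.b_h = np.zeros(n_hid)

    def predict_next(self, history):
        # history: list of `order` past code vectors, most recent last.
        past = np.concatenate(history)
        dyn_bias_v = past @ self.A + self.b_v   # history-conditioned biases
        # One mean-field up-down pass (a rough stand-in for Gibbs sampling).
        h = sigmoid(dyn_bias_v @ self.W + self.b_h)
        return sigmoid(h @ self.W.T + dyn_bias_v)
```

Generating motion under such a model would amount to encoding the current frame, predicting the next code from the recent code history, and decoding it back to joint angles; the stable limit cycles reported in the paper emerge from the learned transition dynamics, not from anything in this untrained sketch.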

Cite this Paper


BibTeX
@InProceedings{pmlr-v20-sukhbaatar11,
  title     = {Robust Generation of Dynamical Patterns in Human Motion by a Deep Belief Nets},
  author    = {Sukhbaatar, Sainbaya and Makino, Takaki and Aihara, Kazuyuki and Chikayama, Takashi},
  booktitle = {Proceedings of the Asian Conference on Machine Learning},
  pages     = {231--246},
  year      = {2011},
  editor    = {Hsu, Chun-Nan and Lee, Wee Sun},
  volume    = {20},
  series    = {Proceedings of Machine Learning Research},
  address   = {South Garden Hotels and Resorts, Taoyuan, Taiwan},
  month     = {14--15 Nov},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v20/sukhbaatar11/sukhbaatar11.pdf},
  url       = {https://proceedings.mlr.press/v20/sukhbaatar11.html},
  abstract  = {We propose a Deep Belief Net model for robust motion generation, which consists of two layers of Restricted Boltzmann Machines (RBMs). The lower layer has multiple RBMs for encoding real-valued spatial patterns of motion frames into compact representations. The upper layer has one conditional RBM for learning temporal constraints on transitions between those compact representations. This separation of spatial and temporal learning makes it possible to reproduce many attractive dynamical behaviors such as walking by a stable limit cycle, a gait transition by bifurcation, synchronization of limbs by phase-locking, and easy top-down control. We trained the model with human motion capture data and the results of motion generation are reported here.}
}
Endnote
%0 Conference Paper
%T Robust Generation of Dynamical Patterns in Human Motion by a Deep Belief Nets
%A Sainbaya Sukhbaatar
%A Takaki Makino
%A Kazuyuki Aihara
%A Takashi Chikayama
%B Proceedings of the Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2011
%E Chun-Nan Hsu
%E Wee Sun Lee
%F pmlr-v20-sukhbaatar11
%I PMLR
%P 231--246
%U https://proceedings.mlr.press/v20/sukhbaatar11.html
%V 20
%X We propose a Deep Belief Net model for robust motion generation, which consists of two layers of Restricted Boltzmann Machines (RBMs). The lower layer has multiple RBMs for encoding real-valued spatial patterns of motion frames into compact representations. The upper layer has one conditional RBM for learning temporal constraints on transitions between those compact representations. This separation of spatial and temporal learning makes it possible to reproduce many attractive dynamical behaviors such as walking by a stable limit cycle, a gait transition by bifurcation, synchronization of limbs by phase-locking, and easy top-down control. We trained the model with human motion capture data and the results of motion generation are reported here.
RIS
TY  - CPAPER
TI  - Robust Generation of Dynamical Patterns in Human Motion by a Deep Belief Nets
AU  - Sainbaya Sukhbaatar
AU  - Takaki Makino
AU  - Kazuyuki Aihara
AU  - Takashi Chikayama
BT  - Proceedings of the Asian Conference on Machine Learning
DA  - 2011/11/17
ED  - Chun-Nan Hsu
ED  - Wee Sun Lee
ID  - pmlr-v20-sukhbaatar11
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 20
SP  - 231
EP  - 246
L1  - http://proceedings.mlr.press/v20/sukhbaatar11/sukhbaatar11.pdf
UR  - https://proceedings.mlr.press/v20/sukhbaatar11.html
AB  - We propose a Deep Belief Net model for robust motion generation, which consists of two layers of Restricted Boltzmann Machines (RBMs). The lower layer has multiple RBMs for encoding real-valued spatial patterns of motion frames into compact representations. The upper layer has one conditional RBM for learning temporal constraints on transitions between those compact representations. This separation of spatial and temporal learning makes it possible to reproduce many attractive dynamical behaviors such as walking by a stable limit cycle, a gait transition by bifurcation, synchronization of limbs by phase-locking, and easy top-down control. We trained the model with human motion capture data and the results of motion generation are reported here.
ER  -
APA
Sukhbaatar, S., Makino, T., Aihara, K. &amp; Chikayama, T. (2011). Robust Generation of Dynamical Patterns in Human Motion by a Deep Belief Nets. Proceedings of the Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 20:231-246. Available from https://proceedings.mlr.press/v20/sukhbaatar11.html.