Prototypical Transformer As Unified Motion Learners

Cheng Han, Yawen Lu, Guohao Sun, James Chenhao Liang, Zhiwen Cao, Qifan Wang, Qiang Guan, Sohail Dianat, Raghuveer Rao, Tong Geng, Zhiqiang Tao, Dongfang Liu
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:17416-17436, 2024.

Abstract

In this work, we introduce the Prototypical Transformer (ProtoFormer), a general and unified framework that approaches various motion tasks from a prototype perspective. ProtoFormer seamlessly integrates prototype learning with Transformer by thoughtfully considering motion dynamics, introducing two innovative designs. First, Cross-Attention Prototyping discovers prototypes based on signature motion patterns, providing transparency in understanding motion scenes. Second, Latent Synchronization guides feature representation learning via prototypes, effectively mitigating the problem of motion uncertainty. Empirical results demonstrate that our approach achieves competitive performance on popular motion tasks such as optical flow and scene depth. Furthermore, it exhibits generality across various downstream tasks, including object tracking and video stabilization.

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-han24d,
  title     = {Prototypical Transformer As Unified Motion Learners},
  author    = {Han, Cheng and Lu, Yawen and Sun, Guohao and Liang, James Chenhao and Cao, Zhiwen and Wang, Qifan and Guan, Qiang and Dianat, Sohail and Rao, Raghuveer and Geng, Tong and Tao, Zhiqiang and Liu, Dongfang},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {17416--17436},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/han24d/han24d.pdf},
  url       = {https://proceedings.mlr.press/v235/han24d.html},
  abstract  = {In this work, we introduce the Prototypical Transformer (ProtoFormer), a general and unified framework that approaches various motion tasks from a prototype perspective. ProtoFormer seamlessly integrates prototype learning with Transformer by thoughtfully considering motion dynamics, introducing two innovative designs. First, Cross-Attention Prototyping discovers prototypes based on signature motion patterns, providing transparency in understanding motion scenes. Second, Latent Synchronization guides feature representation learning via prototypes, effectively mitigating the problem of motion uncertainty. Empirical results demonstrate that our approach achieves competitive performance on popular motion tasks such as optical flow and scene depth. Furthermore, it exhibits generality across various downstream tasks, including object tracking and video stabilization.}
}
Endnote
%0 Conference Paper
%T Prototypical Transformer As Unified Motion Learners
%A Cheng Han
%A Yawen Lu
%A Guohao Sun
%A James Chenhao Liang
%A Zhiwen Cao
%A Qifan Wang
%A Qiang Guan
%A Sohail Dianat
%A Raghuveer Rao
%A Tong Geng
%A Zhiqiang Tao
%A Dongfang Liu
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-han24d
%I PMLR
%P 17416--17436
%U https://proceedings.mlr.press/v235/han24d.html
%V 235
%X In this work, we introduce the Prototypical Transformer (ProtoFormer), a general and unified framework that approaches various motion tasks from a prototype perspective. ProtoFormer seamlessly integrates prototype learning with Transformer by thoughtfully considering motion dynamics, introducing two innovative designs. First, Cross-Attention Prototyping discovers prototypes based on signature motion patterns, providing transparency in understanding motion scenes. Second, Latent Synchronization guides feature representation learning via prototypes, effectively mitigating the problem of motion uncertainty. Empirical results demonstrate that our approach achieves competitive performance on popular motion tasks such as optical flow and scene depth. Furthermore, it exhibits generality across various downstream tasks, including object tracking and video stabilization.
APA
Han, C., Lu, Y., Sun, G., Liang, J.C., Cao, Z., Wang, Q., Guan, Q., Dianat, S., Rao, R., Geng, T., Tao, Z. & Liu, D. (2024). Prototypical Transformer As Unified Motion Learners. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:17416-17436. Available from https://proceedings.mlr.press/v235/han24d.html.