Learning Causal Domain-Invariant Temporal Dynamics for Few-Shot Action Recognition

Yuke Li, Guangyi Chen, Ben Abramowitz, Stefano Anzellotti, Donglai Wei
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:27499-27514, 2024.

Abstract

Few-shot action recognition aims to quickly adapt a pre-trained model to novel data under a distribution shift using only a limited number of samples. A key challenge is identifying and leveraging the transferable knowledge learned by the pre-trained model. We therefore propose CDTD, or Causal Domain-Invariant Temporal Dynamics, for knowledge transfer. To identify the temporally invariant and variant representations, we employ causal representation learning methods for unsupervised pretraining, and then tune the classifier with supervision in the next stage. Specifically, we assume that the domain information can be well estimated and that the pre-trained temporal dynamics generation and transition models transfer well. During adaptation, we fix the transferable temporal dynamics and update only the image encoder and domain estimator. The efficacy of our approach is demonstrated by the superior accuracy of CDTD over leading alternatives on standard few-shot action recognition datasets.
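The adaptation stage described above amounts to partitioning the model's parameters: the pre-trained temporal-dynamics modules are frozen as transferable knowledge, while the image encoder and domain estimator are fine-tuned on the few-shot support set. The following is a minimal, framework-free sketch of that partitioning; all module and parameter names are illustrative assumptions, not taken from the paper's code.

```python
# Hypothetical sketch of the CDTD adaptation stage: split parameters into a
# frozen set (transferable temporal dynamics) and a trainable set (image
# encoder, domain estimator, classifier). Names are illustrative only.

def partition_parameters(model_params):
    """Return (frozen, trainable) parameter dicts for few-shot adaptation."""
    # Assumed naming convention: dynamics modules carry these prefixes.
    frozen_prefixes = ("dynamics_generator.", "dynamics_transition.")
    frozen, trainable = {}, {}
    for name, value in model_params.items():
        if name.startswith(frozen_prefixes):
            frozen[name] = value       # kept fixed during adaptation
        else:
            trainable[name] = value    # updated on the support set
    return frozen, trainable

params = {
    "image_encoder.conv1.weight": 0.1,
    "domain_estimator.fc.weight": 0.2,
    "dynamics_generator.rnn.weight": 0.3,
    "dynamics_transition.mlp.weight": 0.4,
    "classifier.weight": 0.5,
}
frozen, trainable = partition_parameters(params)
print(sorted(frozen))      # the two dynamics modules stay fixed
print(sorted(trainable))   # encoder, domain estimator, classifier are tuned
```

In a real training loop, only the `trainable` set would be passed to the optimizer, which realizes the freeze-and-update scheme without modifying the dynamics modules.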

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-li24h,
  title     = {Learning Causal Domain-Invariant Temporal Dynamics for Few-Shot Action Recognition},
  author    = {Li, Yuke and Chen, Guangyi and Abramowitz, Ben and Anzellotti, Stefano and Wei, Donglai},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {27499--27514},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24h/li24h.pdf},
  url       = {https://proceedings.mlr.press/v235/li24h.html},
  abstract  = {Few-shot action recognition aims to quickly adapt a pre-trained model to novel data under a distribution shift using only a limited number of samples. A key challenge is identifying and leveraging the transferable knowledge learned by the pre-trained model. We therefore propose CDTD, or Causal Domain-Invariant Temporal Dynamics, for knowledge transfer. To identify the temporally invariant and variant representations, we employ causal representation learning methods for unsupervised pretraining, and then tune the classifier with supervision in the next stage. Specifically, we assume that the domain information can be well estimated and that the pre-trained temporal dynamics generation and transition models transfer well. During adaptation, we fix the transferable temporal dynamics and update only the image encoder and domain estimator. The efficacy of our approach is demonstrated by the superior accuracy of CDTD over leading alternatives on standard few-shot action recognition datasets.}
}
Endnote
%0 Conference Paper
%T Learning Causal Domain-Invariant Temporal Dynamics for Few-Shot Action Recognition
%A Yuke Li
%A Guangyi Chen
%A Ben Abramowitz
%A Stefano Anzellotti
%A Donglai Wei
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-li24h
%I PMLR
%P 27499--27514
%U https://proceedings.mlr.press/v235/li24h.html
%V 235
%X Few-shot action recognition aims to quickly adapt a pre-trained model to novel data under a distribution shift using only a limited number of samples. A key challenge is identifying and leveraging the transferable knowledge learned by the pre-trained model. We therefore propose CDTD, or Causal Domain-Invariant Temporal Dynamics, for knowledge transfer. To identify the temporally invariant and variant representations, we employ causal representation learning methods for unsupervised pretraining, and then tune the classifier with supervision in the next stage. Specifically, we assume that the domain information can be well estimated and that the pre-trained temporal dynamics generation and transition models transfer well. During adaptation, we fix the transferable temporal dynamics and update only the image encoder and domain estimator. The efficacy of our approach is demonstrated by the superior accuracy of CDTD over leading alternatives on standard few-shot action recognition datasets.
APA
Li, Y., Chen, G., Abramowitz, B., Anzellotti, S., & Wei, D. (2024). Learning causal domain-invariant temporal dynamics for few-shot action recognition. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research, 235:27499-27514. Available from https://proceedings.mlr.press/v235/li24h.html.