Evaluation of Test-Time Adaptation Under Computational Time Constraints

Motasem Alfarra, Hani Itani, Alejandro Pardo, Shyma Yaser Alhuwaider, Merey Ramazanova, Juan Camilo Perez, Zhipeng Cai, Matthias Müller, Bernard Ghanem
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:976-991, 2024.

Abstract

This paper proposes a novel online evaluation protocol for Test Time Adaptation (TTA) methods, which penalizes slower methods by providing them with fewer samples for adaptation. TTA methods leverage unlabeled data at test time to adapt to distribution shifts. Though many effective methods have been proposed, their impressive performance usually comes at the cost of significantly increased computation budgets. Current evaluation protocols overlook the effect of this extra computation cost, affecting their real-world applicability. To address this issue, we propose a more realistic evaluation protocol for TTA methods, where data is received in an online fashion from a constant-speed data stream, thereby accounting for the method’s adaptation speed. We apply our proposed protocol to benchmark several TTA methods on multiple datasets and scenarios. Extensive experiments show that, when accounting for inference speed, simple and fast approaches can outperform more sophisticated but slower methods. For example, SHOT from 2020 outperforms the state-of-the-art method SAR from 2023 under our online setting. Our results reveal the importance of developing practical TTA methods that are both accurate and efficient.
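The core idea of the protocol can be illustrated with a toy simulation: if a method needs several stream ticks to adapt on one batch, the batches arriving in the meantime must be predicted without adaptation. The sketch below is a minimal illustration based only on the abstract's description; the function name, parameters, and accuracy values are hypothetical placeholders, not the paper's actual protocol or results.

```python
def simulate_online_stream(num_batches, ticks_per_adapt, acc_adapted, acc_skipped):
    """Toy model of a constant-speed stream: a method that takes
    `ticks_per_adapt` stream ticks per adaptation step can only adapt on a
    fraction of the incoming batches; the rest are scored at the lower,
    unadapted accuracy. All accuracy values are illustrative placeholders."""
    correct = 0.0
    busy_until = 0  # stream tick at which the method becomes free again
    for t in range(num_batches):
        if t >= busy_until:
            correct += acc_adapted          # method adapts on this batch
            busy_until = t + ticks_per_adapt
        else:
            correct += acc_skipped          # stream moved on; no adaptation
    return correct / num_batches

# A fast method (1 tick/batch) vs. a slower but per-batch-stronger one (4 ticks/batch):
fast = simulate_online_stream(1000, 1, acc_adapted=0.62, acc_skipped=0.40)
slow = simulate_online_stream(1000, 4, acc_adapted=0.70, acc_skipped=0.40)
```

Under these placeholder numbers the slow method adapts on only a quarter of the batches, so its average accuracy falls below the fast method's despite being stronger on the batches it does adapt to, which is the qualitative effect the protocol is designed to expose.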

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-alfarra24a, title = {Evaluation of Test-Time Adaptation Under Computational Time Constraints}, author = {Alfarra, Motasem and Itani, Hani and Pardo, Alejandro and Alhuwaider, Shyma Yaser and Ramazanova, Merey and Perez, Juan Camilo and Cai, Zhipeng and M\"{u}ller, Matthias and Ghanem, Bernard}, booktitle = {Proceedings of the 41st International Conference on Machine Learning}, pages = {976--991}, year = {2024}, editor = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix}, volume = {235}, series = {Proceedings of Machine Learning Research}, month = {21--27 Jul}, publisher = {PMLR}, pdf = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/alfarra24a/alfarra24a.pdf}, url = {https://proceedings.mlr.press/v235/alfarra24a.html}, abstract = {This paper proposes a novel online evaluation protocol for Test Time Adaptation (TTA) methods, which penalizes slower methods by providing them with fewer samples for adaptation. TTA methods leverage unlabeled data at test time to adapt to distribution shifts. Though many effective methods have been proposed, their impressive performance usually comes at the cost of significantly increased computation budgets. Current evaluation protocols overlook the effect of this extra computation cost, affecting their real-world applicability. To address this issue, we propose a more realistic evaluation protocol for TTA methods, where data is received in an online fashion from a constant-speed data stream, thereby accounting for the method’s adaptation speed. We apply our proposed protocol to benchmark several TTA methods on multiple datasets and scenarios. Extensive experiments show that, when accounting for inference speed, simple and fast approaches can outperform more sophisticated but slower methods. For example, SHOT from 2020 outperforms the state-of-the-art method SAR from 2023 under our online setting. Our results reveal the importance of developing practical TTA methods that are both accurate and efficient.} }
Endnote
%0 Conference Paper %T Evaluation of Test-Time Adaptation Under Computational Time Constraints %A Motasem Alfarra %A Hani Itani %A Alejandro Pardo %A Shyma Yaser Alhuwaider %A Merey Ramazanova %A Juan Camilo Perez %A Zhipeng Cai %A Matthias Müller %A Bernard Ghanem %B Proceedings of the 41st International Conference on Machine Learning %C Proceedings of Machine Learning Research %D 2024 %E Ruslan Salakhutdinov %E Zico Kolter %E Katherine Heller %E Adrian Weller %E Nuria Oliver %E Jonathan Scarlett %E Felix Berkenkamp %F pmlr-v235-alfarra24a %I PMLR %P 976--991 %U https://proceedings.mlr.press/v235/alfarra24a.html %V 235 %X This paper proposes a novel online evaluation protocol for Test Time Adaptation (TTA) methods, which penalizes slower methods by providing them with fewer samples for adaptation. TTA methods leverage unlabeled data at test time to adapt to distribution shifts. Though many effective methods have been proposed, their impressive performance usually comes at the cost of significantly increased computation budgets. Current evaluation protocols overlook the effect of this extra computation cost, affecting their real-world applicability. To address this issue, we propose a more realistic evaluation protocol for TTA methods, where data is received in an online fashion from a constant-speed data stream, thereby accounting for the method’s adaptation speed. We apply our proposed protocol to benchmark several TTA methods on multiple datasets and scenarios. Extensive experiments show that, when accounting for inference speed, simple and fast approaches can outperform more sophisticated but slower methods. For example, SHOT from 2020 outperforms the state-of-the-art method SAR from 2023 under our online setting. Our results reveal the importance of developing practical TTA methods that are both accurate and efficient.
APA
Alfarra, M., Itani, H., Pardo, A., Alhuwaider, S.Y., Ramazanova, M., Perez, J.C., Cai, Z., Müller, M. & Ghanem, B. (2024). Evaluation of Test-Time Adaptation Under Computational Time Constraints. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:976-991. Available from https://proceedings.mlr.press/v235/alfarra24a.html.