Search-TTA: A Multi-Modal Test-Time Adaptation Framework for Visual Search in the Wild

Derek Ming Siang Tan, Shailes Shailesh, Boyang Liu, Alok Raj, Qi Xuan Ang, Weiheng Dai, Tanishq Duhan, Jimmy Chiun, Yuhong Cao, Florian Shkurti, Guillaume Adrien Sartoretti
Proceedings of The 9th Conference on Robot Learning, PMLR 305:2093-2120, 2025.

Abstract

To perform autonomous visual search for environmental monitoring, a robot may leverage satellite imagery as a prior map. This can help inform coarse, high-level search and exploration strategies, even when such images lack sufficient resolution to allow fine-grained, explicit visual recognition of targets. However, using satellite images to direct visual search poses several challenges. For one, targets that are not directly visible in satellite images are underrepresented (compared to real life) in most existing datasets, so vision models trained on these datasets fail to reason effectively from indirect visual cues. Furthermore, approaches that leverage large Vision Language Models (VLMs) for generalization may yield inaccurate outputs due to hallucination, leading to inefficient search. To address these challenges, we introduce Search-TTA, a multimodal test-time adaptation framework that can accept text and/or image input. First, we pretrain a remote sensing image encoder to align with CLIP’s visual encoder, producing probability distributions of target presence that guide visual search. Second, our framework dynamically refines CLIP’s predictions during search using a test-time adaptation mechanism: through a feedback loop inspired by Spatial Poisson Point Processes, gradient updates (weighted by uncertainty) correct potentially inaccurate predictions and improve search performance. To validate Search-TTA’s performance, we curate a visual search dataset based on internet-scale ecological data. We find that Search-TTA improves planner performance by up to 9.7%, particularly in cases with poor initial CLIP predictions, and achieves performance comparable to state-of-the-art VLMs. Finally, we deploy Search-TTA on a real UAV via hardware-in-the-loop testing, simulating its operation within a large-scale simulation that provides onboard sensing.
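The abstract describes the adaptation mechanism only at a high level. As a rough illustration, the sketch below shows one uncertainty-weighted test-time update driven by a Spatial Poisson Point Process likelihood over the cells a robot has already sensed. It is a minimal PyTorch sketch under stated assumptions: the grid size, the score_map stand-in for CLIP-derived similarity logits, the softplus intensity, and the entropy-based weighting are illustrative choices, and for simplicity the update acts directly on the map rather than on model parameters; it does not reproduce the authors' implementation.

import torch
import torch.nn.functional as F

H, W = 32, 32                                       # coarse satellite-map grid (assumed size)
score_map = torch.zeros(H, W, requires_grad=True)   # stand-in for CLIP-derived similarity logits
optimizer = torch.optim.Adam([score_map], lr=1e-2)

def sppp_nll(rate, observed_mask, detection_mask, eps=1e-6):
    """Discrete negative log-likelihood of a spatial Poisson point process,
    restricted to cells the robot has already sensed:
        NLL = sum_{observed} rate(x) - sum_{detections} log rate(x),
    a cell-wise approximation of  integral(lambda) - sum_i log lambda(x_i)."""
    integral_term = (rate * observed_mask).sum()
    point_term = (torch.log(rate + eps) * detection_mask).sum()
    return integral_term - point_term

def tta_step(observed_mask, detection_mask):
    rate = F.softplus(score_map)                    # keep the per-cell intensity positive
    prob = rate / rate.sum()                        # normalized target-presence map for the planner
    # Crude uncertainty proxy (an assumption, not the paper's scheme): entropy of the
    # current map, so confident maps receive smaller corrective updates.
    entropy = -(prob * torch.log(prob + 1e-9)).sum()
    weight = (entropy / torch.log(torch.tensor(float(H * W)))).detach()
    loss = weight * sppp_nll(rate, observed_mask, detection_mask)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return prob.detach()                            # refined prior handed back to the search planner

# Example: the robot has swept the top four rows and found one target at cell (2, 5).
observed = torch.zeros(H, W); observed[:4, :] = 1.0
detections = torch.zeros(H, W); detections[2, 5] = 1.0
refined_map = tta_step(observed, detections)

Calling tta_step after each new batch of observations mimics the feedback loop in the abstract: detections pull probability mass toward similar regions, while sensed-but-empty cells suppress it, and the uncertainty weight damps updates once the map becomes confident.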

Cite this Paper


BibTeX
@InProceedings{pmlr-v305-tan25a,
  title     = {Search-TTA: A Multi-Modal Test-Time Adaptation Framework for Visual Search in the Wild},
  author    = {Tan, Derek Ming Siang and Shailesh, Shailes and Liu, Boyang and Raj, Alok and Ang, Qi Xuan and Dai, Weiheng and Duhan, Tanishq and Chiun, Jimmy and Cao, Yuhong and Shkurti, Florian and Sartoretti, Guillaume Adrien},
  booktitle = {Proceedings of The 9th Conference on Robot Learning},
  pages     = {2093--2120},
  year      = {2025},
  editor    = {Lim, Joseph and Song, Shuran and Park, Hae-Won},
  volume    = {305},
  series    = {Proceedings of Machine Learning Research},
  month     = {27--30 Sep},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v305/main/assets/tan25a/tan25a.pdf},
  url       = {https://proceedings.mlr.press/v305/tan25a.html},
  abstract  = {To perform autonomous visual search for environmental monitoring, a robot may leverage satellite imagery as a prior map. This can help inform coarse, high level search and exploration strategies, even when such images lack sufficient resolution to allow fine-grained, explicit visual recognition of targets. However, there are some challenges to overcome with using satellite images to direct visual search. For one, targets that are unseen in satellite images are underrepresented (compared to real life) in most existing datasets, and thus vision models trained on these datasets fail to reason effectively based on indirect visual cues. Furthermore, approaches which leverage large Vision Language Models (VLMs) for generalization may yield inaccurate outputs due to hallucination, leading to inefficient search. To address these challenges, we introduce Search-TTA, a multimodal test-time adaptation framework that can accept text and/or image input. First, we pretrain a remote sensing image encoder to align with CLIP’s visual encoder to output probability distributions of target presence used for visual search. Second, our framework dynamically refines CLIP’s predictions during search using a test-time adaptation mechanism. Through a feedback loop inspired by Spatial Poisson Point Processes, gradient updates (weighted by uncertainty) are used to correct (potentially inaccurate) predictions and improve search performance. To validate Search-TTA’s performance, we curate a visual search dataset based on internet-scale ecological data. We find that Search-TTA improves planner performance by up to 9.7%, particularly in cases with poor initial CLIP predictions. It also achieves comparable performance to state-of-the-art VLMs. Finally, we deploy Search-TTA on a real UAV via hardware-in-the-loop testing, by simulating its operation within a large-scale simulation that provides onboard sensing.}
}
Endnote
%0 Conference Paper
%T Search-TTA: A Multi-Modal Test-Time Adaptation Framework for Visual Search in the Wild
%A Derek Ming Siang Tan
%A Shailes Shailesh
%A Boyang Liu
%A Alok Raj
%A Qi Xuan Ang
%A Weiheng Dai
%A Tanishq Duhan
%A Jimmy Chiun
%A Yuhong Cao
%A Florian Shkurti
%A Guillaume Adrien Sartoretti
%B Proceedings of The 9th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Joseph Lim
%E Shuran Song
%E Hae-Won Park
%F pmlr-v305-tan25a
%I PMLR
%P 2093--2120
%U https://proceedings.mlr.press/v305/tan25a.html
%V 305
%X To perform autonomous visual search for environmental monitoring, a robot may leverage satellite imagery as a prior map. This can help inform coarse, high level search and exploration strategies, even when such images lack sufficient resolution to allow fine-grained, explicit visual recognition of targets. However, there are some challenges to overcome with using satellite images to direct visual search. For one, targets that are unseen in satellite images are underrepresented (compared to real life) in most existing datasets, and thus vision models trained on these datasets fail to reason effectively based on indirect visual cues. Furthermore, approaches which leverage large Vision Language Models (VLMs) for generalization may yield inaccurate outputs due to hallucination, leading to inefficient search. To address these challenges, we introduce Search-TTA, a multimodal test-time adaptation framework that can accept text and/or image input. First, we pretrain a remote sensing image encoder to align with CLIP’s visual encoder to output probability distributions of target presence used for visual search. Second, our framework dynamically refines CLIP’s predictions during search using a test-time adaptation mechanism. Through a feedback loop inspired by Spatial Poisson Point Processes, gradient updates (weighted by uncertainty) are used to correct (potentially inaccurate) predictions and improve search performance. To validate Search-TTA’s performance, we curate a visual search dataset based on internet-scale ecological data. We find that Search-TTA improves planner performance by up to 9.7%, particularly in cases with poor initial CLIP predictions. It also achieves comparable performance to state-of-the-art VLMs. Finally, we deploy Search-TTA on a real UAV via hardware-in-the-loop testing, by simulating its operation within a large-scale simulation that provides onboard sensing.
APA
Tan, D.M.S., Shailesh, S., Liu, B., Raj, A., Ang, Q.X., Dai, W., Duhan, T., Chiun, J., Cao, Y., Shkurti, F. & Sartoretti, G.A. (2025). Search-TTA: A Multi-Modal Test-Time Adaptation Framework for Visual Search in the Wild. Proceedings of The 9th Conference on Robot Learning, in Proceedings of Machine Learning Research 305:2093-2120. Available from https://proceedings.mlr.press/v305/tan25a.html.

Related Material