You Always Recognize Me (YARM): Robust Texture Synthesis Against Multi-View Corruption

Weihang Ran, Wei Yuan, Yinqiang Zheng
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:51140-51155, 2025.

Abstract

Damage to imaging systems and complex external environments often introduce corruption, which can impair the performance of deep learning models pretrained on high-quality image data. Previous methods have focused on restoring degraded images or fine-tuning models to adapt to out-of-distribution data. However, these approaches struggle with complex, unknown corruptions and often reduce model accuracy on high-quality data. Inspired by the use of warning colors and camouflage in the real world, we propose designing a robust appearance that can enhance model recognition of low-quality image data. Furthermore, we demonstrate that certain universal features in radiance fields can be applied across objects of the same class with different geometries. We also examine the impact of different proxy models on the transferability of robust appearances. Extensive experiments demonstrate the effectiveness of our proposed method, which outperforms existing image restoration and model fine-tuning approaches across different experimental settings, and retains effectiveness when transferred to models with different architectures. Code will be available at https://github.com/SilverRAN/YARM.

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-ran25a,
  title     = {You Always Recognize Me ({YARM}): Robust Texture Synthesis Against Multi-View Corruption},
  author    = {Ran, Weihang and Yuan, Wei and Zheng, Yinqiang},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {51140--51155},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/ran25a/ran25a.pdf},
  url       = {https://proceedings.mlr.press/v267/ran25a.html},
  abstract  = {Damage to imaging systems and complex external environments often introduce corruption, which can impair the performance of deep learning models pretrained on high-quality image data. Previous methods have focused on restoring degraded images or fine-tuning models to adapt to out-of-distribution data. However, these approaches struggle with complex, unknown corruptions and often reduce model accuracy on high-quality data. Inspired by the use of warning colors and camouflage in the real world, we propose designing a robust appearance that can enhance model recognition of low-quality image data. Furthermore, we demonstrate that certain universal features in radiance fields can be applied across objects of the same class with different geometries. We also examine the impact of different proxy models on the transferability of robust appearances. Extensive experiments demonstrate the effectiveness of our proposed method, which outperforms existing image restoration and model fine-tuning approaches across different experimental settings, and retains effectiveness when transferred to models with different architectures. Code will be available at https://github.com/SilverRAN/YARM.}
}
Endnote
%0 Conference Paper
%T You Always Recognize Me (YARM): Robust Texture Synthesis Against Multi-View Corruption
%A Weihang Ran
%A Wei Yuan
%A Yinqiang Zheng
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-ran25a
%I PMLR
%P 51140--51155
%U https://proceedings.mlr.press/v267/ran25a.html
%V 267
%X Damage to imaging systems and complex external environments often introduce corruption, which can impair the performance of deep learning models pretrained on high-quality image data. Previous methods have focused on restoring degraded images or fine-tuning models to adapt to out-of-distribution data. However, these approaches struggle with complex, unknown corruptions and often reduce model accuracy on high-quality data. Inspired by the use of warning colors and camouflage in the real world, we propose designing a robust appearance that can enhance model recognition of low-quality image data. Furthermore, we demonstrate that certain universal features in radiance fields can be applied across objects of the same class with different geometries. We also examine the impact of different proxy models on the transferability of robust appearances. Extensive experiments demonstrate the effectiveness of our proposed method, which outperforms existing image restoration and model fine-tuning approaches across different experimental settings, and retains effectiveness when transferred to models with different architectures. Code will be available at https://github.com/SilverRAN/YARM.
APA
Ran, W., Yuan, W. & Zheng, Y. (2025). You Always Recognize Me (YARM): Robust Texture Synthesis Against Multi-View Corruption. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:51140-51155. Available from https://proceedings.mlr.press/v267/ran25a.html.