TinyMIG: Transferring Generalization from Vision Foundation Models to Single-Domain Medical Imaging

Chuang Liu, Hongyan Xu, Yichao Cao, Xiu Su, Zhe Qu, Tianfa Li, Shan An, Haogang Zhu
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:39898-39914, 2025.

Abstract

Medical imaging faces significant challenges in single-domain generalization (SDG) due to the diversity of imaging devices and the variability among data collection centers. To address these challenges, we propose TinyMIG, a framework designed to transfer generalization capabilities from vision foundation models to medical imaging SDG. TinyMIG aims to enable lightweight specialized models to mimic the strong generalization capabilities of foundation models in terms of both global feature distribution and local fine-grained details during training. Specifically, for global feature distribution, we propose a Global Distribution Consistency Learning strategy that mimics the prior distributions of the foundation model layer by layer. For local fine-grained details, we further design a Localized Representation Alignment method, which promotes semantic alignment and generalization distillation between the specialized model and the foundation model. These mechanisms collectively enable the specialized model to achieve robust performance in diverse medical imaging scenarios. Extensive experiments on large-scale benchmarks demonstrate that TinyMIG, with extremely low computational cost, significantly outperforms state-of-the-art models, showcasing its superior SDG capabilities. All the code and model weights will be publicly available.

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-liu25cf,
  title     = {{T}iny{MIG}: Transferring Generalization from Vision Foundation Models to Single-Domain Medical Imaging},
  author    = {Liu, Chuang and Xu, Hongyan and Cao, Yichao and Su, Xiu and Qu, Zhe and Li, Tianfa and An, Shan and Zhu, Haogang},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {39898--39914},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/liu25cf/liu25cf.pdf},
  url       = {https://proceedings.mlr.press/v267/liu25cf.html},
  abstract  = {Medical imaging faces significant challenges in single-domain generalization (SDG) due to the diversity of imaging devices and the variability among data collection centers. To address these challenges, we propose TinyMIG, a framework designed to transfer generalization capabilities from vision foundation models to medical imaging SDG. TinyMIG aims to enable lightweight specialized models to mimic the strong generalization capabilities of foundation models in terms of both global feature distribution and local fine-grained details during training. Specifically, for global feature distribution, we propose a Global Distribution Consistency Learning strategy that mimics the prior distributions of the foundation model layer by layer. For local fine-grained details, we further design a Localized Representation Alignment method, which promotes semantic alignment and generalization distillation between the specialized model and the foundation model. These mechanisms collectively enable the specialized model to achieve robust performance in diverse medical imaging scenarios. Extensive experiments on large-scale benchmarks demonstrate that TinyMIG, with extremely low computational cost, significantly outperforms state-of-the-art models, showcasing its superior SDG capabilities. All the code and model weights will be publicly available.}
}
Endnote
%0 Conference Paper
%T TinyMIG: Transferring Generalization from Vision Foundation Models to Single-Domain Medical Imaging
%A Chuang Liu
%A Hongyan Xu
%A Yichao Cao
%A Xiu Su
%A Zhe Qu
%A Tianfa Li
%A Shan An
%A Haogang Zhu
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-liu25cf
%I PMLR
%P 39898--39914
%U https://proceedings.mlr.press/v267/liu25cf.html
%V 267
%X Medical imaging faces significant challenges in single-domain generalization (SDG) due to the diversity of imaging devices and the variability among data collection centers. To address these challenges, we propose TinyMIG, a framework designed to transfer generalization capabilities from vision foundation models to medical imaging SDG. TinyMIG aims to enable lightweight specialized models to mimic the strong generalization capabilities of foundation models in terms of both global feature distribution and local fine-grained details during training. Specifically, for global feature distribution, we propose a Global Distribution Consistency Learning strategy that mimics the prior distributions of the foundation model layer by layer. For local fine-grained details, we further design a Localized Representation Alignment method, which promotes semantic alignment and generalization distillation between the specialized model and the foundation model. These mechanisms collectively enable the specialized model to achieve robust performance in diverse medical imaging scenarios. Extensive experiments on large-scale benchmarks demonstrate that TinyMIG, with extremely low computational cost, significantly outperforms state-of-the-art models, showcasing its superior SDG capabilities. All the code and model weights will be publicly available.
APA
Liu, C., Xu, H., Cao, Y., Su, X., Qu, Z., Li, T., An, S., & Zhu, H. (2025). TinyMIG: Transferring Generalization from Vision Foundation Models to Single-Domain Medical Imaging. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:39898-39914. Available from https://proceedings.mlr.press/v267/liu25cf.html.