Simplifying DINO via Coding Rate Regularization

Ziyang Wu, Jingyuan Zhang, Druv Pai, Xudong Wang, Chandan Singh, Jianwei Yang, Jianfeng Gao, Yi Ma
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:68036-68059, 2025.

Abstract

DINO and DINOv2 are two model families that are widely used to learn representations from unlabeled image data at large scale. Their learned representations often enable state-of-the-art performance on downstream tasks such as image classification and segmentation. However, they employ many empirically motivated design choices, and their training pipelines are highly complex and unstable: many hyperparameters need to be carefully tuned to ensure that the representations do not collapse, which makes these models difficult to improve or to adapt to new domains. In this work, we posit that most of these empirically motivated idiosyncrasies can be removed from the pre-training pipelines, and that it suffices to add an explicit coding rate term to the loss function to avoid collapse of the representations. As a result, we obtain highly simplified variants of DINO and DINOv2, which we call SimDINO and SimDINOv2, respectively. Remarkably, these simplified models are more robust to design choices such as network architecture and hyperparameters, and they learn even higher-quality representations, as measured by performance on downstream tasks, offering a Pareto improvement over the corresponding DINO and DINOv2 models. This work highlights the potential of using simplifying design principles to improve the empirical practice of deep learning. Code and model checkpoints are available at https://github.com/RobinWu218/SimDINO.
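
To make the key idea concrete, the sketch below shows one way a coding rate regularizer of the kind described above can be combined with a simple alignment objective in PyTorch, using the rate function R(Z) = 1/2 * logdet(I + d/(n * eps^2) * Z^T Z) over a batch of n d-dimensional features. This is a minimal illustrative sketch, not the authors' exact objective: the function names, the eps and gamma values, and the cosine alignment term are assumptions; see the repository linked above for the actual implementation.

# Illustrative sketch only (assumed form, not the authors' exact loss).
import torch
import torch.nn.functional as F

def coding_rate(Z: torch.Tensor, eps: float = 0.5) -> torch.Tensor:
    """Coding rate R(Z) = 1/2 * logdet(I + d / (n * eps^2) * Z^T Z) of a batch Z with shape (n, d)."""
    n, d = Z.shape
    cov = (d / (n * eps ** 2)) * (Z.T @ Z)                   # scaled d x d covariance
    identity = torch.eye(d, device=Z.device, dtype=Z.dtype)
    return 0.5 * torch.logdet(identity + cov)

def simdino_like_loss(student: torch.Tensor, teacher: torch.Tensor, gamma: float = 1.0) -> torch.Tensor:
    """Align student and teacher features, minus a coding rate term on the student batch."""
    student = F.normalize(student, dim=-1)
    teacher = F.normalize(teacher, dim=-1)
    align = (1.0 - (student * teacher.detach()).sum(dim=-1)).mean()  # cosine alignment across views
    return align - gamma * coding_rate(student)              # subtracting the rate rewards spread-out features

Subtracting the coding rate during minimization is what discourages collapse: if all features in a batch became identical, the covariance term would be rank-one, the logdet would shrink, and the loss would increase.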

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-wu25ar,
  title     = {Simplifying {DINO} via Coding Rate Regularization},
  author    = {Wu, Ziyang and Zhang, Jingyuan and Pai, Druv and Wang, Xudong and Singh, Chandan and Yang, Jianwei and Gao, Jianfeng and Ma, Yi},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {68036--68059},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/wu25ar/wu25ar.pdf},
  url       = {https://proceedings.mlr.press/v267/wu25ar.html},
  abstract  = {DINO and DINOv2 are two model families being widely used to learn representations from unlabeled imagery data at large scales. Their learned representations often enable state-of-the-art performance for downstream tasks, such as image classification and segmentation. However, they employ many empirically motivated design choices and their training pipelines are highly complex and unstable — many hyperparameters need to be carefully tuned to ensure that the representations do not collapse — which poses considerable difficulty to improving them or adapting them to new domains. In this work, we posit that we can remove most such-motivated idiosyncrasies in the pre-training pipelines, and only need to add an explicit coding rate term in the loss function to avoid collapse of the representations. As a result, we obtain highly simplified variants of the DINO and DINOv2 which we call SimDINO and SimDINOv2, respectively. Remarkably, these simplified models are more robust to different design choices, such as network architecture and hyperparameters, and they learn even higher-quality representations, measured by performance on downstream tasks, offering a Pareto improvement over the corresponding DINO and DINOv2 models. This work highlights the potential of using simplifying design principles to improve the empirical practice of deep learning. Code and model checkpoints are available at https://github.com/RobinWu218/SimDINO.}
}
Endnote
%0 Conference Paper
%T Simplifying DINO via Coding Rate Regularization
%A Ziyang Wu
%A Jingyuan Zhang
%A Druv Pai
%A Xudong Wang
%A Chandan Singh
%A Jianwei Yang
%A Jianfeng Gao
%A Yi Ma
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-wu25ar
%I PMLR
%P 68036--68059
%U https://proceedings.mlr.press/v267/wu25ar.html
%V 267
%X DINO and DINOv2 are two model families being widely used to learn representations from unlabeled imagery data at large scales. Their learned representations often enable state-of-the-art performance for downstream tasks, such as image classification and segmentation. However, they employ many empirically motivated design choices and their training pipelines are highly complex and unstable — many hyperparameters need to be carefully tuned to ensure that the representations do not collapse — which poses considerable difficulty to improving them or adapting them to new domains. In this work, we posit that we can remove most such-motivated idiosyncrasies in the pre-training pipelines, and only need to add an explicit coding rate term in the loss function to avoid collapse of the representations. As a result, we obtain highly simplified variants of the DINO and DINOv2 which we call SimDINO and SimDINOv2, respectively. Remarkably, these simplified models are more robust to different design choices, such as network architecture and hyperparameters, and they learn even higher-quality representations, measured by performance on downstream tasks, offering a Pareto improvement over the corresponding DINO and DINOv2 models. This work highlights the potential of using simplifying design principles to improve the empirical practice of deep learning. Code and model checkpoints are available at https://github.com/RobinWu218/SimDINO.
APA
Wu, Z., Zhang, J., Pai, D., Wang, X., Singh, C., Yang, J., Gao, J. & Ma, Y. (2025). Simplifying DINO via Coding Rate Regularization. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:68036-68059. Available from https://proceedings.mlr.press/v267/wu25ar.html.