Mirror, Mirror of the Flow: How Does Regularization Shape Implicit Bias?

Tom Jacobs, Chao Zhou, Rebekka Burkholz
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:26673-26699, 2025.

Abstract

Implicit bias plays an important role in explaining how overparameterized models generalize well. Explicit regularization, such as weight decay, is often additionally employed to prevent overfitting. While both concepts have been studied separately, in practice they often act in tandem. Understanding their interplay is key to controlling the shape and strength of implicit bias, as it can be modified by explicit regularization. To this end, we incorporate explicit regularization into the mirror flow framework and analyze its lasting effects on the geometry of the training dynamics, covering three distinct effects: positional bias, type of bias, and range shrinking. Our analytical approach encompasses a broad class of problems, including sparse coding, matrix sensing, single-layer attention, and LoRA, for which we demonstrate the utility of our insights. To exploit the lasting effect of regularization and to highlight the potential benefit of dynamic weight decay schedules, we propose switching off weight decay during training, which can improve generalization, as we demonstrate in experiments.
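For orientation, a minimal sketch of the two standard objects the abstract refers to; the notation (w, x, L, R, \lambda) is ours, not necessarily the paper's. Gradient flow on parameters w with a time-dependent weight-decay coefficient \lambda(t) reads

    \dot{w}(t) = -\nabla L(w(t)) - \lambda(t)\, w(t),

while the (unregularized) mirror flow on a reparameterized variable x(t) takes the form

    \frac{d}{dt}\, \nabla R(x(t)) = -\nabla L(x(t)),

where R is a convex potential whose geometry encodes the implicit bias; the "lasting effects" in the abstract concern how \lambda(t) reshapes this geometry even after the penalty is removed.

The proposed intervention of switching off weight decay during training is easy to sketch. The toy PyTorch loop below is an illustration under assumed choices (a linear model, plain SGD, random data, a halfway switch-off step), not the paper's experimental setup:

    # Sketch: switch off weight decay partway through training.
    # Model, data, learning rate, and switch-off step are illustrative
    # assumptions, not the paper's configuration.
    import torch
    import torch.nn as nn

    model = nn.Linear(32, 1)  # toy model
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, weight_decay=1e-4)
    loss_fn = nn.MSELoss()
    x, y = torch.randn(256, 32), torch.randn(256, 1)  # toy data

    total_steps, switch_off_step = 1000, 500
    for step in range(total_steps):
        if step == switch_off_step:
            # Remove the explicit penalty; per the abstract, the geometry
            # shaped by earlier regularization has a lasting effect.
            for group in optimizer.param_groups:
                group["weight_decay"] = 0.0
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()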

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-jacobs25a,
  title     = {Mirror, Mirror of the Flow: How Does Regularization Shape Implicit Bias?},
  author    = {Jacobs, Tom and Zhou, Chao and Burkholz, Rebekka},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {26673--26699},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/jacobs25a/jacobs25a.pdf},
  url       = {https://proceedings.mlr.press/v267/jacobs25a.html},
  abstract  = {Implicit bias plays an important role in explaining how overparameterized models generalize well. Explicit regularization like weight decay is often employed in addition to prevent overfitting. While both concepts have been studied separately, in practice, they often act in tandem. Understanding their interplay is key to controlling the shape and strength of implicit bias, as it can be modified by explicit regularization. To this end, we incorporate explicit regularization into the mirror flow framework and analyze its lasting effects on the geometry of the training dynamics, covering three distinct effects: positional bias, type of bias, and range shrinking. Our analytical approach encompasses a broad class of problems, including sparse coding, matrix sensing, single-layer attention, and LoRA, for which we demonstrate the utility of our insights. To exploit the lasting effect of regularization and highlight the potential benefit of dynamic weight decay schedules, we propose to switch off weight decay during training, which can improve generalization, as we demonstrate in experiments.}
}
Endnote
%0 Conference Paper
%T Mirror, Mirror of the Flow: How Does Regularization Shape Implicit Bias?
%A Tom Jacobs
%A Chao Zhou
%A Rebekka Burkholz
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-jacobs25a
%I PMLR
%P 26673--26699
%U https://proceedings.mlr.press/v267/jacobs25a.html
%V 267
%X Implicit bias plays an important role in explaining how overparameterized models generalize well. Explicit regularization like weight decay is often employed in addition to prevent overfitting. While both concepts have been studied separately, in practice, they often act in tandem. Understanding their interplay is key to controlling the shape and strength of implicit bias, as it can be modified by explicit regularization. To this end, we incorporate explicit regularization into the mirror flow framework and analyze its lasting effects on the geometry of the training dynamics, covering three distinct effects: positional bias, type of bias, and range shrinking. Our analytical approach encompasses a broad class of problems, including sparse coding, matrix sensing, single-layer attention, and LoRA, for which we demonstrate the utility of our insights. To exploit the lasting effect of regularization and highlight the potential benefit of dynamic weight decay schedules, we propose to switch off weight decay during training, which can improve generalization, as we demonstrate in experiments.
APA
Jacobs, T., Zhou, C. & Burkholz, R. (2025). Mirror, Mirror of the Flow: How Does Regularization Shape Implicit Bias? Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:26673-26699. Available from https://proceedings.mlr.press/v267/jacobs25a.html.
