Cross-regularization: Adaptive Model Complexity through Validation Gradients

Carlos Stein Brito
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:5558-5577, 2025.

Abstract

Model regularization requires extensive manual tuning to balance complexity against overfitting. Cross-regularization resolves this tradeoff by computing validation gradients that directly adapt regularization parameters during training. The method splits parameter optimization: training data guides feature learning, while validation data shapes complexity controls. It provably converges to cross-validation optima, with computational cost scaling only in the regularization dimension. When implemented through noise injection in neural networks, the approach reveals striking patterns: unexpectedly high noise tolerance and architecture-specific regularization that emerges organically during training. Beyond complexity control, the framework integrates seamlessly with data augmentation and uncertainty calibration while maintaining single-run efficiency through a simple gradient-based approach.
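The split described above (training gradients update the weights, validation gradients update the regularization parameters) can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the toy regression data, learning rates, and the specific reparameterization of the noise scale through `log_sigma` are all invented here for illustration of the general idea.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (hypothetical setup, not from the paper).
n_tr, n_va, d = 50, 50, 20
w_true = rng.normal(size=d)
X_tr, X_va = rng.normal(size=(n_tr, d)), rng.normal(size=(n_va, d))
y_tr = X_tr @ w_true + rng.normal(size=n_tr)
y_va = X_va @ w_true + rng.normal(size=n_va)

w = np.zeros(d)                # model weights, trained on training data
log_sigma = np.log(0.5)        # regularization parameter: injected-noise scale
lr_w, lr_sigma = 0.01, 0.01

for step in range(2000):
    sigma = np.exp(log_sigma)
    eps = rng.normal(size=d)
    w_noisy = w + sigma * eps  # noise injection via the reparameterization trick

    # Training data updates only the weights, through the noisy model.
    grad_w = 2 * X_tr.T @ (X_tr @ w_noisy - y_tr) / n_tr
    w -= lr_w * grad_w

    # Validation data updates only the regularization parameter:
    # d(val loss)/d(log_sigma) = grad of val loss at w_noisy, dotted with sigma * eps.
    grad_va = 2 * X_va.T @ (X_va @ w_noisy - y_va) / n_va
    log_sigma -= lr_sigma * grad_va @ (sigma * eps)

val_mse = np.mean((X_va @ w - y_va) ** 2)
```

Because the validation gradient flows only into `log_sigma`, a single training run adapts the noise scale alongside the weights; on this toy problem, where noise only hurts validation loss, the learned scale shrinks over training.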

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-brito25a,
  title     = {Cross-regularization: Adaptive Model Complexity through Validation Gradients},
  author    = {Brito, Carlos Stein},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {5558--5577},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/brito25a/brito25a.pdf},
  url       = {https://proceedings.mlr.press/v267/brito25a.html},
  abstract  = {Model regularization requires extensive manual tuning to balance complexity against overfitting. Cross-regularization resolves this tradeoff by computing validation gradients that directly adapt regularization parameters during training. The method splits parameter optimization - training data guides feature learning while validation data shapes complexity controls - converging provably to cross-validation optima with computational cost scaling only in regularization dimension. When implemented through noise injection in neural networks, this approach reveals striking patterns: unexpectedly high noise tolerance and architecture-specific regularization that emerges organically during training. Beyond complexity control, the framework integrates seamlessly with data augmentation and uncertainty calibration while maintaining single-run efficiency through a simple gradient-based approach.}
}
Endnote
%0 Conference Paper
%T Cross-regularization: Adaptive Model Complexity through Validation Gradients
%A Carlos Stein Brito
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-brito25a
%I PMLR
%P 5558--5577
%U https://proceedings.mlr.press/v267/brito25a.html
%V 267
%X Model regularization requires extensive manual tuning to balance complexity against overfitting. Cross-regularization resolves this tradeoff by computing validation gradients that directly adapt regularization parameters during training. The method splits parameter optimization - training data guides feature learning while validation data shapes complexity controls - converging provably to cross-validation optima with computational cost scaling only in regularization dimension. When implemented through noise injection in neural networks, this approach reveals striking patterns: unexpectedly high noise tolerance and architecture-specific regularization that emerges organically during training. Beyond complexity control, the framework integrates seamlessly with data augmentation and uncertainty calibration while maintaining single-run efficiency through a simple gradient-based approach.
APA
Brito, C.S. (2025). Cross-regularization: Adaptive Model Complexity through Validation Gradients. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:5558-5577. Available from https://proceedings.mlr.press/v267/brito25a.html.