Cross-regularization: Adaptive Model Complexity through Validation Gradients
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:5558-5577, 2025.
Abstract
Model regularization requires extensive manual tuning to balance complexity against overfitting. Cross-regularization resolves this tradeoff by computing validation gradients that adapt regularization parameters directly during training. The method splits parameter optimization: training data guides feature learning while validation data shapes complexity controls, provably converging to cross-validation optima at a computational cost that scales only with the number of regularization parameters. When implemented through noise injection in neural networks, this approach reveals striking patterns: unexpectedly high noise tolerance and architecture-specific regularization that emerges organically during training. Beyond complexity control, the framework integrates seamlessly with data augmentation and uncertainty calibration while maintaining single-run efficiency through a simple gradient-based approach.
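The split optimization described above can be sketched in a few lines of PyTorch. This is a minimal illustration under stated assumptions, not the paper's implementation: the names `NoisyNet` and `noise_log_std`, the single scalar noise scale, and the alternating update schedule are hypothetical stand-ins for whatever regularization parameters and schedule the method actually uses.

```python
# Hedged sketch of cross-regularization via a learnable noise scale.
# Assumption: one scalar activation-noise std, adapted by validation
# gradients while the weights follow training gradients.
import torch
import torch.nn as nn

class NoisyNet(nn.Module):
    """Small MLP whose activation-noise scale is itself a parameter."""
    def __init__(self, d_in=20, d_hidden=64, d_out=2):
        super().__init__()
        self.f1 = nn.Linear(d_in, d_hidden)
        self.f2 = nn.Linear(d_hidden, d_out)
        # Log-parameterized so the noise std stays positive.
        self.noise_log_std = nn.Parameter(torch.tensor(-2.0))

    def forward(self, x):
        h = torch.relu(self.f1(x))
        if self.training:
            # Noise injection acts as the complexity control.
            h = h + torch.exp(self.noise_log_std) * torch.randn_like(h)
        return self.f2(h)

model = NoisyNet()
loss_fn = nn.CrossEntropyLoss()
# Split parameter optimization: weights see only training gradients,
# the regularization parameter sees only validation gradients.
weight_opt = torch.optim.Adam(
    [p for n, p in model.named_parameters() if n != "noise_log_std"],
    lr=1e-3)
reg_opt = torch.optim.Adam([model.noise_log_std], lr=1e-2)

def step(x_train, y_train, x_val, y_val):
    model.train()
    # 1) Training batch updates the weights only.
    weight_opt.zero_grad()
    loss_fn(model(x_train), y_train).backward()
    weight_opt.step()
    # 2) Validation batch updates the noise scale only. Noise stays
    #    active so the validation loss depends on noise_log_std.
    reg_opt.zero_grad()
    loss_fn(model(x_val), y_val).backward()
    reg_opt.step()

# Toy usage with random data, class labels in {0, 1}.
x_tr, y_tr = torch.randn(32, 20), torch.randint(0, 2, (32,))
x_va, y_va = torch.randn(32, 20), torch.randint(0, 2, (32,))
step(x_tr, y_tr, x_va, y_va)
```

Because only the regularization parameter is updated from validation data, each step costs one extra forward/backward pass, consistent with the abstract's claim that the overhead scales with the number of regularization parameters rather than the model size.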