Learning-to-Learn Stochastic Gradient Descent with Biased Regularization
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:1566-1575, 2019.
Abstract
We study the problem of learning-to-learn: inferring a learning algorithm that works well on a family of tasks sampled from an unknown distribution. As a class of algorithms, we consider Stochastic Gradient Descent (SGD) on the true risk regularized by the squared Euclidean distance from a bias vector. We present an average excess risk bound for such a learning algorithm that quantifies the potential benefit of using a bias vector relative to the unbiased case. We then propose a novel meta-algorithm to estimate the bias term online from a sequence of observed tasks. The small memory footprint and low time complexity of our approach make it appealing in practice, while our theoretical analysis provides guarantees on the generalization properties of the meta-algorithm on new tasks. A key feature of our results is that, when the number of tasks grows and their variance is relatively small, our learning-to-learn approach has a significant advantage over learning each task in isolation by standard SGD without a bias term. Numerical experiments demonstrate the effectiveness of our approach in practice.
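To make the setup concrete, here is a minimal illustrative sketch of within-task SGD on a loss regularized by the squared Euclidean distance from a bias vector h, together with a simple online update of h across tasks. This is a toy stand-in for the paper's meta-algorithm, not its exact update rule; the quadratic task losses, step sizes, and the averaging-style bias update are all assumptions chosen for illustration.

```python
import numpy as np

def biased_sgd(grad_loss, h, n_steps=200, lam=1.0, lr=0.05):
    """(Sub)gradient descent on loss(w) + (lam/2) * ||w - h||^2.

    h is the bias vector; h = 0 recovers standard (unbiased)
    regularized SGD on the task."""
    w = h.copy()  # starting at the bias is a hypothetical choice
    for _ in range(n_steps):
        g = grad_loss(w) + lam * (w - h)  # gradient of the biased objective
        w -= lr * g
    return w

def meta_update(h, w_task, eta=0.1):
    """Illustrative online bias update: move h toward the latest
    task solution (a stand-in for the paper's meta-algorithm)."""
    return h + eta * (w_task - h)

# Toy task family: quadratic losses whose optima cluster around a
# shared center, so a well-chosen bias should help.
rng = np.random.default_rng(0)
center = np.array([1.0, -1.0])
h = np.zeros(2)
for _ in range(50):
    w_star = center + 0.1 * rng.standard_normal(2)   # this task's optimum
    grad = lambda w, ws=w_star: w - ws               # gradient of (1/2)||w - ws||^2
    w_hat = biased_sgd(grad, h)
    h = meta_update(h, w_hat)

print(np.round(h, 2))  # h drifts toward the shared task center
```

With low task variance around the shared center, the estimated bias h approaches that center, so each subsequent task's SGD starts near its optimum; with h fixed at zero, every task would be learned in isolation.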