Provable Benefits of Unsupervised Pre-training and Transfer Learning via Single-Index Models

Taj Jones-Mccormick, Aukosh Jagannath, Subhabrata Sen
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:28350-28376, 2025.

Abstract

Unsupervised pre-training and transfer learning are commonly used techniques to initialize training algorithms for neural networks, particularly in settings with limited labeled data. In this paper, we study the effects of unsupervised pre-training and transfer learning on the sample complexity of high-dimensional supervised learning. Specifically, we consider the problem of training a single-layer neural network via online stochastic gradient descent. We establish that pre-training and transfer learning (under concept shift) reduce sample complexity by polynomial factors (in the dimension) under very general assumptions. We also uncover some surprising settings where pre-training grants exponential improvement over random initialization in terms of sample complexity.
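To make the setting concrete, here is a minimal sketch of the kind of experiment the abstract describes: a Gaussian single-index model y = σ(⟨θ*, x⟩) fit by a single neuron with one-pass (online) SGD, comparing a random initialization against a warm start that stands in for pre-training. The link function, step size, and the warm-start construction are all illustrative assumptions, not the paper's exact algorithm or rates.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 500            # ambient dimension
eta = 0.05         # step size (assumed for this sketch)
n_steps = 20_000   # one fresh sample per step: the online setting

sigma = np.tanh                              # assumed link function
dsigma = lambda z: 1.0 - np.tanh(z) ** 2     # its derivative

theta_star = rng.standard_normal(d)
theta_star /= np.linalg.norm(theta_star)     # unit-norm target direction

def run_sgd(theta0):
    """One-pass SGD on squared loss, projecting back to the unit sphere."""
    theta = theta0 / np.linalg.norm(theta0)
    for _ in range(n_steps):
        x = rng.standard_normal(d)           # fresh Gaussian sample
        y = sigma(x @ theta_star)            # noiseless label
        pred = sigma(x @ theta)
        grad = (pred - y) * dsigma(x @ theta) * x
        theta = theta - eta * grad
        theta /= np.linalg.norm(theta)       # spherical constraint
    return theta @ theta_star                # overlap with the target

# Random init has overlap ~ 1/sqrt(d); the warm start mimics the
# correlated initialization that pre-training / transfer would supply.
random_init = rng.standard_normal(d)
warm_init = 0.3 * theta_star + rng.standard_normal(d)

print("overlap from random init:   ", run_sgd(random_init))
print("overlap from warm-start init:", run_sgd(warm_init))
```

The final overlap ⟨θ, θ*⟩ is a rough proxy for recovery: the gap between the two initializations at a fixed sample budget is the phenomenon whose sample-complexity consequences the paper quantifies.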

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-jones-mccormick25a,
  title     = {Provable Benefits of Unsupervised Pre-training and Transfer Learning via Single-Index Models},
  author    = {Jones-Mccormick, Taj and Jagannath, Aukosh and Sen, Subhabrata},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {28350--28376},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/jones-mccormick25a/jones-mccormick25a.pdf},
  url       = {https://proceedings.mlr.press/v267/jones-mccormick25a.html},
  abstract  = {Unsupervised pre-training and transfer learning are commonly used techniques to initialize training algorithms for neural networks, particularly in settings with limited labeled data. In this paper, we study the effects of unsupervised pre-training and transfer learning on the sample complexity of high-dimensional supervised learning. Specifically, we consider the problem of training a single-layer neural network via online stochastic gradient descent. We establish that pre-training and transfer learning (under concept shift) reduce sample complexity by polynomial factors (in the dimension) under very general assumptions. We also uncover some surprising settings where pre-training grants exponential improvement over random initialization in terms of sample complexity.}
}
Endnote
%0 Conference Paper
%T Provable Benefits of Unsupervised Pre-training and Transfer Learning via Single-Index Models
%A Taj Jones-Mccormick
%A Aukosh Jagannath
%A Subhabrata Sen
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-jones-mccormick25a
%I PMLR
%P 28350--28376
%U https://proceedings.mlr.press/v267/jones-mccormick25a.html
%V 267
%X Unsupervised pre-training and transfer learning are commonly used techniques to initialize training algorithms for neural networks, particularly in settings with limited labeled data. In this paper, we study the effects of unsupervised pre-training and transfer learning on the sample complexity of high-dimensional supervised learning. Specifically, we consider the problem of training a single-layer neural network via online stochastic gradient descent. We establish that pre-training and transfer learning (under concept shift) reduce sample complexity by polynomial factors (in the dimension) under very general assumptions. We also uncover some surprising settings where pre-training grants exponential improvement over random initialization in terms of sample complexity.
APA
Jones-Mccormick, T., Jagannath, A. & Sen, S. (2025). Provable Benefits of Unsupervised Pre-training and Transfer Learning via Single-Index Models. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:28350-28376. Available from https://proceedings.mlr.press/v267/jones-mccormick25a.html.