Trace norm regularization for multi-task learning with scarce data

Etienne Boursier, Mikhail Konobeev, Nicolas Flammarion
Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:1303-1327, 2022.

Abstract

Multi-task learning leverages structural similarities between multiple tasks to learn despite very few samples. Motivated by the recent success of neural networks applied to data-scarce tasks, we consider a linear low-dimensional shared representation model. Despite an extensive literature, existing theoretical results either guarantee weak estimation rates or require a large number of samples per task. This work provides the first estimation error bound for the trace norm regularized estimator when the number of samples per task is small. The advantages of trace norm regularization for learning data-scarce tasks extend to meta-learning and are confirmed empirically on synthetic datasets.
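
To make the estimator concrete, the following is a minimal sketch, not the authors' implementation, of the trace norm regularized estimator in the linear shared-representation model: each task t has a design X_t and responses y_t, and the task parameter matrix B = [b_1, ..., b_T] is estimated by minimizing the average squared loss plus a nuclear (trace) norm penalty, here via proximal gradient descent with singular value thresholding. The function names, the regularization scale lam, the step-size rule, and the toy problem sizes are all illustrative assumptions.

    import numpy as np

    def svt(B, tau):
        # Singular value thresholding: proximal operator of tau * ||.||_* (nuclear norm).
        U, s, Vt = np.linalg.svd(B, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

    def trace_norm_mtl(Xs, ys, lam, n_iters=500):
        # Illustrative sketch, not the paper's code. Proximal gradient descent on
        #   (1 / (2 n T)) * sum_t ||X_t b_t - y_t||^2 + lam * ||B||_*
        # with B = [b_1, ..., b_T] in R^{d x T}.
        T = len(Xs)
        n, d = Xs[0].shape
        # Step size = 1 / Lipschitz constant of the smooth part's gradient.
        L = max(np.linalg.norm(X, 2) ** 2 for X in Xs) / (n * T)
        step = 1.0 / L
        B = np.zeros((d, T))
        for _ in range(n_iters):
            grad = np.column_stack(
                [Xs[t].T @ (Xs[t] @ B[:, t] - ys[t]) / (n * T) for t in range(T)]
            )
            B = svt(B - step * grad, step * lam)
        return B

    # Toy check: T data-scarce tasks (n << d) sharing a rank-r representation.
    rng = np.random.default_rng(0)
    d, T, n, r = 20, 50, 5, 2
    B_star = rng.standard_normal((d, r)) @ rng.standard_normal((r, T))
    Xs = [rng.standard_normal((n, d)) for _ in range(T)]
    ys = [X @ B_star[:, t] + 0.1 * rng.standard_normal(n) for t, X in enumerate(Xs)]
    B_hat = trace_norm_mtl(Xs, ys, lam=0.1)

The low-rank structure of B_star mirrors the shared-representation assumption: even with n well below d per task, pooling the T tasks through the trace norm penalty lets the estimator exploit the common r-dimensional subspace.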

Cite this Paper


BibTeX
@InProceedings{pmlr-v178-boursier22a,
  title     = {Trace norm regularization for multi-task learning with scarce data},
  author    = {Boursier, Etienne and Konobeev, Mikhail and Flammarion, Nicolas},
  booktitle = {Proceedings of Thirty Fifth Conference on Learning Theory},
  pages     = {1303--1327},
  year      = {2022},
  editor    = {Loh, Po-Ling and Raginsky, Maxim},
  volume    = {178},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--05 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v178/boursier22a/boursier22a.pdf},
  url       = {https://proceedings.mlr.press/v178/boursier22a.html},
  abstract  = {Multi-task learning leverages structural similarities between multiple tasks to learn despite very few samples. Motivated by the recent success of neural networks applied to data-scarce tasks, we consider a linear low-dimensional shared representation model. Despite an extensive literature, existing theoretical results either guarantee weak estimation rates or require a large number of samples per task. This work provides the first estimation error bound for the trace norm regularized estimator when the number of samples per task is small. The advantages of trace norm regularization for learning data-scarce tasks extend to meta-learning and are confirmed empirically on synthetic datasets.}
}
Endnote
%0 Conference Paper
%T Trace norm regularization for multi-task learning with scarce data
%A Etienne Boursier
%A Mikhail Konobeev
%A Nicolas Flammarion
%B Proceedings of Thirty Fifth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2022
%E Po-Ling Loh
%E Maxim Raginsky
%F pmlr-v178-boursier22a
%I PMLR
%P 1303--1327
%U https://proceedings.mlr.press/v178/boursier22a.html
%V 178
%X Multi-task learning leverages structural similarities between multiple tasks to learn despite very few samples. Motivated by the recent success of neural networks applied to data-scarce tasks, we consider a linear low-dimensional shared representation model. Despite an extensive literature, existing theoretical results either guarantee weak estimation rates or require a large number of samples per task. This work provides the first estimation error bound for the trace norm regularized estimator when the number of samples per task is small. The advantages of trace norm regularization for learning data-scarce tasks extend to meta-learning and are confirmed empirically on synthetic datasets.
APA
Boursier, E., Konobeev, M. & Flammarion, N. (2022). Trace norm regularization for multi-task learning with scarce data. Proceedings of Thirty Fifth Conference on Learning Theory, in Proceedings of Machine Learning Research 178:1303-1327. Available from https://proceedings.mlr.press/v178/boursier22a.html.
