Meta-learning Task-specific Regularization Weights for Few-shot Linear Regression

Tomoharu Iwata, Atsutoshi Kumagai, Yasutoshi Ida
Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, PMLR 258:3052-3060, 2025.

Abstract

We propose a few-shot learning method for linear regression, which learns how to choose regularization weights from multiple tasks with different feature spaces, and applies that knowledge to unseen tasks. Linear regression is ubiquitous in a wide variety of fields. Although regularization weight tuning is crucial to performance, it is difficult when only a small amount of training data is available. In the proposed method, task-specific regularization weights are generated by a neural network-based model that takes a task-specific training dataset as input, where our model is shared across all tasks. For each task, linear coefficients are optimized by minimizing the squared loss with an L2 regularizer using the generated regularization weights and the training dataset. Our model is meta-learned by minimizing the expected test error of linear regression with the task-specific coefficients using various training datasets. In our experiments using synthetic and real-world datasets, we demonstrate the effectiveness of the proposed method on few-shot regression tasks compared with existing methods.
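
The abstract outlines a full pipeline: a shared network maps each task's small training set to task-specific L2 regularization weights, the linear coefficients then follow in closed form from the regularized least-squares problem, and the network is meta-trained to minimize the expected test error across tasks. The sketch below (in JAX with optax) illustrates that pipeline under assumptions of our own; the DeepSets-style encoder over (feature value, target) pairs, the use of per-feature regularization weights, the layer sizes, and the synthetic task sampler are placeholders for illustration, not the architecture or data described in the paper.

# Illustrative sketch only (not the authors' code): a shared network generates
# per-task regularization weights from the training set, ridge coefficients are
# computed in closed form, and the network is meta-trained to minimize test MSE.
import jax
import jax.numpy as jnp
import optax

def init_params(key, hidden=32):
    k1, k2, k3 = jax.random.split(key, 3)
    return {
        "W1": jax.random.normal(k1, (2, hidden)) * 0.1, "b1": jnp.zeros(hidden),
        "W2": jax.random.normal(k2, (hidden, hidden)) * 0.1, "b2": jnp.zeros(hidden),
        "w3": jax.random.normal(k3, (hidden,)) * 0.1, "b3": jnp.zeros(()),
    }

def reg_weights(params, X, y):
    """Map a training set (X: [N, D], y: [N]) to positive per-feature weights [D]."""
    # Embed each (x_nd, y_n) pair, then pool over instances so the output is
    # invariant to instance ordering and works for any training-set size N.
    pairs = jnp.stack([X, jnp.broadcast_to(y[:, None], X.shape)], axis=-1)  # [N, D, 2]
    h = jax.nn.relu(pairs @ params["W1"] + params["b1"])                    # [N, D, H]
    pooled = h.mean(axis=0)                                                 # [D, H]
    h2 = jax.nn.relu(pooled @ params["W2"] + params["b2"])                  # [D, H]
    return jax.nn.softplus(h2 @ params["w3"] + params["b3"]) + 1e-4         # [D]

def ridge_coefficients(X, y, lam):
    """Closed-form minimizer of ||Xw - y||^2 + w^T diag(lam) w."""
    return jnp.linalg.solve(X.T @ X + jnp.diag(lam), X.T @ y)

def task_test_loss(params, X_tr, y_tr, X_te, y_te):
    lam = reg_weights(params, X_tr, y_tr)
    w = ridge_coefficients(X_tr, y_tr, lam)  # gradients flow through the solve
    return jnp.mean((X_te @ w - y_te) ** 2)

def sample_task(key, n_train=5, n_test=20, dim=8):
    """Toy synthetic few-shot regression task (a stand-in for real task sampling)."""
    kw, kx1, kx2, kn1, kn2 = jax.random.split(key, 5)
    w_true = jax.random.normal(kw, (dim,))
    X_tr = jax.random.normal(kx1, (n_train, dim))
    X_te = jax.random.normal(kx2, (n_test, dim))
    y_tr = X_tr @ w_true + 0.1 * jax.random.normal(kn1, (n_train,))
    y_te = X_te @ w_true + 0.1 * jax.random.normal(kn2, (n_test,))
    return X_tr, y_tr, X_te, y_te

key = jax.random.PRNGKey(0)
params = init_params(key)
opt = optax.adam(1e-3)
opt_state = opt.init(params)
loss_grad = jax.jit(jax.value_and_grad(task_test_loss))

for step in range(1000):
    key, task_key = jax.random.split(key)
    X_tr, y_tr, X_te, y_te = sample_task(task_key)
    loss, grads = loss_grad(params, X_tr, y_tr, X_te, y_te)
    updates, opt_state = opt.update(grads, opt_state)
    params = optax.apply_updates(params, updates)

Because the ridge coefficients come from a differentiable linear solve, the test loss can be backpropagated through the generated regularization weights into the shared network, which is what makes meta-training with ordinary gradient descent over many sampled tasks possible. Pooling over instances and sharing the encoder across features is one way (assumed here) to let a single model accept training sets of varying size and dimensionality.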

Cite this Paper


BibTeX
@InProceedings{pmlr-v258-iwata25b,
  title     = {Meta-learning Task-specific Regularization Weights for Few-shot Linear Regression},
  author    = {Iwata, Tomoharu and Kumagai, Atsutoshi and Ida, Yasutoshi},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  pages     = {3052--3060},
  year      = {2025},
  editor    = {Li, Yingzhen and Mandt, Stephan and Agrawal, Shipra and Khan, Emtiyaz},
  volume    = {258},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--05 May},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v258/main/assets/iwata25b/iwata25b.pdf},
  url       = {https://proceedings.mlr.press/v258/iwata25b.html},
  abstract  = {We propose a few-shot learning method for linear regression, which learns how to choose regularization weights from multiple tasks with different feature spaces, and uses the knowledge for unseen tasks. Linear regression is ubiquitous in a wide variety of fields. Although regularization weight tuning is crucial to performance, it is difficult when only a small amount of training data are available. In the proposed method, task-specific regularization weights are generated using a neural network-based model by taking a task-specific training dataset as input, where our model is shared across all tasks. For each task, linear coefficients are optimized by minimizing the squared loss with an L2 regularizer using the generated regularization weights and the training dataset. Our model is meta-learned by minimizing the expected test error of linear regression with the task-specific coefficients using various training datasets. In our experiments using synthetic and real-world datasets, we demonstrate the effectiveness of the proposed method on few-shot regression tasks compared with existing methods.}
}
Endnote
%0 Conference Paper
%T Meta-learning Task-specific Regularization Weights for Few-shot Linear Regression
%A Tomoharu Iwata
%A Atsutoshi Kumagai
%A Yasutoshi Ida
%B Proceedings of The 28th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2025
%E Yingzhen Li
%E Stephan Mandt
%E Shipra Agrawal
%E Emtiyaz Khan
%F pmlr-v258-iwata25b
%I PMLR
%P 3052--3060
%U https://proceedings.mlr.press/v258/iwata25b.html
%V 258
%X We propose a few-shot learning method for linear regression, which learns how to choose regularization weights from multiple tasks with different feature spaces, and uses the knowledge for unseen tasks. Linear regression is ubiquitous in a wide variety of fields. Although regularization weight tuning is crucial to performance, it is difficult when only a small amount of training data are available. In the proposed method, task-specific regularization weights are generated using a neural network-based model by taking a task-specific training dataset as input, where our model is shared across all tasks. For each task, linear coefficients are optimized by minimizing the squared loss with an L2 regularizer using the generated regularization weights and the training dataset. Our model is meta-learned by minimizing the expected test error of linear regression with the task-specific coefficients using various training datasets. In our experiments using synthetic and real-world datasets, we demonstrate the effectiveness of the proposed method on few-shot regression tasks compared with existing methods.
APA
Iwata, T., Kumagai, A. & Ida, Y.. (2025). Meta-learning Task-specific Regularization Weights for Few-shot Linear Regression. Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 258:3052-3060 Available from https://proceedings.mlr.press/v258/iwata25b.html.