Adaptive Meta-Learning via data-dependent PAC-Bayes bounds

Lior Friedman, Ron Meir
Proceedings of The 2nd Conference on Lifelong Learning Agents, PMLR 232:796-810, 2023.

Abstract

Meta-learning aims to extract common knowledge from similar training tasks in order to facilitate efficient and effective learning on future tasks. Several recent works have extended PAC-Bayes generalization error bounds to the meta-learning setting. By doing so, prior knowledge can be incorporated in the form of a distribution over hypotheses that is expected to lead to low error on new tasks that are similar to those that have been previously observed. In this work, we develop novel bounds for the generalization error on test tasks based on recent data-dependent bounds and provide a novel algorithm for adapting prior knowledge to downstream tasks in a potentially more effective manner. We demonstrate the effectiveness of our algorithm numerically for few-shot image classification tasks with deep neural networks and show a significant reduction in generalization error without any additional adaptation data.

Cite this Paper


BibTeX
@InProceedings{pmlr-v232-friedman23a,
  title     = {Adaptive Meta-Learning via data-dependent PAC-Bayes bounds},
  author    = {Friedman, Lior and Meir, Ron},
  booktitle = {Proceedings of The 2nd Conference on Lifelong Learning Agents},
  pages     = {796--810},
  year      = {2023},
  editor    = {Chandar, Sarath and Pascanu, Razvan and Sedghi, Hanie and Precup, Doina},
  volume    = {232},
  series    = {Proceedings of Machine Learning Research},
  month     = {22--25 Aug},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v232/friedman23a/friedman23a.pdf},
  url       = {https://proceedings.mlr.press/v232/friedman23a.html},
  abstract  = {Meta-learning aims to extract common knowledge from similar training tasks in order to facilitate efficient and effective learning on future tasks. Several recent works have extended PAC-Bayes generalization error bounds to the meta-learning setting. By doing so, prior knowledge can be incorporated in the form of a distribution over hypotheses that is expected to lead to low error on new tasks that are similar to those that have been previously observed. In this work, we develop novel bounds for the generalization error on test tasks based on recent data-dependent bounds and provide a novel algorithm for adapting prior knowledge to downstream tasks in a potentially more effective manner. We demonstrate the effectiveness of our algorithm numerically for few-shot image classification tasks with deep neural networks and show a significant reduction in generalization error without any additional adaptation data.}
}
Endnote
%0 Conference Paper
%T Adaptive Meta-Learning via data-dependent PAC-Bayes bounds
%A Lior Friedman
%A Ron Meir
%B Proceedings of The 2nd Conference on Lifelong Learning Agents
%C Proceedings of Machine Learning Research
%D 2023
%E Sarath Chandar
%E Razvan Pascanu
%E Hanie Sedghi
%E Doina Precup
%F pmlr-v232-friedman23a
%I PMLR
%P 796--810
%U https://proceedings.mlr.press/v232/friedman23a.html
%V 232
%X Meta-learning aims to extract common knowledge from similar training tasks in order to facilitate efficient and effective learning on future tasks. Several recent works have extended PAC-Bayes generalization error bounds to the meta-learning setting. By doing so, prior knowledge can be incorporated in the form of a distribution over hypotheses that is expected to lead to low error on new tasks that are similar to those that have been previously observed. In this work, we develop novel bounds for the generalization error on test tasks based on recent data-dependent bounds and provide a novel algorithm for adapting prior knowledge to downstream tasks in a potentially more effective manner. We demonstrate the effectiveness of our algorithm numerically for few-shot image classification tasks with deep neural networks and show a significant reduction in generalization error without any additional adaptation data.
APA
Friedman, L. & Meir, R. (2023). Adaptive Meta-Learning via data-dependent PAC-Bayes bounds. Proceedings of The 2nd Conference on Lifelong Learning Agents, in Proceedings of Machine Learning Research 232:796-810. Available from https://proceedings.mlr.press/v232/friedman23a.html.