On Data Efficiency of Meta-learning

Maruan Al-Shedivat, Liam Li, Eric Xing, Ameet Talwalkar
Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, PMLR 130:1369-1377, 2021.

Abstract

Meta-learning has enabled learning statistical models that can be quickly adapted to new prediction tasks. Motivated by use cases in personalized federated learning, we study an often overlooked aspect of modern meta-learning algorithms: their data efficiency. To shed more light on which methods are more efficient, we use techniques from algorithmic stability to derive bounds on the transfer risk that have important practical implications, indicating how much supervision is needed and how it must be allocated for each method to attain the desired level of generalization. Further, we introduce a simple new framework for evaluating meta-learning methods under a limit on the available supervision, conduct an empirical study of MAML, Reptile, and ProtoNets, and demonstrate the differences in the behavior of these methods on few-shot and federated learning benchmarks. Finally, we propose active meta-learning, which incorporates active data selection into learning-to-learn, leading to better performance of all methods in the limited-supervision regime.
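To make the adaptation setup concrete, here is a minimal first-order sketch of MAML-style meta-learning on toy 1-D linear-regression tasks. This is illustrative only, not the authors' code: the step sizes `alpha` and `beta`, the support/query split (`k_support`, `k_query`, standing in for the per-task supervision budget the paper studies), and the first-order outer update are assumptions made for the example.

```python
# Minimal first-order MAML-style sketch (illustrative; not the paper's code).
# Tasks are 1-D linear regressions y = a * x with task-specific slope `a`.
# The inner loop adapts on a small "support" set; the outer loop updates the
# meta-initialization using the loss on a held-out "query" set.
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """Draw a task: a random slope for y = a * x."""
    return rng.uniform(-2.0, 2.0)

def make_data(a, n):
    """Sample n noisy (x, y) pairs from the task with slope a."""
    x = rng.normal(size=n)
    y = a * x + 0.1 * rng.normal(size=n)
    return x, y

def mse_grad(w, x, y):
    """Gradient of mean squared error for the linear model y_hat = w * x."""
    return 2.0 * np.mean((w * x - y) * x)

w = 0.0                      # meta-initialization (a single scalar weight)
alpha, beta = 0.1, 0.01      # inner and outer step sizes (assumed values)
k_support, k_query = 5, 10   # per-task supervision split (assumed values)

for step in range(2000):
    a = sample_task()
    xs, ys = make_data(a, k_support)
    xq, yq = make_data(a, k_query)
    w_adapted = w - alpha * mse_grad(w, xs, ys)   # inner adaptation step
    w -= beta * mse_grad(w_adapted, xq, yq)       # first-order outer step
```

In this toy setup, how the per-task labels are split between the support (adaptation) and query (evaluation) sets is exactly the kind of supervision-allocation question the paper's transfer-risk bounds address.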

Cite this Paper


BibTeX
@InProceedings{pmlr-v130-al-shedivat21a,
  title     = {On Data Efficiency of Meta-learning},
  author    = {Al-Shedivat, Maruan and Li, Liam and Xing, Eric and Talwalkar, Ameet},
  booktitle = {Proceedings of The 24th International Conference on Artificial Intelligence and Statistics},
  pages     = {1369--1377},
  year      = {2021},
  editor    = {Banerjee, Arindam and Fukumizu, Kenji},
  volume    = {130},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--15 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v130/al-shedivat21a/al-shedivat21a.pdf},
  url       = {https://proceedings.mlr.press/v130/al-shedivat21a.html},
  abstract  = {Meta-learning has enabled learning statistical models that can be quickly adapted to new prediction tasks. Motivated by use cases in personalized federated learning, we study an often overlooked aspect of modern meta-learning algorithms: their data efficiency. To shed more light on which methods are more efficient, we use techniques from algorithmic stability to derive bounds on the transfer risk that have important practical implications, indicating how much supervision is needed and how it must be allocated for each method to attain the desired level of generalization. Further, we introduce a simple new framework for evaluating meta-learning methods under a limit on the available supervision, conduct an empirical study of MAML, Reptile, and ProtoNets, and demonstrate the differences in the behavior of these methods on few-shot and federated learning benchmarks. Finally, we propose active meta-learning, which incorporates active data selection into learning-to-learn, leading to better performance of all methods in the limited-supervision regime.}
}
Endnote
%0 Conference Paper
%T On Data Efficiency of Meta-learning
%A Maruan Al-Shedivat
%A Liam Li
%A Eric Xing
%A Ameet Talwalkar
%B Proceedings of The 24th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2021
%E Arindam Banerjee
%E Kenji Fukumizu
%F pmlr-v130-al-shedivat21a
%I PMLR
%P 1369--1377
%U https://proceedings.mlr.press/v130/al-shedivat21a.html
%V 130
%X Meta-learning has enabled learning statistical models that can be quickly adapted to new prediction tasks. Motivated by use cases in personalized federated learning, we study an often overlooked aspect of modern meta-learning algorithms: their data efficiency. To shed more light on which methods are more efficient, we use techniques from algorithmic stability to derive bounds on the transfer risk that have important practical implications, indicating how much supervision is needed and how it must be allocated for each method to attain the desired level of generalization. Further, we introduce a simple new framework for evaluating meta-learning methods under a limit on the available supervision, conduct an empirical study of MAML, Reptile, and ProtoNets, and demonstrate the differences in the behavior of these methods on few-shot and federated learning benchmarks. Finally, we propose active meta-learning, which incorporates active data selection into learning-to-learn, leading to better performance of all methods in the limited-supervision regime.
APA
Al-Shedivat, M., Li, L., Xing, E. & Talwalkar, A. (2021). On Data Efficiency of Meta-learning. Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 130:1369-1377. Available from https://proceedings.mlr.press/v130/al-shedivat21a.html.
