Asymptotic Analysis of Meta-learning as a Recommendation Problem

Zhengying Liu, Isabelle Guyon
AAAI Workshop on Meta-Learning and MetaDL Challenge, PMLR 140:100-114, 2021.

Abstract

Meta-learning tackles various means of learning from past tasks to perform new tasks better. In this paper, we focus on one particular formulation of meta-learning: learning to recommend algorithms. We consider a finite number of algorithms, which can be executed on tasks drawn i.i.d. from a “meta-distribution”. We are interested in the generalization performance of meta-predict strategies, i.e., the expected algorithm performance on new tasks drawn from the same meta-distribution. Assuming perfect knowledge of the meta-distribution (i.e., the limit of a very large number of training tasks), we ask under which conditions algorithm recommendation can benefit from meta-learning, and thus, in some sense, “defeat” the No-Free-Lunch theorem. We analyze four meta-predict strategies: Random, Mean, Greedy, and Optimal. We identify optimality conditions for these strategies, and define a notion of meta-learning complexity as the cardinality of the minimal clique of complementary algorithms. We illustrate our findings with experiments on artificial and real data.
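To make the recommendation setting concrete, the sketch below (hypothetical, not the authors' implementation) instantiates the four meta-predict strategies on a synthetic task-by-algorithm performance matrix. It adopts the simplifying assumption that a strategy recommends a set of k algorithms and each new task then uses the best member of that set; the variable names, the set size k, and the uniform random scores are all illustrative assumptions.

    # Minimal, hypothetical sketch (not the authors' code) of the four
    # meta-predict strategies, assuming a strategy recommends a set of k
    # algorithms and a task uses the best member of that set.
    # S[i, j] is the performance of algorithm j on task i; the synthetic
    # uniform S stands in for tasks sampled from the meta-distribution.
    from itertools import combinations
    import numpy as np

    rng = np.random.default_rng(0)
    n_tasks, n_algos, k = 10_000, 6, 3
    S = rng.random((n_tasks, n_algos))  # synthetic task-by-algorithm scores

    def expected_score(subset):
        """Expected performance of the best algorithm in `subset`, over tasks."""
        return S[:, list(subset)].max(axis=1).mean()

    # Random: average score of random size-k recommendations.
    random_score = np.mean([expected_score(rng.choice(n_algos, k, replace=False))
                            for _ in range(100)])
    # Mean: the k algorithms with the best average performance.
    mean_set = list(np.argsort(S.mean(axis=0))[-k:])
    # Greedy: grow the set one algorithm at a time, maximizing marginal gain.
    greedy_set = []
    for _ in range(k):
        rest = [j for j in range(n_algos) if j not in greedy_set]
        greedy_set.append(max(rest, key=lambda j: expected_score(greedy_set + [j])))
    # Optimal: exhaustive search over all size-k subsets (feasible for small n_algos).
    optimal_set = max(combinations(range(n_algos), k), key=expected_score)

    print(random_score, expected_score(mean_set),
          expected_score(greedy_set), expected_score(optimal_set))

The gap between the Mean set and the Greedy/Optimal sets appears when algorithms are complementary (each excels on different tasks), which is the intuition the paper's complexity measure, the cardinality of the minimal clique of complementary algorithms, formalizes.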

Cite this Paper


BibTeX
@InProceedings{pmlr-v140-liu21a,
  title     = {Asymptotic Analysis of Meta-learning as a Recommendation Problem},
  author    = {Liu, Zhengying and Guyon, Isabelle},
  booktitle = {AAAI Workshop on Meta-Learning and MetaDL Challenge},
  pages     = {100--114},
  year      = {2021},
  editor    = {Guyon, Isabelle and van Rijn, Jan N. and Treguer, Sébastien and Vanschoren, Joaquin},
  volume    = {140},
  series    = {Proceedings of Machine Learning Research},
  month     = {09 Feb},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v140/liu21a/liu21a.pdf},
  url       = {https://proceedings.mlr.press/v140/liu21a.html}
}
Endnote
%0 Conference Paper
%T Asymptotic Analysis of Meta-learning as a Recommendation Problem
%A Zhengying Liu
%A Isabelle Guyon
%B AAAI Workshop on Meta-Learning and MetaDL Challenge
%C Proceedings of Machine Learning Research
%D 2021
%E Isabelle Guyon
%E Jan N. van Rijn
%E Sébastien Treguer
%E Joaquin Vanschoren
%F pmlr-v140-liu21a
%I PMLR
%P 100--114
%U https://proceedings.mlr.press/v140/liu21a.html
%V 140
APA
Liu, Z. & Guyon, I. (2021). Asymptotic Analysis of Meta-learning as a Recommendation Problem. AAAI Workshop on Meta-Learning and MetaDL Challenge, in Proceedings of Machine Learning Research 140:100-114. Available from https://proceedings.mlr.press/v140/liu21a.html.