Sharp-MAML: Sharpness-Aware Model-Agnostic Meta Learning

Momin Abbas, Quan Xiao, Lisha Chen, Pin-Yu Chen, Tianyi Chen
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:10-32, 2022.

Abstract

Model-agnostic meta learning (MAML) is currently one of the dominant approaches for few-shot meta-learning. Despite its effectiveness, the optimization of MAML can be challenging due to its innate bilevel problem structure. Specifically, the loss landscape of MAML is much more complex, with possibly more saddle points and local minimizers, than that of its empirical risk minimization counterpart. To address this challenge, we leverage the recently proposed sharpness-aware minimization and develop a sharpness-aware MAML approach that we term Sharp-MAML. We empirically demonstrate that Sharp-MAML and its computation-efficient variant can outperform the plain-vanilla MAML baseline (e.g., +3% accuracy on Mini-Imagenet). We complement the empirical study with a convergence rate analysis and a generalization bound for Sharp-MAML. To the best of our knowledge, this is the first empirical and theoretical study of sharpness-aware minimization in the context of bilevel learning.
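The abstract only names the ingredients; as a rough illustration of how sharpness-aware minimization can be combined with MAML's bilevel structure, the sketch below applies a SAM-style weight perturbation at the outer (meta) level of a first-order MAML loop on a toy sinusoid-regression problem. The toy task, the two-layer network, the perturbation radius rho, and all learning rates are illustrative assumptions, not the authors' implementation; the paper's actual Sharp-MAML variants and its computation-efficient version are specified in the full text.

# Minimal sketch (assumed setup, not the paper's code): SAM-style perturbation
# applied to the outer update of first-order MAML on toy sinusoid regression.
import torch

torch.manual_seed(0)

def sample_task(batch=10):
    """One sinusoid task y = a*sin(x + b); returns (support, query) sets."""
    a, b = torch.rand(1) * 4 + 1, torch.rand(1) * 3.14
    def draw():
        x = torch.rand(batch, 1) * 10 - 5
        return x, a * torch.sin(x + b)
    return draw(), draw()

def forward(params, x):
    w1, b1, w2, b2 = params
    return torch.tanh(x @ w1 + b1) @ w2 + b2

def task_loss(params, x, y):
    return ((forward(params, x) - y) ** 2).mean()

def inner_adapt(params, x, y, lr=0.01):
    """One (first-order) MAML inner-loop gradient step."""
    grads = torch.autograd.grad(task_loss(params, x, y), params)
    return [p - lr * g for p, g in zip(params, grads)]

# Meta-parameters of a tiny two-layer network.
meta = [(torch.randn(1, 32) * 0.1).requires_grad_(),
        torch.zeros(32, requires_grad=True),
        (torch.randn(32, 1) * 0.1).requires_grad_(),
        torch.zeros(1, requires_grad=True)]

rho, meta_lr = 0.05, 1e-3  # SAM radius and outer step size (assumed values)
for step in range(500):
    (x_s, y_s), (x_q, y_q) = sample_task()

    # Plain outer (meta) gradient: adapt on support set, evaluate on query set.
    adapted = inner_adapt(meta, x_s, y_s)
    outer_grads = torch.autograd.grad(task_loss(adapted, x_q, y_q), meta)

    # SAM ascent step: perturb the meta-parameters toward the worst case
    # within an l2 ball of radius rho.
    norm = torch.sqrt(sum((g ** 2).sum() for g in outer_grads)) + 1e-12
    perturbed = [(p + rho * g / norm).detach().requires_grad_()
                 for p, g in zip(meta, outer_grads)]

    # Sharpness-aware outer gradient, evaluated at the perturbed point.
    adapted_p = inner_adapt(perturbed, x_s, y_s)
    sam_grads = torch.autograd.grad(task_loss(adapted_p, x_q, y_q), perturbed)

    # Descent step on the original meta-parameters.
    with torch.no_grad():
        for p, g in zip(meta, sam_grads):
            p -= meta_lr * g

Only the outer level is perturbed in this sketch; perturbing the inner adaptation step as well, or approximating one of the two gradient evaluations per meta-update to save computation, are natural variations along the lines the abstract mentions.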

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-abbas22b,
  title     = {Sharp-{MAML}: Sharpness-Aware Model-Agnostic Meta Learning},
  author    = {Abbas, Momin and Xiao, Quan and Chen, Lisha and Chen, Pin-Yu and Chen, Tianyi},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {10--32},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/abbas22b/abbas22b.pdf},
  url       = {https://proceedings.mlr.press/v162/abbas22b.html},
  abstract  = {Model-agnostic meta learning (MAML) is currently one of the dominating approaches for few-shot meta-learning. Albeit its effectiveness, the optimization of MAML can be challenging due to the innate bilevel problem structure. Specifically, the loss landscape of MAML is much more complex with possibly more saddle points and local minimizers than its empirical risk minimization counterpart. To address this challenge, we leverage the recently invented sharpness-aware minimization and develop a sharpness-aware MAML approach that we term Sharp-MAML. We empirically demonstrate that Sharp-MAML and its computation-efficient variant can outperform the plain-vanilla MAML baseline (e.g., +3% accuracy on Mini-Imagenet). We complement the empirical study with the convergence rate analysis and the generalization bound of Sharp-MAML. To the best of our knowledge, this is the first empirical and theoretical study on sharpness-aware minimization in the context of bilevel learning.}
}
Endnote
%0 Conference Paper
%T Sharp-MAML: Sharpness-Aware Model-Agnostic Meta Learning
%A Momin Abbas
%A Quan Xiao
%A Lisha Chen
%A Pin-Yu Chen
%A Tianyi Chen
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-abbas22b
%I PMLR
%P 10--32
%U https://proceedings.mlr.press/v162/abbas22b.html
%V 162
%X Model-agnostic meta learning (MAML) is currently one of the dominating approaches for few-shot meta-learning. Albeit its effectiveness, the optimization of MAML can be challenging due to the innate bilevel problem structure. Specifically, the loss landscape of MAML is much more complex with possibly more saddle points and local minimizers than its empirical risk minimization counterpart. To address this challenge, we leverage the recently invented sharpness-aware minimization and develop a sharpness-aware MAML approach that we term Sharp-MAML. We empirically demonstrate that Sharp-MAML and its computation-efficient variant can outperform the plain-vanilla MAML baseline (e.g., +3% accuracy on Mini-Imagenet). We complement the empirical study with the convergence rate analysis and the generalization bound of Sharp-MAML. To the best of our knowledge, this is the first empirical and theoretical study on sharpness-aware minimization in the context of bilevel learning.
APA
Abbas, M., Xiao, Q., Chen, L., Chen, P., & Chen, T. (2022). Sharp-MAML: Sharpness-Aware Model-Agnostic Meta Learning. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:10-32. Available from https://proceedings.mlr.press/v162/abbas22b.html.