LGM-Net: Learning to Generate Matching Networks for Few-Shot Learning

Huaiyu Li, Weiming Dong, Xing Mei, Chongyang Ma, Feiyue Huang, Bao-Gang Hu
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:3825-3834, 2019.

Abstract

In this work, we propose a novel meta-learning approach for few-shot classification, which learns transferable prior knowledge across tasks and directly produces network parameters for similar unseen tasks with training samples. Our approach, called LGM-Net, includes two key modules, namely, TargetNet and MetaNet. The TargetNet module is a neural network for solving a specific task and the MetaNet module aims at learning to generate functional weights for TargetNet by observing training samples. We also present an intertask normalization strategy for the training process to leverage common information shared across different tasks. The experimental results on Omniglot and miniImageNet datasets demonstrate that LGM-Net can effectively adapt to similar unseen tasks and achieve competitive performance, and the results on synthetic datasets show that transferable prior knowledge is learned by the MetaNet module via mapping training data to functional weights. LGM-Net enables fast learning and adaptation since no further tuning steps are required compared to other meta-learning approaches.
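The core idea of the abstract, a MetaNet that maps a task's training (support) samples directly to functional weights for a TargetNet classifier, can be illustrated with a toy sketch. The sketch below is purely hypothetical: it uses simple per-class feature pooling as the weight generator, not the paper's actual learned MetaNet architecture, and the function names (`meta_net`, `target_net`) are illustrative only.

```python
import numpy as np

def meta_net(support_x, support_y, n_classes, dim):
    # Hypothetical weight generator: produce a linear classifier's
    # weights for TargetNet by pooling support-set features per class.
    # (A toy stand-in for "learning to generate weights"; the actual
    # MetaNet in the paper is a learned network.)
    W = np.zeros((n_classes, dim))
    for c in range(n_classes):
        W[c] = support_x[support_y == c].mean(axis=0)
    return W

def target_net(x, W):
    # TargetNet: classify query features with the generated weights,
    # with no further gradient-based tuning on the new task.
    logits = x @ W.T
    return logits.argmax(axis=1)

# 2-way, 1-shot toy task with 4-d features
dim, n_classes = 4, 2
support_x = np.stack([np.eye(dim)[0], np.eye(dim)[1]])  # one sample per class
support_y = np.array([0, 1])
W = meta_net(support_x, support_y, n_classes, dim)  # generated weights

query = np.array([[0.9, 0.1, 0.0, 0.0]])
pred = target_net(query, W)  # → class 0
```

Note that the generated weights are produced in a single forward pass over the support set, which mirrors the paper's claim that no further tuning steps are required at adaptation time.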

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-li19c,
  title     = {{LGM}-Net: Learning to Generate Matching Networks for Few-Shot Learning},
  author    = {Li, Huaiyu and Dong, Weiming and Mei, Xing and Ma, Chongyang and Huang, Feiyue and Hu, Bao-Gang},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {3825--3834},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/li19c/li19c.pdf},
  url       = {https://proceedings.mlr.press/v97/li19c.html},
  abstract  = {In this work, we propose a novel meta-learning approach for few-shot classification, which learns transferable prior knowledge across tasks and directly produces network parameters for similar unseen tasks with training samples. Our approach, called LGM-Net, includes two key modules, namely, TargetNet and MetaNet. The TargetNet module is a neural network for solving a specific task and the MetaNet module aims at learning to generate functional weights for TargetNet by observing training samples. We also present an intertask normalization strategy for the training process to leverage common information shared across different tasks. The experimental results on Omniglot and miniImageNet datasets demonstrate that LGM-Net can effectively adapt to similar unseen tasks and achieve competitive performance, and the results on synthetic datasets show that transferable prior knowledge is learned by the MetaNet module via mapping training data to functional weights. LGM-Net enables fast learning and adaptation since no further tuning steps are required compared to other meta-learning approaches.}
}
Endnote
%0 Conference Paper
%T LGM-Net: Learning to Generate Matching Networks for Few-Shot Learning
%A Huaiyu Li
%A Weiming Dong
%A Xing Mei
%A Chongyang Ma
%A Feiyue Huang
%A Bao-Gang Hu
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-li19c
%I PMLR
%P 3825--3834
%U https://proceedings.mlr.press/v97/li19c.html
%V 97
%X In this work, we propose a novel meta-learning approach for few-shot classification, which learns transferable prior knowledge across tasks and directly produces network parameters for similar unseen tasks with training samples. Our approach, called LGM-Net, includes two key modules, namely, TargetNet and MetaNet. The TargetNet module is a neural network for solving a specific task and the MetaNet module aims at learning to generate functional weights for TargetNet by observing training samples. We also present an intertask normalization strategy for the training process to leverage common information shared across different tasks. The experimental results on Omniglot and miniImageNet datasets demonstrate that LGM-Net can effectively adapt to similar unseen tasks and achieve competitive performance, and the results on synthetic datasets show that transferable prior knowledge is learned by the MetaNet module via mapping training data to functional weights. LGM-Net enables fast learning and adaptation since no further tuning steps are required compared to other meta-learning approaches.
APA
Li, H., Dong, W., Mei, X., Ma, C., Huang, F. & Hu, B. (2019). LGM-Net: Learning to Generate Matching Networks for Few-Shot Learning. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:3825-3834. Available from https://proceedings.mlr.press/v97/li19c.html.