XtarNet: Learning to Extract Task-Adaptive Representation for Incremental Few-Shot Learning

Sung Whan Yoon, Do-Yeon Kim, Jun Seo, Jaekyun Moon
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:10852-10860, 2020.

Abstract

Learning novel concepts while preserving prior knowledge is a long-standing challenge in machine learning. The challenge gets greater when a novel task is given with only a few labeled examples, a problem known as incremental few-shot learning. We propose XtarNet, which learns to extract task-adaptive representation (TAR) for facilitating incremental few-shot learning. The method utilizes a backbone network pretrained on a set of base categories while also employing additional modules that are meta-trained across episodes. Given a new task, the novel feature extracted from the meta-trained modules is mixed with the base feature obtained from the pretrained model. The process of combining two different features provides TAR and is also controlled by meta-trained modules. The TAR contains effective information for classifying both novel and base categories. The base and novel classifiers quickly adapt to a given task by utilizing the TAR. Experiments on standard image datasets indicate that XtarNet achieves state-of-the-art incremental few-shot learning performance. The concept of TAR can also be used in conjunction with existing incremental few-shot learning methods; extensive simulation results in fact show that applying TAR enhances the known methods significantly.
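The abstract's core mechanism — a frozen pretrained backbone feature mixed with a meta-trained module's feature to form the TAR, which both base and novel classifiers then use — can be sketched roughly as follows. This is a toy illustration only: the extractor maps, the scalar gate, and nearest-prototype classification are simplifying assumptions, not the paper's actual architecture.

```python
# Toy sketch of the task-adaptive representation (TAR) idea:
# a base feature from a (frozen) pretrained backbone is combined with a
# novel feature from a meta-trained module, and classification happens
# in the resulting shared TAR space. All maps below are stand-ins.
import math

D = 8  # feature dimension (hypothetical)

def base_extractor(x):
    """Stands in for the backbone pretrained on base categories (frozen)."""
    return list(x)                       # identity map as a toy stand-in

def novel_extractor(x):
    """Stands in for the meta-trained module producing the novel feature."""
    return [0.5 * v for v in x]          # scaled map as a toy stand-in

def mix(base_feat, novel_feat, gate=0.7):
    """Combine the two features into the TAR. In the paper the combining
    process is itself controlled by meta-trained modules; here it is a
    fixed scalar gate for illustration."""
    return [gate * b + (1.0 - gate) * n for b, n in zip(base_feat, novel_feat)]

def tar(x):
    return mix(base_extractor(x), novel_extractor(x))

def classify(query_tar, prototypes):
    """Nearest-prototype classification over base and novel categories,
    both represented in the same TAR space."""
    def dist(p):
        return math.sqrt(sum((q - v) ** 2 for q, v in zip(query_tar, p)))
    return min(prototypes, key=lambda c: dist(prototypes[c]))

# Toy episode: one base-class and one novel-class prototype.
protos = {
    "base_class": tar([1.0] * D),
    "novel_class": tar([-1.0] * D),
}
print(classify(tar([0.9] * D), protos))   # -> base_class
```

The point of the sketch is that base and novel prototypes live in one task-adapted space, so a single nearest-prototype rule covers both category sets.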

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-yoon20b,
  title     = {{X}tar{N}et: Learning to Extract Task-Adaptive Representation for Incremental Few-Shot Learning},
  author    = {Yoon, Sung Whan and Kim, Do-Yeon and Seo, Jun and Moon, Jaekyun},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {10852--10860},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/yoon20b/yoon20b.pdf},
  url       = {https://proceedings.mlr.press/v119/yoon20b.html},
  abstract  = {Learning novel concepts while preserving prior knowledge is a long-standing challenge in machine learning. The challenge gets greater when a novel task is given with only a few labeled examples, a problem known as incremental few-shot learning. We propose XtarNet, which learns to extract task-adaptive representation (TAR) for facilitating incremental few-shot learning. The method utilizes a backbone network pretrained on a set of base categories while also employing additional modules that are meta-trained across episodes. Given a new task, the novel feature extracted from the meta-trained modules is mixed with the base feature obtained from the pretrained model. The process of combining two different features provides TAR and is also controlled by meta-trained modules. The TAR contains effective information for classifying both novel and base categories. The base and novel classifiers quickly adapt to a given task by utilizing the TAR. Experiments on standard image datasets indicate that XtarNet achieves state-of-the-art incremental few-shot learning performance. The concept of TAR can also be used in conjunction with existing incremental few-shot learning methods; extensive simulation results in fact show that applying TAR enhances the known methods significantly.}
}
Endnote
%0 Conference Paper
%T XtarNet: Learning to Extract Task-Adaptive Representation for Incremental Few-Shot Learning
%A Sung Whan Yoon
%A Do-Yeon Kim
%A Jun Seo
%A Jaekyun Moon
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-yoon20b
%I PMLR
%P 10852--10860
%U https://proceedings.mlr.press/v119/yoon20b.html
%V 119
%X Learning novel concepts while preserving prior knowledge is a long-standing challenge in machine learning. The challenge gets greater when a novel task is given with only a few labeled examples, a problem known as incremental few-shot learning. We propose XtarNet, which learns to extract task-adaptive representation (TAR) for facilitating incremental few-shot learning. The method utilizes a backbone network pretrained on a set of base categories while also employing additional modules that are meta-trained across episodes. Given a new task, the novel feature extracted from the meta-trained modules is mixed with the base feature obtained from the pretrained model. The process of combining two different features provides TAR and is also controlled by meta-trained modules. The TAR contains effective information for classifying both novel and base categories. The base and novel classifiers quickly adapt to a given task by utilizing the TAR. Experiments on standard image datasets indicate that XtarNet achieves state-of-the-art incremental few-shot learning performance. The concept of TAR can also be used in conjunction with existing incremental few-shot learning methods; extensive simulation results in fact show that applying TAR enhances the known methods significantly.
APA
Yoon, S.W., Kim, D., Seo, J. & Moon, J. (2020). XtarNet: Learning to Extract Task-Adaptive Representation for Incremental Few-Shot Learning. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:10852-10860. Available from https://proceedings.mlr.press/v119/yoon20b.html.