TapNet: Neural Network Augmented with Task-Adaptive Projection for Few-Shot Learning

Sung Whan Yoon, Jun Seo, Jaekyun Moon
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:7115-7123, 2019.

Abstract

Handling previously unseen tasks given only a few training examples continues to be a tough challenge in machine learning. We propose TapNets, neural networks augmented with task-adaptive projection for improved few-shot learning. Here, employing a meta-learning strategy with episode-based training, a network and a set of per-class reference vectors are learned across widely varying tasks. At the same time, for every episode, features in the embedding space are linearly projected into a new space as a form of quick task-specific conditioning. The training loss is obtained based on a distance metric between the query and the reference vectors in the projection space. This yields excellent generalization. When tested on the Omniglot, miniImageNet and tieredImageNet datasets, we obtain state-of-the-art classification accuracies under various few-shot scenarios.
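The episode-level procedure described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the learned embedding network is abstracted away (the inputs are assumed to already be embedded features), and the projection is constructed, per the abstract's description, so that the per-class reference vectors and the per-class average embeddings align after projection (here via the null space of their normalized differences; the exact construction in the paper may differ).

```python
import numpy as np

def episode_classify(support, support_labels, queries, refs, n_way):
    """Classify query embeddings in a task-adaptive projection space.

    support: (n_support, d) support-set embeddings
    support_labels: (n_support,) integer class labels in [0, n_way)
    queries: (n_query, d) query embeddings
    refs: (n_way, d) learned per-class reference vectors
    """
    # Per-class average embeddings for this episode's support set.
    c = np.stack([support[support_labels == k].mean(axis=0)
                  for k in range(n_way)])
    # Mismatch between normalized class averages and reference vectors.
    eps = (c / np.linalg.norm(c, axis=1, keepdims=True)
           - refs / np.linalg.norm(refs, axis=1, keepdims=True))
    # Task-adaptive projection: a basis of the null space of the mismatch
    # vectors, so that references and class averages coincide (up to scale)
    # after projection.
    _, _, vt = np.linalg.svd(eps)
    M = vt[n_way:]                      # shape (d - n_way, d)
    # Nearest-reference classification by squared Euclidean distance
    # in the projection space.
    q_proj = queries @ M.T
    r_proj = refs @ M.T
    d2 = ((q_proj[:, None, :] - r_proj[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1)
```

During meta-training, the distances `d2` would feed a softmax cross-entropy loss that is backpropagated through both the embedding network and the reference vectors; only the projection is recomputed per episode.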

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-yoon19a,
  title     = {{T}ap{N}et: Neural Network Augmented with Task-Adaptive Projection for Few-Shot Learning},
  author    = {Yoon, Sung Whan and Seo, Jun and Moon, Jaekyun},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {7115--7123},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/yoon19a/yoon19a.pdf},
  url       = {https://proceedings.mlr.press/v97/yoon19a.html},
  abstract  = {Handling previously unseen tasks after given only a few training examples continues to be a tough challenge in machine learning. We propose TapNets, neural networks augmented with task-adaptive projection for improved few-shot learning. Here, employing a meta-learning strategy with episode-based training, a network and a set of per-class reference vectors are learned across widely varying tasks. At the same time, for every episode, features in the embedding space are linearly projected into a new space as a form of quick task-specific conditioning. The training loss is obtained based on a distance metric between the query and the reference vectors in the projection space. Excellent generalization results in this way. When tested on the Omniglot, miniImageNet and tieredImageNet datasets, we obtain state of the art classification accuracies under various few-shot scenarios.}
}
Endnote
%0 Conference Paper
%T TapNet: Neural Network Augmented with Task-Adaptive Projection for Few-Shot Learning
%A Sung Whan Yoon
%A Jun Seo
%A Jaekyun Moon
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-yoon19a
%I PMLR
%P 7115--7123
%U https://proceedings.mlr.press/v97/yoon19a.html
%V 97
%X Handling previously unseen tasks after given only a few training examples continues to be a tough challenge in machine learning. We propose TapNets, neural networks augmented with task-adaptive projection for improved few-shot learning. Here, employing a meta-learning strategy with episode-based training, a network and a set of per-class reference vectors are learned across widely varying tasks. At the same time, for every episode, features in the embedding space are linearly projected into a new space as a form of quick task-specific conditioning. The training loss is obtained based on a distance metric between the query and the reference vectors in the projection space. Excellent generalization results in this way. When tested on the Omniglot, miniImageNet and tieredImageNet datasets, we obtain state of the art classification accuracies under various few-shot scenarios.
APA
Yoon, S.W., Seo, J. & Moon, J. (2019). TapNet: Neural Network Augmented with Task-Adaptive Projection for Few-Shot Learning. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:7115-7123. Available from https://proceedings.mlr.press/v97/yoon19a.html.