On the Episodic Difficulty of Few-shot Learning

Yunwei Bai, Zhenfeng He, Junfeng Hu
Proceedings of The 14th Asian Conference on Machine Learning, PMLR 189:48-63, 2023.

Abstract

Dog vs. hot dog and dog vs. wolf: which is the harder comparison task? Simple as it is, this question matters for few-shot classification. Few-shot learning enables trained models to recognize unseen classes from just a few labelled samples, so a trained few-shot model must be able to assess the degree of similarity between unlabelled and labelled samples. In each few-shot learning episode, a labelled support set and an unlabelled query set are sampled from the training dataset for model training. In this episodic setting, most algorithms draw the data samples uniformly at random, which disregards the difficulty of each training episode, and that difficulty can make a difference: it is usually easier to tell a dog from a hot dog than a dog from a wolf. In this paper, we therefore study episodic difficulty, i.e., the difficulty of each training episode, report several insights, and propose strategies to exploit it. First, defining episodic difficulty as the training loss of an episode, we find and study a correlation between episodic difficulty and the visual similarity among the data samples in an episode. Second, we assess the respective usefulness of easy and difficult episodes for training. Finally, based on this assessment, we design a curriculum for few-shot learning that trains with incrementally difficult episodes. We observe that this approach speeds up convergence for few-shot algorithms, reducing average training time by around 50%, and also improves the final testing accuracy of meta-learning algorithms.
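To make the abstract's three ingredients concrete, below is a minimal PyTorch sketch of (i) episodic sampling of a support and query set, (ii) a loss-based episodic difficulty score, and (iii) an easiest-first curriculum over a pool of episodes. The helper names (`sample_episode`, `episodic_difficulty`, `curriculum`) and the prototypical-network-style distance classifier used for scoring are illustrative assumptions, not the paper's implementation; the paper's exact algorithms and sampling schedule may differ.

```python
import random
import torch
import torch.nn.functional as F

def sample_episode(data_by_class, n_way=5, k_shot=1, n_query=15):
    """Sample one N-way K-shot episode: a labelled support set plus an
    unlabelled query set drawn from n_way randomly chosen classes."""
    classes = random.sample(sorted(data_by_class), n_way)
    support, query, labels = [], [], []
    for idx, cls in enumerate(classes):
        picks = random.sample(data_by_class[cls], k_shot + n_query)
        support.extend(picks[:k_shot])
        query.extend(picks[k_shot:])
        labels.extend([idx] * n_query)
    return torch.stack(support), torch.stack(query), torch.tensor(labels)

def episodic_difficulty(embed, support, query, labels, n_way, k_shot):
    """Score an episode by its training loss under the current model.
    NOTE: assumes a prototypical-network-style nearest-prototype loss;
    the paper's algorithms may define the loss differently."""
    with torch.no_grad():
        protos = embed(support).view(n_way, k_shot, -1).mean(dim=1)
        logits = -torch.cdist(embed(query), protos)  # nearer = more likely
        return F.cross_entropy(logits, labels).item()

def curriculum(embed, data_by_class, pool_size=100,
               n_way=5, k_shot=1, n_query=15):
    """Yield a pool of episodes from easiest to hardest, so that
    training sees incrementally more difficult episodes."""
    pool = [sample_episode(data_by_class, n_way, k_shot, n_query)
            for _ in range(pool_size)]
    pool.sort(key=lambda ep: episodic_difficulty(embed, *ep, n_way, k_shot))
    yield from pool
```

One plausible way to use this: a training loop back-propagates through each yielded episode in order, then periodically re-draws and re-scores the pool, since difficulty is defined relative to the current model and shifts as training progresses.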

Cite this Paper


BibTeX
@InProceedings{pmlr-v189-bai23a,
  title     = {On the Episodic Difficulty of Few-shot Learning},
  author    = {Bai, Yunwei and He, Zhenfeng and Hu, Junfeng},
  booktitle = {Proceedings of The 14th Asian Conference on Machine Learning},
  pages     = {48--63},
  year      = {2023},
  editor    = {Khan, Emtiyaz and Gonen, Mehmet},
  volume    = {189},
  series    = {Proceedings of Machine Learning Research},
  month     = {12--14 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v189/bai23a/bai23a.pdf},
  url       = {https://proceedings.mlr.press/v189/bai23a.html}
}
Endnote
%0 Conference Paper
%T On the Episodic Difficulty of Few-shot Learning
%A Yunwei Bai
%A Zhenfeng He
%A Junfeng Hu
%B Proceedings of The 14th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Emtiyaz Khan
%E Mehmet Gonen
%F pmlr-v189-bai23a
%I PMLR
%P 48--63
%U https://proceedings.mlr.press/v189/bai23a.html
%V 189
APA
Bai, Y., He, Z., & Hu, J. (2023). On the Episodic Difficulty of Few-shot Learning. Proceedings of The 14th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 189:48-63. Available from https://proceedings.mlr.press/v189/bai23a.html.