Meta Variance Transfer: Learning to Augment from the Others

Seong-Jin Park, Seungju Han, Ji-Won Baek, Insoo Kim, Juhwan Song, Hae Beom Lee, Jae-Joon Han, Sung Ju Hwang
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:7510-7520, 2020.

Abstract

Humans can robustly recognize objects under various factors of variation such as nonrigid transformations, background noise, and changes in lighting conditions. However, training deep learning models generally requires a huge number of data instances under diverse variations to ensure robustness. To alleviate the need to collect large amounts of data and to generalize better from scarce data instances, we propose a novel meta-learning method that learns to transfer factors of variation from one class to another, so that it can improve classification performance on unseen examples. Transferred variations generate virtual samples that augment the feature space of the target class during training, simulating upcoming query samples with similar variations. By sharing the factors of variation across different classes, the model becomes more robust to variations in unseen examples and in tasks with only a small number of examples per class. We validate our model on multiple benchmark datasets for few-shot classification and face recognition, on which it significantly improves the performance of the base model, outperforming relevant baselines.
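The core idea of transferring a factor of variation between classes can be illustrated with a minimal feature-space sketch. This is not the paper's actual architecture (which is meta-learned); it is a hypothetical NumPy example in which a source example's "variation" is its offset from its class mean, and that offset is added to a scarce target class's mean to produce a virtual augmented feature:

```python
# Illustrative sketch (hypothetical, not the authors' method): transfer
# factors of variation from a well-sampled source class to a scarce target
# class in feature space.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4-D features: many source examples, few target examples.
source_feats = rng.normal(loc=2.0, scale=1.0, size=(50, 4))
target_feats = rng.normal(loc=-1.0, scale=0.2, size=(3, 4))

source_mean = source_feats.mean(axis=0)
target_mean = target_feats.mean(axis=0)

# Each source example's variation is its offset from the source class mean;
# adding it to the target mean yields a virtual sample for the target class.
variations = source_feats - source_mean
virtual_target_feats = target_mean + variations

print(virtual_target_feats.shape)  # (50, 4)
```

The virtual samples share the target class's mean but inherit the source class's spread, which is the intuition behind augmenting a few-shot class with variations borrowed from data-rich classes.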

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-park20b,
  title     = {Meta Variance Transfer: Learning to Augment from the Others},
  author    = {Park, Seong-Jin and Han, Seungju and Baek, Ji-Won and Kim, Insoo and Song, Juhwan and Lee, Hae Beom and Han, Jae-Joon and Hwang, Sung Ju},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {7510--7520},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/park20b/park20b.pdf},
  url       = {https://proceedings.mlr.press/v119/park20b.html},
  abstract  = {Humans can robustly recognize objects under various factors of variation such as nonrigid transformations, background noise, and changes in lighting conditions. However, training deep learning models generally requires a huge number of data instances under diverse variations to ensure robustness. To alleviate the need to collect large amounts of data and to generalize better from scarce data instances, we propose a novel meta-learning method that learns to transfer factors of variation from one class to another, so that it can improve classification performance on unseen examples. Transferred variations generate virtual samples that augment the feature space of the target class during training, simulating upcoming query samples with similar variations. By sharing the factors of variation across different classes, the model becomes more robust to variations in unseen examples and in tasks with only a small number of examples per class. We validate our model on multiple benchmark datasets for few-shot classification and face recognition, on which it significantly improves the performance of the base model, outperforming relevant baselines.}
}
Endnote
%0 Conference Paper
%T Meta Variance Transfer: Learning to Augment from the Others
%A Seong-Jin Park
%A Seungju Han
%A Ji-Won Baek
%A Insoo Kim
%A Juhwan Song
%A Hae Beom Lee
%A Jae-Joon Han
%A Sung Ju Hwang
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-park20b
%I PMLR
%P 7510--7520
%U https://proceedings.mlr.press/v119/park20b.html
%V 119
%X Humans can robustly recognize objects under various factors of variation such as nonrigid transformations, background noise, and changes in lighting conditions. However, training deep learning models generally requires a huge number of data instances under diverse variations to ensure robustness. To alleviate the need to collect large amounts of data and to generalize better from scarce data instances, we propose a novel meta-learning method that learns to transfer factors of variation from one class to another, so that it can improve classification performance on unseen examples. Transferred variations generate virtual samples that augment the feature space of the target class during training, simulating upcoming query samples with similar variations. By sharing the factors of variation across different classes, the model becomes more robust to variations in unseen examples and in tasks with only a small number of examples per class. We validate our model on multiple benchmark datasets for few-shot classification and face recognition, on which it significantly improves the performance of the base model, outperforming relevant baselines.
APA
Park, S.-J., Han, S., Baek, J.-W., Kim, I., Song, J., Lee, H. B., Han, J.-J., & Hwang, S. J. (2020). Meta Variance Transfer: Learning to Augment from the Others. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:7510-7520. Available from https://proceedings.mlr.press/v119/park20b.html.