HyperTransformer: Model Generation for Supervised and Semi-Supervised Few-Shot Learning

Andrey Zhmoginov, Mark Sandler, Maksym Vladymyrov
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:27075-27098, 2022.

Abstract

In this work we propose a HyperTransformer, a Transformer-based model for supervised and semi-supervised few-shot learning that generates the weights of a convolutional neural network (CNN) directly from support samples. Since the dependence of a small generated CNN model on a specific task is encoded by a high-capacity Transformer model, we effectively decouple the complexity of the large task space from the complexity of individual tasks. Our method is particularly effective for small target CNN architectures, where learning a fixed universal task-independent embedding is not optimal and better performance is attained when information about the task can modulate all model parameters. For larger models, we discover that generating the last layer alone produces results competitive with or better than those obtained with state-of-the-art methods, while remaining end-to-end differentiable.
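The mechanism described in the abstract admits a compact illustration: embeddings of labeled support samples are fed through a Transformer together with learnable placeholder tokens, and the outputs at the placeholder positions are decoded into the weights of a small CNN that is then applied to query images. The following is a minimal PyTorch sketch under assumed sizes and layer choices; the class name HyperTransformerSketch, the sample encoder, and all dimensions are illustrative rather than the paper's exact architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperTransformerSketch(nn.Module):
    """Sketch: a Transformer emits the filters of one small conv layer
    from few-shot support samples (illustrative, not the exact paper model)."""

    def __init__(self, n_way=5, embed_dim=64, out_ch=8, in_ch=3, k=3):
        super().__init__()
        self.out_ch, self.in_ch, self.k = out_ch, in_ch, k
        # Per-sample feature extractor (assumption: any small image encoder works).
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, embed_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.label_embed = nn.Embedding(n_way, embed_dim)
        # One learnable placeholder token per output filter; their Transformer
        # outputs are decoded into convolution weights.
        self.weight_tokens = nn.Parameter(torch.randn(out_ch, embed_dim))
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.to_filter = nn.Linear(embed_dim, in_ch * k * k)

    def forward(self, support_x, support_y, query_x):
        # support_x: (S, C, H, W), support_y: (S,), query_x: (Q, C, H, W)
        feats = self.encoder(support_x) + self.label_embed(support_y)    # (S, D)
        tokens = torch.cat([feats, self.weight_tokens], 0).unsqueeze(0)  # (1, S+out_ch, D)
        out = self.transformer(tokens)[0, feats.shape[0]:]               # (out_ch, D)
        w = self.to_filter(out).view(self.out_ch, self.in_ch, self.k, self.k)
        # Run the freshly generated convolution on the query images.
        return F.conv2d(query_x, w, padding=self.k // 2)

# Hypothetical 5-way, 1-shot episode with 32x32 images.
model = HyperTransformerSketch()
support_x, support_y = torch.randn(5, 3, 32, 32), torch.arange(5)
features = model(support_x, support_y, torch.randn(10, 3, 32, 32))  # (10, 8, 32, 32)

Training such a model would proceed episodically: sample a few-shot task, generate the CNN weights from its support set, and backpropagate the query-set loss through both the generated CNN and the Transformer, which is what makes the whole pipeline end-to-end differentiable.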

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-zhmoginov22a,
  title     = {{H}yper{T}ransformer: Model Generation for Supervised and Semi-Supervised Few-Shot Learning},
  author    = {Zhmoginov, Andrey and Sandler, Mark and Vladymyrov, Maksym},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {27075--27098},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/zhmoginov22a/zhmoginov22a.pdf},
  url       = {https://proceedings.mlr.press/v162/zhmoginov22a.html}
}
Endnote
%0 Conference Paper
%T HyperTransformer: Model Generation for Supervised and Semi-Supervised Few-Shot Learning
%A Andrey Zhmoginov
%A Mark Sandler
%A Maksym Vladymyrov
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-zhmoginov22a
%I PMLR
%P 27075--27098
%U https://proceedings.mlr.press/v162/zhmoginov22a.html
%V 162
APA
Zhmoginov, A., Sandler, M. & Vladymyrov, M. (2022). HyperTransformer: Model Generation for Supervised and Semi-Supervised Few-Shot Learning. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:27075-27098. Available from https://proceedings.mlr.press/v162/zhmoginov22a.html.
