Laplacian Regularized Few-Shot Learning

Imtiaz Ziko, Jose Dolz, Eric Granger, Ismail Ben Ayed
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:11660-11670, 2020.

Abstract

We propose a transductive Laplacian-regularized inference for few-shot tasks. Given any feature embedding learned from the base classes, we minimize a quadratic binary-assignment function containing two terms: (1) a unary term assigning query samples to the nearest class prototype, and (2) a pairwise Laplacian term encouraging nearby query samples to have consistent label assignments. Our transductive inference does not re-train the base model, and can be viewed as a graph clustering of the query set, subject to supervision constraints from the support set. We derive a computationally efficient bound optimizer of a relaxation of our function, which computes independent (parallel) updates for each query sample, while guaranteeing convergence. Following a simple cross-entropy training on the base classes, and without complex meta-learning strategies, we conducted comprehensive experiments over five few-shot learning benchmarks. Our LaplacianShot consistently outperforms state-of-the-art methods by significant margins across different models, settings, and data sets. Furthermore, our transductive inference is very fast, with computational times that are close to inductive inference, and can be used for large-scale few-shot tasks.
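Illustrative sketch

As a rough illustration of the inference described in the abstract, the following NumPy sketch performs a Laplacian-regularized transductive assignment of query samples to class prototypes, with parallel softmax-style updates. The squared-Euclidean unary term, the binary kNN affinity graph, and the hyperparameter names (knn, lam, n_iters) are assumptions made for this sketch, not details taken from the paper.

import numpy as np

def _softmax(z):
    # Numerically stable row-wise softmax.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def laplacian_regularized_inference(query_feats, prototypes,
                                    knn=3, lam=1.0, n_iters=20):
    # query_feats: (n_query, d) embeddings of the query set.
    # prototypes:  (n_class, d) class prototypes computed from the support set.
    # Returns (n_query, n_class) soft assignments; argmax over classes gives labels.

    # Unary term: squared Euclidean distance from each query to each prototype.
    d2 = ((query_feats[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)

    # Pairwise term: binary k-nearest-neighbor affinities over the query set
    # (one common choice of Laplacian weights; assumed here for illustration).
    sim = query_feats @ query_feats.T
    np.fill_diagonal(sim, -np.inf)  # exclude self-similarity
    w = np.zeros_like(sim)
    nn = np.argsort(-sim, axis=1)[:, :knn]
    w[np.arange(sim.shape[0])[:, None], nn] = 1.0

    # Initialize soft assignments from the unary term alone.
    y = _softmax(-d2)

    # Iterate independent (parallelizable) closed-form updates: each query's
    # assignment trades off prototype proximity against agreement with the
    # current assignments of its graph neighbors.
    for _ in range(n_iters):
        y = _softmax(-d2 + lam * (w @ y))
    return y

For example, on a toy 2-way task:

rng = np.random.default_rng(0)
protos = rng.normal(size=(2, 16))
queries = protos[rng.integers(0, 2, size=10)] + 0.1 * rng.normal(size=(10, 16))
labels = laplacian_regularized_inference(queries, protos).argmax(1)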

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-ziko20a,
  title     = {{L}aplacian Regularized Few-Shot Learning},
  author    = {Ziko, Imtiaz and Dolz, Jose and Granger, Eric and Ayed, Ismail Ben},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {11660--11670},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/ziko20a/ziko20a.pdf},
  url       = {https://proceedings.mlr.press/v119/ziko20a.html}
}
Endnote
%0 Conference Paper
%T Laplacian Regularized Few-Shot Learning
%A Imtiaz Ziko
%A Jose Dolz
%A Eric Granger
%A Ismail Ben Ayed
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-ziko20a
%I PMLR
%P 11660--11670
%U https://proceedings.mlr.press/v119/ziko20a.html
%V 119
APA
Ziko, I., Dolz, J., Granger, E., & Ayed, I. B. (2020). Laplacian Regularized Few-Shot Learning. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:11660-11670. Available from https://proceedings.mlr.press/v119/ziko20a.html.