Stochastic Deep Networks with Linear Competing Units for Model-Agnostic Meta-Learning

Konstantinos Kalais, Sotirios Chatzis
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:10586-10597, 2022.

Abstract

This work addresses meta-learning (ML) by considering deep networks with stochastic local winner-takes-all (LWTA) activations. This type of network unit results in sparse representations at each model layer, as the units are organized into blocks in which only one unit generates a non-zero output. The main operating principle of the introduced units relies on stochastic inference, as the network performs posterior sampling over the competing units of each block to select the winner. Therefore, the proposed networks are explicitly designed to extract input data representations of a sparse, stochastic nature, as opposed to the currently standard deterministic representation paradigm. Our approach produces state-of-the-art predictive accuracy on few-shot image classification and regression experiments, as well as reduced predictive error in an active learning setting; these improvements come with an immensely reduced computational cost. Code is available at: https://github.com/Kkalais/StochLWTA-ML
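The block-wise competition described above can be sketched in a few lines. The following is a minimal illustrative implementation, not the authors' code: it assumes a fully connected layer output reshaped into blocks of competing units, samples a single winner per block from a softmax (categorical) distribution over the block's activations, and zeroes out the losers. The function name `stochastic_lwta` and the default block size are illustrative choices.

```python
import numpy as np

def stochastic_lwta(x, block_size=2, rng=None):
    """Stochastic local winner-takes-all activation (illustrative sketch).

    Units are grouped into blocks of `block_size`; within each block one
    "winner" is sampled from a softmax over the unit activations, and all
    losing units output zero. Layer width must be divisible by block_size.
    """
    rng = np.random.default_rng() if rng is None else rng
    blocks = x.reshape(-1, block_size)              # (num_blocks, block_size)
    # Competition: softmax within each block gives winner probabilities.
    z = blocks - blocks.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    # Sample one winner per block from the categorical distribution.
    winners = np.array([rng.choice(block_size, p=p) for p in probs])
    mask = np.zeros_like(blocks)
    mask[np.arange(len(blocks)), winners] = 1.0
    return (blocks * mask).reshape(x.shape)         # sparse, stochastic output

# Example: a 4-unit layer output with two competing blocks of size 2.
out = stochastic_lwta(np.array([3.0, -1.0, 0.5, 0.2]), block_size=2)
```

Note the stochasticity: repeated calls on the same input can select different winners, so the representation is sampled rather than computed deterministically, which is the contrast with standard activations the abstract draws.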

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-kalais22a,
  title     = {Stochastic Deep Networks with Linear Competing Units for Model-Agnostic Meta-Learning},
  author    = {Kalais, Konstantinos and Chatzis, Sotirios},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {10586--10597},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/kalais22a/kalais22a.pdf},
  url       = {https://proceedings.mlr.press/v162/kalais22a.html},
  abstract  = {This work addresses meta-learning (ML) by considering deep networks with stochastic local winner-takes-all (LWTA) activations. This type of network unit results in sparse representations at each model layer, as the units are organized into blocks in which only one unit generates a non-zero output. The main operating principle of the introduced units relies on stochastic inference, as the network performs posterior sampling over the competing units of each block to select the winner. Therefore, the proposed networks are explicitly designed to extract input data representations of a sparse, stochastic nature, as opposed to the currently standard deterministic representation paradigm. Our approach produces state-of-the-art predictive accuracy on few-shot image classification and regression experiments, as well as reduced predictive error in an active learning setting; these improvements come with an immensely reduced computational cost. Code is available at: https://github.com/Kkalais/StochLWTA-ML}
}
Endnote
%0 Conference Paper
%T Stochastic Deep Networks with Linear Competing Units for Model-Agnostic Meta-Learning
%A Konstantinos Kalais
%A Sotirios Chatzis
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-kalais22a
%I PMLR
%P 10586--10597
%U https://proceedings.mlr.press/v162/kalais22a.html
%V 162
%X This work addresses meta-learning (ML) by considering deep networks with stochastic local winner-takes-all (LWTA) activations. This type of network unit results in sparse representations at each model layer, as the units are organized into blocks in which only one unit generates a non-zero output. The main operating principle of the introduced units relies on stochastic inference, as the network performs posterior sampling over the competing units of each block to select the winner. Therefore, the proposed networks are explicitly designed to extract input data representations of a sparse, stochastic nature, as opposed to the currently standard deterministic representation paradigm. Our approach produces state-of-the-art predictive accuracy on few-shot image classification and regression experiments, as well as reduced predictive error in an active learning setting; these improvements come with an immensely reduced computational cost. Code is available at: https://github.com/Kkalais/StochLWTA-ML
APA
Kalais, K. & Chatzis, S. (2022). Stochastic Deep Networks with Linear Competing Units for Model-Agnostic Meta-Learning. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:10586-10597. Available from https://proceedings.mlr.press/v162/kalais22a.html.