Warping Layer: Representation Learning for Label Structures in Weakly Supervised Learning

Yingyi Ma, Xinhua Zhang
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:7286-7299, 2022.

Abstract

Many learning tasks only receive weak supervision, such as semi-supervised learning and few-shot learning. With limited labeled data, prior structures become especially important, and prominent examples include hierarchies and mutual exclusions in the class space. However, most existing approaches only learn the representations separately in the feature space and the label space, and do not explicitly enforce the logical relationships. In this paper, we propose a novel warping layer that jointly learns representations in both spaces, and thanks to the modularity and differentiability, it can be directly embedded into generative models to leverage the prior hierarchical structure and unlabeled data. The effectiveness of the warping layer is demonstrated on both few-shot and semi-supervised learning, outperforming the state of the art in practice.
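The construction of the warping layer itself is given in the paper, not on this page. As a rough illustration only, the sketch below shows one way a differentiable module can encode a class hierarchy with mutually exclusive leaf classes, so that it can be dropped into a larger network and trained end to end; the class name HierarchyLayer and the ancestry matrix are hypothetical, invented for this example, and are not the authors' method.

# Illustrative sketch only, not the paper's warping layer.
import torch
import torch.nn as nn

class HierarchyLayer(nn.Module):
    """Maps leaf-class logits to probabilities over all hierarchy nodes."""
    def __init__(self, ancestry: torch.Tensor):
        # ancestry[i, j] = 1 if leaf class j is a descendant of internal node i
        super().__init__()
        self.register_buffer("ancestry", ancestry.float())

    def forward(self, leaf_logits: torch.Tensor) -> torch.Tensor:
        # Softmax enforces mutual exclusion among the leaf classes...
        leaf_probs = torch.softmax(leaf_logits, dim=-1)
        # ...and summing descendant probabilities yields internal-node
        # probabilities that are consistent with the hierarchy by construction.
        return leaf_probs @ self.ancestry.T

# Toy hierarchy: animal -> {cat, dog}, vehicle -> {car}
# Internal nodes: 0=animal, 1=vehicle; leaves: 0=cat, 1=dog, 2=car
ancestry = torch.tensor([[1, 1, 0],   # animal covers cat and dog
                         [0, 0, 1]])  # vehicle covers car
layer = HierarchyLayer(ancestry)
node_probs = layer(torch.randn(4, 3))  # batch of 4; differentiable throughout

Because both the softmax and the matrix product are differentiable, a module of this kind is modular in the sense the abstract describes: gradients flow through it, so it can sit inside any feature extractor or generative model while the logical constraints (exclusion among leaves, consistency along the hierarchy) hold exactly rather than being encouraged by a penalty.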

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-ma22a,
  title     = {Warping Layer: Representation Learning for Label Structures in Weakly Supervised Learning},
  author    = {Ma, Yingyi and Zhang, Xinhua},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {7286--7299},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/ma22a/ma22a.pdf},
  url       = {https://proceedings.mlr.press/v151/ma22a.html},
  abstract  = {Many learning tasks only receive weak supervision, such as semi-supervised learning and few-shot learning. With limited labeled data, prior structures become especially important, and prominent examples include hierarchies and mutual exclusions in the class space. However, most existing approaches only learn the representations separately in the feature space and the label space, and do not explicitly enforce the logical relationships. In this paper, we propose a novel warping layer that jointly learns representations in both spaces, and thanks to the modularity and differentiability, it can be directly embedded into generative models to leverage the prior hierarchical structure and unlabeled data. The effectiveness of the warping layer is demonstrated on both few-shot and semi-supervised learning, outperforming the state of the art in practice.}
}
Endnote
%0 Conference Paper
%T Warping Layer: Representation Learning for Label Structures in Weakly Supervised Learning
%A Yingyi Ma
%A Xinhua Zhang
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-ma22a
%I PMLR
%P 7286--7299
%U https://proceedings.mlr.press/v151/ma22a.html
%V 151
%X Many learning tasks only receive weak supervision, such as semi-supervised learning and few-shot learning. With limited labeled data, prior structures become especially important, and prominent examples include hierarchies and mutual exclusions in the class space. However, most existing approaches only learn the representations separately in the feature space and the label space, and do not explicitly enforce the logical relationships. In this paper, we propose a novel warping layer that jointly learns representations in both spaces, and thanks to the modularity and differentiability, it can be directly embedded into generative models to leverage the prior hierarchical structure and unlabeled data. The effectiveness of the warping layer is demonstrated on both few-shot and semi-supervised learning, outperforming the state of the art in practice.
APA
Ma, Y. & Zhang, X. (2022). Warping Layer: Representation Learning for Label Structures in Weakly Supervised Learning. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:7286-7299. Available from https://proceedings.mlr.press/v151/ma22a.html.
