Semi-Supervised Learning with Meta-Gradient

Taihong Xiao, Xin-Yu Zhang, Haolin Jia, Ming-Ming Cheng, Ming-Hsuan Yang
Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, PMLR 130:73-81, 2021.

Abstract

In this work, we propose a simple yet effective meta-learning algorithm for semi-supervised learning. We observe that most existing consistency-based approaches suffer from overfitting and limited generalization, especially when trained with only a small amount of labeled data. To alleviate this issue, we propose a learn-to-generalize regularization term that utilizes the label information, and we optimize the problem in a meta-learning fashion. Specifically, we seek pseudo labels for the unlabeled data such that the model generalizes well on the labeled data, which we formulate as a nested optimization problem. We address this problem using the meta-gradient, which bridges the pseudo labels and the regularization term. In addition, we introduce a simple first-order approximation that avoids computing higher-order derivatives, and we provide a theoretical convergence analysis. Extensive evaluations on the SVHN, CIFAR, and ImageNet datasets demonstrate that the proposed algorithm performs favorably against state-of-the-art methods.
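To make the nested formulation concrete, here is a minimal sketch in our own notation (the paper's exact symbols and losses may differ): let $\theta$ be the model parameters, $\mathcal{D}_l$ the labeled set, $\mathcal{D}_u$ the unlabeled set, and $Y_u$ the pseudo labels being sought. The bilevel problem is

\[ \min_{Y_u} \; \mathcal{L}_{\mathrm{label}}\bigl(\theta^{*}(Y_u); \mathcal{D}_l\bigr) \quad \text{s.t.} \quad \theta^{*}(Y_u) = \arg\min_{\theta} \; \mathcal{L}_{\mathrm{pseudo}}\bigl(\theta; \mathcal{D}_u, Y_u\bigr). \]

A common way to make such problems tractable is to replace the inner $\arg\min$ with a single gradient step, $\theta'(Y_u) = \theta - \alpha \nabla_{\theta} \mathcal{L}_{\mathrm{pseudo}}(\theta; \mathcal{D}_u, Y_u)$, so that the outer loss becomes differentiable in $Y_u$. By the chain rule, the resulting meta-gradient

\[ \nabla_{Y_u} \mathcal{L}_{\mathrm{label}}\bigl(\theta'(Y_u); \mathcal{D}_l\bigr) = -\alpha \, \nabla^{2}_{Y_u \theta} \mathcal{L}_{\mathrm{pseudo}} \, \nabla_{\theta'} \mathcal{L}_{\mathrm{label}} \]

involves second-order derivatives, which is precisely the cost that the paper's first-order approximation is said to avoid.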

Cite this Paper


BibTeX
@InProceedings{pmlr-v130-xiao21a,
  title     = {Semi-Supervised Learning with Meta-Gradient},
  author    = {Xiao, Taihong and Zhang, Xin-Yu and Jia, Haolin and Cheng, Ming-Ming and Yang, Ming-Hsuan},
  booktitle = {Proceedings of The 24th International Conference on Artificial Intelligence and Statistics},
  pages     = {73--81},
  year      = {2021},
  editor    = {Banerjee, Arindam and Fukumizu, Kenji},
  volume    = {130},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--15 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v130/xiao21a.pdf},
  url       = {https://proceedings.mlr.press/v130/xiao21a.html}
}
Endnote
%0 Conference Paper
%T Semi-Supervised Learning with Meta-Gradient
%A Taihong Xiao
%A Xin-Yu Zhang
%A Haolin Jia
%A Ming-Ming Cheng
%A Ming-Hsuan Yang
%B Proceedings of The 24th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2021
%E Arindam Banerjee
%E Kenji Fukumizu
%F pmlr-v130-xiao21a
%I PMLR
%P 73--81
%U https://proceedings.mlr.press/v130/xiao21a.html
%V 130
APA
Xiao, T., Zhang, X.-Y., Jia, H., Cheng, M.-M., & Yang, M.-H. (2021). Semi-Supervised Learning with Meta-Gradient. Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 130:73-81. Available from https://proceedings.mlr.press/v130/xiao21a.html.

Related Material

Download PDF: http://proceedings.mlr.press/v130/xiao21a.pdf