Co-training Improves Prompt-based Learning for Large Language Models

Hunter Lang, Monica N Agrawal, Yoon Kim, David Sontag
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:11985-12003, 2022.

Abstract

We demonstrate that co-training (Blum & Mitchell, 1998) can improve the performance of prompt-based learning by using unlabeled data. While prompting has emerged as a promising paradigm for few-shot and zero-shot learning, it is often brittle and requires much larger models compared to the standard supervised setup. We find that co-training makes it possible to improve the original prompt model and at the same time learn a smaller, downstream task-specific model. In the case where we only have partial access to a prompt model (e.g., output probabilities from GPT-3 (Brown et al., 2020)) we learn a calibration model over the prompt outputs. When we have full access to the prompt model’s gradients but full finetuning remains prohibitively expensive (e.g., T0 (Sanh et al., 2021)), we learn a set of soft prompt continuous vectors to iteratively update the prompt model. We find that models trained in this manner can significantly improve performance on challenging datasets where there is currently a large gap between prompt-based learning and fully-supervised models.
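The abstract's core loop can be illustrated with a minimal, self-contained sketch of classic co-training (Blum & Mitchell, 1998): two views of the data, each with its own model, alternately pseudo-label their most confident unlabeled points for the other. Everything here is a toy stand-in, not the paper's actual setup: the "views" are synthetic Gaussian features (rather than prompt-model probabilities and task-specific features), and each view's model is a nearest-centroid classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-view data: both views are noisy but informative about the label.
# (Stand-ins for the paper's prompt-output view and task-specific view.)
n = 200
y = np.tile([0, 1], n // 2)                            # true labels (mostly hidden)
view_a = y[:, None] + 0.4 * rng.normal(size=(n, 2))    # "view A"
view_b = y[:, None] + 0.4 * rng.normal(size=(n, 2))    # "view B"

labeled = np.zeros(n, dtype=bool)
labeled[:10] = True                                    # tiny seed set of true labels
pseudo = np.full(n, -1)
pseudo[labeled] = y[labeled]

def fit_centroids(view, mask, labels):
    """Nearest-centroid 'model' for one view: mean of each pseudo-class."""
    c0 = view[mask & (labels == 0)].mean(axis=0)
    c1 = view[mask & (labels == 1)].mean(axis=0)
    return c0, c1

def predict_with_confidence(view, c0, c1):
    """Predicted label plus a confidence score (margin between distances)."""
    d0 = np.linalg.norm(view - c0, axis=1)
    d1 = np.linalg.norm(view - c1, axis=1)
    return (d1 < d0).astype(int), np.abs(d0 - d1)

# Co-training loop: each view in turn labels confident points for the pool.
for _ in range(5):
    for view in (view_a, view_b):
        cand = np.where(~labeled)[0]
        if cand.size == 0:
            break
        c0, c1 = fit_centroids(view, labeled, pseudo)
        pred, conf = predict_with_confidence(view, c0, c1)
        top = cand[np.argsort(-conf[cand])[:20]]       # most confident unlabeled
        pseudo[top] = pred[top]
        labeled[top] = True

acc = (pseudo[labeled] == y[labeled]).mean()
```

The key design point, as in the abstract, is that each view only sees the other's confident pseudo-labels, so the two models can improve each other without any additional ground-truth annotation.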

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-lang22a,
  title     = {Co-training Improves Prompt-based Learning for Large Language Models},
  author    = {Lang, Hunter and Agrawal, Monica N and Kim, Yoon and Sontag, David},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {11985--12003},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/lang22a/lang22a.pdf},
  url       = {https://proceedings.mlr.press/v162/lang22a.html},
  abstract  = {We demonstrate that co-training (Blum \& Mitchell, 1998) can improve the performance of prompt-based learning by using unlabeled data. While prompting has emerged as a promising paradigm for few-shot and zero-shot learning, it is often brittle and requires much larger models compared to the standard supervised setup. We find that co-training makes it possible to improve the original prompt model and at the same time learn a smaller, downstream task-specific model. In the case where we only have partial access to a prompt model (e.g., output probabilities from GPT-3 (Brown et al., 2020)) we learn a calibration model over the prompt outputs. When we have full access to the prompt model's gradients but full finetuning remains prohibitively expensive (e.g., T0 (Sanh et al., 2021)), we learn a set of soft prompt continuous vectors to iteratively update the prompt model. We find that models trained in this manner can significantly improve performance on challenging datasets where there is currently a large gap between prompt-based learning and fully-supervised models.}
}
Endnote
%0 Conference Paper
%T Co-training Improves Prompt-based Learning for Large Language Models
%A Hunter Lang
%A Monica N Agrawal
%A Yoon Kim
%A David Sontag
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-lang22a
%I PMLR
%P 11985--12003
%U https://proceedings.mlr.press/v162/lang22a.html
%V 162
%X We demonstrate that co-training (Blum & Mitchell, 1998) can improve the performance of prompt-based learning by using unlabeled data. While prompting has emerged as a promising paradigm for few-shot and zero-shot learning, it is often brittle and requires much larger models compared to the standard supervised setup. We find that co-training makes it possible to improve the original prompt model and at the same time learn a smaller, downstream task-specific model. In the case where we only have partial access to a prompt model (e.g., output probabilities from GPT-3 (Brown et al., 2020)) we learn a calibration model over the prompt outputs. When we have full access to the prompt model’s gradients but full finetuning remains prohibitively expensive (e.g., T0 (Sanh et al., 2021)), we learn a set of soft prompt continuous vectors to iteratively update the prompt model. We find that models trained in this manner can significantly improve performance on challenging datasets where there is currently a large gap between prompt-based learning and fully-supervised models.
APA
Lang, H., Agrawal, M.N., Kim, Y. & Sontag, D. (2022). Co-training Improves Prompt-based Learning for Large Language Models. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:11985-12003. Available from https://proceedings.mlr.press/v162/lang22a.html.