Can You Label Less by Using Out-of-Domain Data? Active & Transfer Learning with Few-shot Instructions

Rafal Kocielnik, Sara Kangaslahti, Shrimai Prabhumoye, Meena Hari, Michael Alvarez, Anima Anandkumar
Proceedings of The 1st Transfer Learning for Natural Language Processing Workshop, PMLR 203:22-32, 2023.

Abstract

Labeling social-media data for custom dimensions of toxicity and social bias is challenging and labor-intensive. Existing transfer and active learning approaches meant to reduce annotation effort require fine-tuning, which suffers from over-fitting to noise and can cause domain shift with small sample sizes. In this work, we propose a novel Active Transfer Few-shot Instructions (ATF) approach which requires no fine-tuning. ATF leverages the internal linguistic knowledge of pre-trained language models (PLMs) to facilitate the transfer of information from existing pre-labeled datasets (source-domain task) with minimum labeling effort on unlabeled target data (target-domain task). Our strategy can yield positive transfer, achieving a mean AUC gain of 10.5% compared to no transfer with a large 22B-parameter PLM. We further show that annotation of just a few target-domain samples via active learning can be beneficial for transfer, but the impact diminishes with more annotation effort (26% drop in gain between 100 and 2000 annotated examples). Finally, we find that not all transfer scenarios yield a positive gain, which seems related to the PLM's initial performance on the target-domain task.
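As described above, ATF builds few-shot instruction prompts from pre-labeled source-domain examples plus a handful of actively selected and annotated target-domain examples, and reads the label off the PLM's output probabilities rather than fine-tuning. The sketch below shows one plausible way such a loop could look; it is not the authors' code, and the prompt template, label verbalizers, uncertainty-based selection criterion, and the small "gpt2" stand-in for the paper's 22B-parameter PLM are all illustrative assumptions.

# Minimal sketch of few-shot instruction transfer with uncertainty-based
# active learning. NOT the paper's implementation: the prompt template,
# label verbalizers, selection criterion, and the small "gpt2" stand-in
# for the 22B-parameter PLM are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# Assumed verbalizers: the model answers " yes" / " no" after "Toxic:".
LABEL_TOKENS = {1: " yes", 0: " no"}


def build_prompt(source_examples, target_examples, query_text):
    """Compose a few-shot instruction prompt from labeled source-domain
    examples, any labeled target-domain examples, and the unlabeled query."""
    lines = ["Decide whether each comment is toxic. Answer yes or no.", ""]
    for text, label in list(source_examples) + list(target_examples):
        lines += [f"Comment: {text}", f"Toxic:{LABEL_TOKENS[label]}", ""]
    lines += [f"Comment: {query_text}", "Toxic:"]
    return "\n".join(lines)


@torch.no_grad()
def toxicity_probability(prompt):
    """Read P(toxic) from the next-token distribution over the two label
    words, so no fine-tuning of the PLM is needed."""
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    next_token_probs = torch.softmax(model(input_ids).logits[0, -1], dim=-1)
    p = {label: next_token_probs[tokenizer.encode(word)[0]].item()
         for label, word in LABEL_TOKENS.items()}
    return p[1] / (p[0] + p[1])  # renormalize over the two label words


def select_for_annotation(source_examples, unlabeled_target, k=2):
    """Uncertainty sampling: propose the k target comments whose score is
    closest to 0.5 for human labeling; once labeled, they can be appended
    to the prompt as target-domain shots."""
    scored = [(abs(toxicity_probability(build_prompt(source_examples, [], t)) - 0.5), t)
              for t in unlabeled_target]
    return [t for _, t in sorted(scored)[:k]]

Under these assumptions, a usage loop would call select_for_annotation, collect human labels for the returned comments, append them to the target-domain shots, and rescore the remaining pool with toxicity_probability. Scoring label-word probabilities instead of fine-tuning is what lets this style of transfer sidestep over-fitting to small, noisy target samples.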

Cite this Paper


BibTeX
@InProceedings{pmlr-v203-kocielnik23a,
  title     = {Can You Label Less by Using Out-of-Domain Data? Active & Transfer Learning with Few-shot Instructions},
  author    = {Kocielnik, Rafal and Kangaslahti, Sara and Prabhumoye, Shrimai and Hari, Meena and Alvarez, Michael and Anandkumar, Anima},
  booktitle = {Proceedings of The 1st Transfer Learning for Natural Language Processing Workshop},
  pages     = {22--32},
  year      = {2023},
  editor    = {Albalak, Alon and Zhou, Chunting and Raffel, Colin and Ramachandran, Deepak and Ruder, Sebastian and Ma, Xuezhe},
  volume    = {203},
  series    = {Proceedings of Machine Learning Research},
  month     = {03 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v203/kocielnik23a/kocielnik23a.pdf},
  url       = {https://proceedings.mlr.press/v203/kocielnik23a.html},
  abstract  = {Labeling social-media data for custom dimensions of toxicity and social bias is challenging and labor-intensive. Existing transfer and active learning approaches meant to reduce annotation effort require fine-tuning, which suffers from over-fitting to noise and can cause domain shift with small sample sizes. In this work, we propose a novel Active Transfer Few-shot Instructions (ATF) approach which requires no fine-tuning. ATF leverages the internal linguistic knowledge of pre-trained language models (PLMs) to facilitate the transfer of information from existing pre-labeled datasets (source-domain task) with minimum labeling effort on unlabeled target data (target-domain task). Our strategy can yield positive transfer achieving a mean AUC gain of 10.5% compared to no transfer with a large 22b parameter PLM. We further show that annotation of just a few target-domain samples via active learning can be beneficial for transfer, but the impact diminishes with more annotation effort (26% drop in gain between 100 and 2000 annotated examples). Finally, we find that not all transfer scenarios yield a positive gain, which seems related to the PLMs initial performance on the target-domain task.}
}
Endnote
%0 Conference Paper
%T Can You Label Less by Using Out-of-Domain Data? Active & Transfer Learning with Few-shot Instructions
%A Rafal Kocielnik
%A Sara Kangaslahti
%A Shrimai Prabhumoye
%A Meena Hari
%A Michael Alvarez
%A Anima Anandkumar
%B Proceedings of The 1st Transfer Learning for Natural Language Processing Workshop
%C Proceedings of Machine Learning Research
%D 2023
%E Alon Albalak
%E Chunting Zhou
%E Colin Raffel
%E Deepak Ramachandran
%E Sebastian Ruder
%E Xuezhe Ma
%F pmlr-v203-kocielnik23a
%I PMLR
%P 22--32
%U https://proceedings.mlr.press/v203/kocielnik23a.html
%V 203
%X Labeling social-media data for custom dimensions of toxicity and social bias is challenging and labor-intensive. Existing transfer and active learning approaches meant to reduce annotation effort require fine-tuning, which suffers from over-fitting to noise and can cause domain shift with small sample sizes. In this work, we propose a novel Active Transfer Few-shot Instructions (ATF) approach which requires no fine-tuning. ATF leverages the internal linguistic knowledge of pre-trained language models (PLMs) to facilitate the transfer of information from existing pre-labeled datasets (source-domain task) with minimum labeling effort on unlabeled target data (target-domain task). Our strategy can yield positive transfer achieving a mean AUC gain of 10.5% compared to no transfer with a large 22b parameter PLM. We further show that annotation of just a few target-domain samples via active learning can be beneficial for transfer, but the impact diminishes with more annotation effort (26% drop in gain between 100 and 2000 annotated examples). Finally, we find that not all transfer scenarios yield a positive gain, which seems related to the PLMs initial performance on the target-domain task.
APA
Kocielnik, R., Kangaslahti, S., Prabhumoye, S., Hari, M., Alvarez, M. & Anandkumar, A. (2023). Can You Label Less by Using Out-of-Domain Data? Active & Transfer Learning with Few-shot Instructions. Proceedings of The 1st Transfer Learning for Natural Language Processing Workshop, in Proceedings of Machine Learning Research 203:22-32. Available from https://proceedings.mlr.press/v203/kocielnik23a.html.