NLP From Scratch Without Large-Scale Pretraining: A Simple and Efficient Framework

Xingcheng Yao, Yanan Zheng, Xiaocong Yang, Zhilin Yang
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:25438-25451, 2022.

Abstract

Pretrained language models have become the standard approach for many NLP tasks due to strong performance, but they are very expensive to train. We propose a simple and efficient learning framework, TLM, that does not rely on large-scale pretraining. Given some labeled task data and a large general corpus, TLM uses task data as queries to retrieve a tiny subset of the general corpus and jointly optimizes the task objective and the language modeling objective from scratch. On eight classification datasets in four domains, TLM achieves results better than or similar to pretrained language models (e.g., RoBERTa-Large) while reducing the training FLOPs by two orders of magnitude. With high accuracy and efficiency, we hope TLM will contribute to democratizing NLP and expediting its development.
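The abstract compresses TLM into two steps: use the labeled task data as queries to retrieve a small, task-relevant slice of a general corpus, then train a model from scratch on a joint objective that mixes the supervised task loss with a language-modeling loss. The sketch below is only an illustration of that recipe under simplifying assumptions, not the authors' implementation: TF-IDF cosine similarity stands in for the paper's retrieval method, a tiny GRU encoder stands in for the Transformer, a next-token loss stands in for the masked-LM objective, and the names (retrieve_subset, TinyJointModel, joint_loss) and the mixing weight lm_weight are hypothetical.

import torch
import torch.nn as nn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Step 1: retrieve a tiny, task-relevant subset of the general corpus.
def retrieve_subset(task_texts, general_corpus, top_k=2):
    """Score each general-corpus document against the task data (simple
    TF-IDF cosine similarity standing in for the paper's retriever) and
    keep only the top-k most similar documents."""
    vec = TfidfVectorizer().fit(task_texts + general_corpus)
    task_mat = vec.transform(task_texts)
    corpus_mat = vec.transform(general_corpus)
    # Best similarity of each corpus document to any task example.
    scores = cosine_similarity(corpus_mat, task_mat).max(axis=1)
    top_idx = scores.argsort()[::-1][:top_k]
    return [general_corpus[i] for i in top_idx]

# Step 2: train from scratch on task loss + language-modeling loss.
class TinyJointModel(nn.Module):
    """A deliberately tiny encoder with two heads: a classifier for the
    task objective and a token-prediction head for the LM objective."""
    def __init__(self, vocab_size, hidden=64, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.cls_head = nn.Linear(hidden, num_classes)
        self.lm_head = nn.Linear(hidden, vocab_size)

    def forward(self, token_ids):
        states, _ = self.encoder(self.embed(token_ids))
        return self.cls_head(states[:, -1]), self.lm_head(states)

def joint_loss(model, task_ids, labels, lm_ids, lm_weight=1.0):
    """Sum of the supervised task loss on labeled examples and a
    next-token LM loss on the retrieved subset; lm_weight is an assumed
    mixing coefficient, not a value from the paper."""
    cls_logits, _ = model(task_ids)
    task_loss = nn.functional.cross_entropy(cls_logits, labels)
    _, lm_logits = model(lm_ids)
    lm_loss = nn.functional.cross_entropy(
        lm_logits[:, :-1].reshape(-1, lm_logits.size(-1)),
        lm_ids[:, 1:].reshape(-1),
    )
    return task_loss + lm_weight * lm_loss

# Toy usage:
# subset = retrieve_subset(["a glowing film review"],
#                          ["random web text", "a film review blog post"], top_k=1)
# model = TinyJointModel(vocab_size=1000)
# loss = joint_loss(model,
#                   task_ids=torch.randint(0, 1000, (4, 16)),
#                   labels=torch.randint(0, 2, (4,)),
#                   lm_ids=torch.randint(0, 1000, (4, 32)))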

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-yao22c,
  title     = {{NLP} From Scratch Without Large-Scale Pretraining: A Simple and Efficient Framework},
  author    = {Yao, Xingcheng and Zheng, Yanan and Yang, Xiaocong and Yang, Zhilin},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {25438--25451},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/yao22c/yao22c.pdf},
  url       = {https://proceedings.mlr.press/v162/yao22c.html},
  abstract  = {Pretrained language models have become the standard approach for many NLP tasks due to strong performance, but they are very expensive to train. We propose a simple and efficient learning framework, TLM, that does not rely on large-scale pretraining. Given some labeled task data and a large general corpus, TLM uses task data as queries to retrieve a tiny subset of the general corpus and jointly optimizes the task objective and the language modeling objective from scratch. On eight classification datasets in four domains, TLM achieves results better than or similar to pretrained language models (e.g., RoBERTa-Large) while reducing the training FLOPs by two orders of magnitude. With high accuracy and efficiency, we hope TLM will contribute to democratizing NLP and expediting its development.}
}
Endnote
%0 Conference Paper
%T NLP From Scratch Without Large-Scale Pretraining: A Simple and Efficient Framework
%A Xingcheng Yao
%A Yanan Zheng
%A Xiaocong Yang
%A Zhilin Yang
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-yao22c
%I PMLR
%P 25438--25451
%U https://proceedings.mlr.press/v162/yao22c.html
%V 162
%X Pretrained language models have become the standard approach for many NLP tasks due to strong performance, but they are very expensive to train. We propose a simple and efficient learning framework, TLM, that does not rely on large-scale pretraining. Given some labeled task data and a large general corpus, TLM uses task data as queries to retrieve a tiny subset of the general corpus and jointly optimizes the task objective and the language modeling objective from scratch. On eight classification datasets in four domains, TLM achieves results better than or similar to pretrained language models (e.g., RoBERTa-Large) while reducing the training FLOPs by two orders of magnitude. With high accuracy and efficiency, we hope TLM will contribute to democratizing NLP and expediting its development.
APA
Yao, X., Zheng, Y., Yang, X., & Yang, Z. (2022). NLP From Scratch Without Large-Scale Pretraining: A Simple and Efficient Framework. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:25438-25451. Available from https://proceedings.mlr.press/v162/yao22c.html.