Data Scaling Laws in NMT: The Effect of Noise and Architecture

Yamini Bansal, Behrooz Ghorbani, Ankush Garg, Biao Zhang, Colin Cherry, Behnam Neyshabur, Orhan Firat
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:1466-1482, 2022.

Abstract

In this work, we study the effect of varying the architecture and training data quality on the data scaling properties of Neural Machine Translation (NMT). First, we establish that the test loss of encoder-decoder transformer models scales as a power law in the number of training samples, with a dependence on the model size. Then, we systematically vary aspects of the training setup to understand how they impact the data scaling laws. In particular, we change the following: (1) Architecture and task setup: we compare with a transformer-LSTM hybrid and with a decoder-only transformer trained with a language modeling loss; (2) Noise level in the training distribution: we experiment with filtering and with adding i.i.d. synthetic noise. In all of these cases, we find that the data scaling exponents are minimally impacted, suggesting that marginally worse architectures or training data can be compensated for by adding more data. Lastly, we find that using back-translated data instead of parallel data can significantly degrade the scaling exponent.
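To make the central quantity concrete, the sketch below fits a data scaling law of the kind the abstract describes. It is a minimal illustration, assuming a saturating power-law form L(D) ≈ L_inf + beta * D^(-alpha), where D is the number of training samples and alpha is the data scaling exponent; the exact parameterization and the data points here are hypothetical and not taken from the paper.

import numpy as np
from scipy.optimize import curve_fit

def power_law(D, L_inf, beta, alpha):
    """Saturating power law: test loss approaches L_inf as data size D grows."""
    return L_inf + beta * D ** (-alpha)

# Hypothetical (dataset size, test loss) measurements for a single model size.
D = np.array([1e5, 3e5, 1e6, 3e6, 1e7, 3e7])
L = np.array([3.10, 2.72, 2.41, 2.18, 2.01, 1.90])

# Fit L_inf, beta, alpha; alpha plays the role of the data scaling exponent
# that the paper reports is largely insensitive to architecture and noise level.
(L_inf, beta, alpha), _ = curve_fit(power_law, D, L, p0=[1.5, 30.0, 0.3], maxfev=10000)
print(f"L_inf={L_inf:.3f}, beta={beta:.3f}, alpha={alpha:.3f}")

Comparing the fitted alpha across training setups (filtered vs. unfiltered data, parallel vs. back-translated data) is the kind of analysis the abstract summarizes.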

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-bansal22b,
  title     = {Data Scaling Laws in {NMT}: The Effect of Noise and Architecture},
  author    = {Bansal, Yamini and Ghorbani, Behrooz and Garg, Ankush and Zhang, Biao and Cherry, Colin and Neyshabur, Behnam and Firat, Orhan},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {1466--1482},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/bansal22b/bansal22b.pdf},
  url       = {https://proceedings.mlr.press/v162/bansal22b.html}
}
Endnote
%0 Conference Paper
%T Data Scaling Laws in NMT: The Effect of Noise and Architecture
%A Yamini Bansal
%A Behrooz Ghorbani
%A Ankush Garg
%A Biao Zhang
%A Colin Cherry
%A Behnam Neyshabur
%A Orhan Firat
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-bansal22b
%I PMLR
%P 1466--1482
%U https://proceedings.mlr.press/v162/bansal22b.html
%V 162
APA
Bansal, Y., Ghorbani, B., Garg, A., Zhang, B., Cherry, C., Neyshabur, B., & Firat, O. (2022). Data Scaling Laws in NMT: The Effect of Noise and Architecture. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:1466-1482. Available from https://proceedings.mlr.press/v162/bansal22b.html.