On the Generalization Effects of Linear Transformations in Data Augmentation

Sen Wu, Hongyang Zhang, Gregory Valiant, Christopher Re
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:10410-10420, 2020.

Abstract

Data augmentation is a powerful technique to improve performance in applications such as image and text classification tasks. Yet, there is little rigorous understanding of why and how various augmentations work. In this work, we consider a family of linear transformations and study their effects on the ridge estimator in an over-parametrized linear regression setting. First, we show that transformations which preserve the labels of the data can improve estimation by enlarging the span of the training data. Second, we show that transformations which mix data can improve estimation by playing a regularization effect. Finally, we validate our theoretical insights on MNIST. Based on the insights, we propose an augmentation scheme that searches over the space of transformations by how "uncertain" the model is about the transformed data. We validate our proposed scheme on image and text datasets. For example, our method outperforms RandAugment by 1.24% on CIFAR-100 using Wide-ResNet-28-10. Furthermore, we achieve comparable accuracy to the SoTA Adversarial AutoAugment on CIFAR datasets.
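The abstract's proposed scheme searches over a space of transformations and favors those the model is most "uncertain" about. A minimal sketch of one plausible instantiation of that idea, scoring uncertainty as the model's loss on the transformed sample (the function names `predict_logits` and `select_transformation` and the specific loss-based score are illustrative assumptions, not the paper's exact algorithm):

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a logit vector.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def select_transformation(x, y, transforms, predict_logits):
    """Pick the candidate transformation the model is most uncertain about.

    Uncertainty is scored here as the negative log-probability the model
    assigns to the true label y after transforming x -- an assumed proxy,
    chosen for simplicity; higher score means a harder augmented sample.
    """
    best_t, best_score = None, -np.inf
    for t in transforms:
        x_aug = t(x)                        # apply a candidate linear transformation
        p = softmax(predict_logits(x_aug))  # model's class probabilities
        score = -np.log(p[y] + 1e-12)       # loss on the true label = uncertainty
        if score > best_score:
            best_t, best_score = t, score
    return best_t, best_score
```

Given a pool of label-preserving linear transformations (rotations, flips, crops expressed as matrices), the selected transformation's output would then be added to the training batch.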

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-wu20g,
  title     = {On the Generalization Effects of Linear Transformations in Data Augmentation},
  author    = {Wu, Sen and Zhang, Hongyang and Valiant, Gregory and Re, Christopher},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {10410--10420},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/wu20g/wu20g.pdf},
  url       = {https://proceedings.mlr.press/v119/wu20g.html},
  abstract  = {Data augmentation is a powerful technique to improve performance in applications such as image and text classification tasks. Yet, there is little rigorous understanding of why and how various augmentations work. In this work, we consider a family of linear transformations and study their effects on the ridge estimator in an over-parametrized linear regression setting. First, we show that transformations which preserve the labels of the data can improve estimation by enlarging the span of the training data. Second, we show that transformations which mix data can improve estimation by playing a regularization effect. Finally, we validate our theoretical insights on MNIST. Based on the insights, we propose an augmentation scheme that searches over the space of transformations by how \emph{uncertain} the model is about the transformed data. We validate our proposed scheme on image and text datasets. For example, our method outperforms RandAugment by 1.24% on CIFAR-100 using Wide-ResNet-28-10. Furthermore, we achieve comparable accuracy to the SoTA Adversarial AutoAugment on CIFAR datasets.}
}
Endnote
%0 Conference Paper
%T On the Generalization Effects of Linear Transformations in Data Augmentation
%A Sen Wu
%A Hongyang Zhang
%A Gregory Valiant
%A Christopher Re
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-wu20g
%I PMLR
%P 10410--10420
%U https://proceedings.mlr.press/v119/wu20g.html
%V 119
%X Data augmentation is a powerful technique to improve performance in applications such as image and text classification tasks. Yet, there is little rigorous understanding of why and how various augmentations work. In this work, we consider a family of linear transformations and study their effects on the ridge estimator in an over-parametrized linear regression setting. First, we show that transformations which preserve the labels of the data can improve estimation by enlarging the span of the training data. Second, we show that transformations which mix data can improve estimation by playing a regularization effect. Finally, we validate our theoretical insights on MNIST. Based on the insights, we propose an augmentation scheme that searches over the space of transformations by how \emph{uncertain} the model is about the transformed data. We validate our proposed scheme on image and text datasets. For example, our method outperforms RandAugment by 1.24% on CIFAR-100 using Wide-ResNet-28-10. Furthermore, we achieve comparable accuracy to the SoTA Adversarial AutoAugment on CIFAR datasets.
APA
Wu, S., Zhang, H., Valiant, G. & Re, C.. (2020). On the Generalization Effects of Linear Transformations in Data Augmentation. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:10410-10420 Available from https://proceedings.mlr.press/v119/wu20g.html.