Learning Curves for Analysis of Deep Networks

Derek Hoiem, Tanmay Gupta, Zhizhong Li, Michal Shlapentokh-Rothman
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:4287-4296, 2021.

Abstract

Learning curves model a classifier’s test error as a function of the number of training samples. Prior works show that learning curves can be used to select model parameters and extrapolate performance. We investigate how to use learning curves to evaluate design choices, such as pretraining, architecture, and data augmentation. We propose a method to robustly estimate learning curves, abstract their parameters into error and data-reliance, and evaluate the effectiveness of different parameterizations. Our experiments exemplify use of learning curves for analysis and yield several interesting observations.
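As a minimal sketch of the idea, the snippet below fits a learning curve to test error measured at several training-set sizes. The power-law form (an asymptotic error plus a term that decays with the number of samples) and the example numbers are assumptions for illustration, not necessarily the exact parameterization or data used in the paper.

```python
# Sketch: fit a learning curve error(n) ~ e_inf + beta * n^(-alpha)
# to hypothetical (training size, test error) measurements.
import numpy as np
from scipy.optimize import curve_fit

def learning_curve(n, e_inf, beta, alpha):
    """Test error as a function of training-set size n (assumed power-law form)."""
    return e_inf + beta * n ** (-alpha)

# Hypothetical measurements for illustration only.
n_train  = np.array([100, 250, 500, 1000, 2500, 5000, 10000])
test_err = np.array([0.42, 0.35, 0.30, 0.26, 0.22, 0.20, 0.185])

params, _ = curve_fit(learning_curve, n_train, test_err, p0=[0.1, 1.0, 0.5])
e_inf, beta, alpha = params
print(f"asymptotic error ~ {e_inf:.3f}, decay exponent ~ {alpha:.2f}")

# Extrapolate performance to a larger training set.
print(f"predicted error at n=50000: {learning_curve(50000, *params):.3f}")
```

Here the fitted asymptotic error plays the role of an "error" summary, while the size of the decaying term indicates how much performance still depends on adding data, loosely analogous to the paper's notion of data-reliance.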

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-hoiem21a,
  title     = {Learning Curves for Analysis of Deep Networks},
  author    = {Hoiem, Derek and Gupta, Tanmay and Li, Zhizhong and Shlapentokh-Rothman, Michal},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {4287--4296},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/hoiem21a/hoiem21a.pdf},
  url       = {https://proceedings.mlr.press/v139/hoiem21a.html},
  abstract  = {Learning curves model a classifier’s test error as a function of the number of training samples. Prior works show that learning curves can be used to select model parameters and extrapolate performance. We investigate how to use learning curves to evaluate design choices, such as pretraining, architecture, and data augmentation. We propose a method to robustly estimate learning curves, abstract their parameters into error and data-reliance, and evaluate the effectiveness of different parameterizations. Our experiments exemplify use of learning curves for analysis and yield several interesting observations.}
}
Endnote
%0 Conference Paper
%T Learning Curves for Analysis of Deep Networks
%A Derek Hoiem
%A Tanmay Gupta
%A Zhizhong Li
%A Michal Shlapentokh-Rothman
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-hoiem21a
%I PMLR
%P 4287--4296
%U https://proceedings.mlr.press/v139/hoiem21a.html
%V 139
%X Learning curves model a classifier’s test error as a function of the number of training samples. Prior works show that learning curves can be used to select model parameters and extrapolate performance. We investigate how to use learning curves to evaluate design choices, such as pretraining, architecture, and data augmentation. We propose a method to robustly estimate learning curves, abstract their parameters into error and data-reliance, and evaluate the effectiveness of different parameterizations. Our experiments exemplify use of learning curves for analysis and yield several interesting observations.
APA
Hoiem, D., Gupta, T., Li, Z., & Shlapentokh-Rothman, M. (2021). Learning Curves for Analysis of Deep Networks. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:4287-4296. Available from https://proceedings.mlr.press/v139/hoiem21a.html.