On Anytime Learning at Macroscale

Lucas Caccia, Jing Xu, Myle Ott, Marcaurelio Ranzato, Ludovic Denoyer
Proceedings of The 1st Conference on Lifelong Learning Agents, PMLR 199:165-182, 2022.

Abstract

In many practical applications of machine learning, data arrives sequentially over time in large chunks. Practitioners then have to decide how to allocate their computational budget in order to obtain the best performance at any point in time. Online learning theory for convex optimization suggests that the best strategy is to use data as soon as it arrives. However, this might not be the best strategy when using deep non-linear networks, particularly when these perform multiple passes over each chunk of data, rendering the overall distribution non-i.i.d. In this paper, we formalize this learning setting in the simplest scenario, in which each data chunk is drawn from the same underlying distribution, and make a first attempt at empirically answering the following questions: How long should the learner wait before training on the newly arrived chunks? What architecture should the learner adopt? Should the learner increase capacity over time as more data is observed? We probe this learning setting using convolutional neural networks trained on classic computer vision benchmarks, as well as a large transformer model trained on a large-scale language modeling task. Code is available in the supplementary material.
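
The setting described in the abstract can be illustrated with a short, purely hypothetical sketch (not the authors' code; names such as make_chunk, train_model and wait_chunks are placeholders introduced here): a learner receives a stream of data chunks drawn from a fixed distribution, accumulates a chosen number of chunks before each training phase, and is evaluated after every arrival.

# Illustrative sketch of the anytime-learning-at-macroscale protocol.
# All functions are stand-ins; a real instantiation would train a neural
# network with several passes of SGD over the accumulated data.
import random

random.seed(0)

def make_chunk(size=1000):
    # Draw a chunk from a fixed underlying distribution (stand-in for real data).
    return [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(size)]

def train_model(model, data):
    # Placeholder for multiple optimization passes over the accumulated data.
    model["n_seen"] += len(data)
    return model

def evaluate(model):
    # Placeholder anytime metric, queried at every point in time.
    return model["n_seen"]

model = {"n_seen": 0}    # stand-in for a (possibly growing) neural network
wait_chunks = 2          # how many chunks to accumulate before training
buffer = []

for t in range(8):       # stream of chunks, all from the same distribution
    buffer.extend(make_chunk())
    if (t + 1) % wait_chunks == 0:
        model = train_model(model, buffer)
        buffer = []
    print(f"after chunk {t}: anytime metric = {evaluate(model)}")

Varying wait_chunks (train immediately versus accumulate several chunks) and deciding whether to grow the model between training phases are the trade-offs the paper studies empirically.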

Cite this Paper


BibTeX
@InProceedings{pmlr-v199-caccia22a,
  title     = {On Anytime Learning at Macroscale},
  author    = {Caccia, Lucas and Xu, Jing and Ott, Myle and Ranzato, Marcaurelio and Denoyer, Ludovic},
  booktitle = {Proceedings of The 1st Conference on Lifelong Learning Agents},
  pages     = {165--182},
  year      = {2022},
  editor    = {Chandar, Sarath and Pascanu, Razvan and Precup, Doina},
  volume    = {199},
  series    = {Proceedings of Machine Learning Research},
  month     = {22--24 Aug},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v199/caccia22a/caccia22a.pdf},
  url       = {https://proceedings.mlr.press/v199/caccia22a.html}
}
Endnote
%0 Conference Paper
%T On Anytime Learning at Macroscale
%A Lucas Caccia
%A Jing Xu
%A Myle Ott
%A Marcaurelio Ranzato
%A Ludovic Denoyer
%B Proceedings of The 1st Conference on Lifelong Learning Agents
%C Proceedings of Machine Learning Research
%D 2022
%E Sarath Chandar
%E Razvan Pascanu
%E Doina Precup
%F pmlr-v199-caccia22a
%I PMLR
%P 165--182
%U https://proceedings.mlr.press/v199/caccia22a.html
%V 199
APA
Caccia, L., Xu, J., Ott, M., Ranzato, M. & Denoyer, L. (2022). On Anytime Learning at Macroscale. Proceedings of The 1st Conference on Lifelong Learning Agents, in Proceedings of Machine Learning Research 199:165-182. Available from https://proceedings.mlr.press/v199/caccia22a.html.