Memory Efficient Online Meta Learning

Durmus Alp Emre Acar, Ruizhao Zhu, Venkatesh Saligrama
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:32-42, 2021.

Abstract

We propose a novel algorithm for online meta learning in which task instances are sequentially revealed with limited supervision and a learner is expected to meta-learn from them in each round, so that it can rapidly customize a task-specific model with little task-level supervision. A fundamental concern in online meta-learning is the scalability of memory as more tasks are seen over time. Prior works allow for perfect recall, leading to a linear increase in memory with time. In contrast, our method allows prior task instances to be deleted: we leverage them by means of a fixed-size state vector that is updated sequentially. Our theoretical analysis shows that the proposed memory-efficient online meta learning (MOML) method suffers sub-linear regret with convex loss functions and sub-linear local regret for nonconvex losses. On benchmark datasets we show that our method can outperform prior works even though they allow for perfect recall.
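As a rough illustration of the memory contrast described above (this is not the paper's algorithm), the Python sketch below compares a perfect-recall meta-learner, which stores every task revealed so far and therefore uses memory that grows linearly with the number of rounds, with a learner that folds each new task into a fixed-size state vector and then discards the task data. The class names, the toy squared loss, and the exponential-moving-average state update are illustrative assumptions; MOML's actual state update and its regret guarantees are given in the paper.

# Illustrative sketch (not the paper's algorithm): contrasts perfect-recall
# meta-learning, whose memory grows linearly with the number of tasks, with a
# fixed-size state vector that is updated sequentially as tasks arrive.
# The EMA-of-gradients update below is an assumption made for clarity only.
import numpy as np


def task_gradient(meta_params, task):
    """Gradient of a squared loss on one task's data (toy example)."""
    X, y = task
    return 2.0 * X.T @ (X @ meta_params - y) / len(y)


class PerfectRecallMetaLearner:
    """Stores every task seen so far: O(t) memory after t rounds."""

    def __init__(self, dim, lr=0.01):
        self.meta_params = np.zeros(dim)
        self.lr = lr
        self.buffer = []  # grows without bound

    def observe(self, task):
        self.buffer.append(task)
        # Replay all stored tasks when updating the meta-parameters.
        grads = [task_gradient(self.meta_params, t) for t in self.buffer]
        self.meta_params -= self.lr * np.mean(grads, axis=0)


class FixedStateMetaLearner:
    """Keeps only a constant-size state vector: O(1) memory at any round."""

    def __init__(self, dim, lr=0.01, beta=0.9):
        self.meta_params = np.zeros(dim)
        self.state = np.zeros(dim)  # fixed-size summary of past tasks
        self.lr = lr
        self.beta = beta

    def observe(self, task):
        grad = task_gradient(self.meta_params, task)
        # Fold the new task into the state; the task data can then be deleted.
        self.state = self.beta * self.state + (1.0 - self.beta) * grad
        self.meta_params -= self.lr * self.state


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim = 5
    recall = PerfectRecallMetaLearner(dim)
    fixed = FixedStateMetaLearner(dim)
    for _ in range(100):  # tasks revealed sequentially
        X = rng.normal(size=(20, dim))
        y = X @ rng.normal(size=dim) + 0.1 * rng.normal(size=20)
        recall.observe((X, y))
        fixed.observe((X, y))
    print("perfect recall stores", len(recall.buffer), "tasks")
    print("fixed-state learner keeps a state vector of size", fixed.state.shape[0])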

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-acar21b,
  title     = {Memory Efficient Online Meta Learning},
  author    = {Acar, Durmus Alp Emre and Zhu, Ruizhao and Saligrama, Venkatesh},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {32--42},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/acar21b/acar21b.pdf},
  url       = {https://proceedings.mlr.press/v139/acar21b.html},
  abstract  = {We propose a novel algorithm for online meta learning where task instances are sequentially revealed with limited supervision and a learner is expected to meta learn them in each round, so as to allow the learner to customize a task-specific model rapidly with little task-level supervision. A fundamental concern arising in online meta-learning is the scalability of memory as more tasks are viewed over time. Heretofore, prior works have allowed for perfect recall leading to linear increase in memory with time. Different from prior works, in our method, prior task instances are allowed to be deleted. We propose to leverage prior task instances by means of a fixed-size state-vector, which is updated sequentially. Our theoretical analysis demonstrates that our proposed memory efficient online learning (MOML) method suffers sub-linear regret with convex loss functions and sub-linear local regret for nonconvex losses. On benchmark datasets we show that our method can outperform prior works even though they allow for perfect recall.}
}
Endnote
%0 Conference Paper
%T Memory Efficient Online Meta Learning
%A Durmus Alp Emre Acar
%A Ruizhao Zhu
%A Venkatesh Saligrama
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-acar21b
%I PMLR
%P 32--42
%U https://proceedings.mlr.press/v139/acar21b.html
%V 139
%X We propose a novel algorithm for online meta learning where task instances are sequentially revealed with limited supervision and a learner is expected to meta learn them in each round, so as to allow the learner to customize a task-specific model rapidly with little task-level supervision. A fundamental concern arising in online meta-learning is the scalability of memory as more tasks are viewed over time. Heretofore, prior works have allowed for perfect recall leading to linear increase in memory with time. Different from prior works, in our method, prior task instances are allowed to be deleted. We propose to leverage prior task instances by means of a fixed-size state-vector, which is updated sequentially. Our theoretical analysis demonstrates that our proposed memory efficient online learning (MOML) method suffers sub-linear regret with convex loss functions and sub-linear local regret for nonconvex losses. On benchmark datasets we show that our method can outperform prior works even though they allow for perfect recall.
APA
Acar, D.A.E., Zhu, R. & Saligrama, V. (2021). Memory Efficient Online Meta Learning. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:32-42. Available from https://proceedings.mlr.press/v139/acar21b.html.
