POET: Training Neural Networks on Tiny Devices with Integrated Rematerialization and Paging

Shishir G. Patil, Paras Jain, Prabal Dutta, Ion Stoica, Joseph Gonzalez
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:17573-17583, 2022.

Abstract

Fine-tuning models on edge devices like mobile phones would enable privacy-preserving personalization over sensitive data. However, edge training has historically been limited to relatively small models with simple architectures because training is both memory and energy intensive. We present POET, an algorithm to enable training large neural networks on memory-scarce battery-operated edge devices. POET jointly optimizes the integrated search spaces of rematerialization and paging, two algorithms to reduce the memory consumption of backpropagation. Given a memory budget and a run-time constraint, we formulate a mixed-integer linear program (MILP) for energy-optimal training. Our approach enables training significantly larger models on embedded devices while reducing energy consumption, without modifying the mathematical correctness of backpropagation. We demonstrate that it is possible to fine-tune both ResNet-18 and BERT within the memory constraints of a Cortex-M class embedded device while outperforming current edge training methods in energy efficiency. POET is an open-source project available at https://github.com/ShishirPatil/poet
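
To make the joint rematerialization/paging search space concrete, below is a minimal, hypothetical MILP sketch in Python using the PuLP solver. It is not POET's actual formulation; here each activation simply gets one of three strategies (keep in RAM, page to flash, or rematerialize) subject to an assumed memory budget and runtime deadline, with an energy-minimizing objective. All layer names, sizes, and costs are made-up illustrative values.

```python
# Hypothetical, simplified sketch of the kind of MILP POET solves.
# NOT the paper's formulation: this toy picks one strategy per activation
# (keep in RAM, page to flash, or rematerialize) under a memory budget and
# a runtime deadline, minimizing extra energy. All numbers are invented.
import pulp

layers   = ["conv1", "conv2", "conv3", "fc"]
act_kb   = {"conv1": 400, "conv2": 300, "conv3": 200, "fc": 50}   # activation size (KB)
remat_ms = {"conv1": 6.0, "conv2": 5.0, "conv3": 4.0, "fc": 1.0}  # recompute latency (ms)
remat_mj = {"conv1": 9.0, "conv2": 7.5, "conv3": 6.0, "fc": 1.5}  # recompute energy (mJ)
page_ms  = {"conv1": 8.0, "conv2": 6.0, "conv3": 4.0, "fc": 1.0}  # page-out+page-in latency (ms)
page_mj  = {"conv1": 4.0, "conv2": 3.0, "conv3": 2.0, "fc": 0.5}  # paging energy (mJ)

MEM_BUDGET_KB  = 512    # RAM available for saved activations (assumed)
TIME_BUDGET_MS = 12.0   # extra latency allowed per training step (assumed)

prob = pulp.LpProblem("toy_remat_paging_schedule", pulp.LpMinimize)

# Binary choice per layer: keep in RAM, page to flash, or rematerialize.
keep  = pulp.LpVariable.dicts("keep",  layers, cat="Binary")
page  = pulp.LpVariable.dicts("page",  layers, cat="Binary")
remat = pulp.LpVariable.dicts("remat", layers, cat="Binary")

# Exactly one strategy per activation.
for l in layers:
    prob += keep[l] + page[l] + remat[l] == 1

# Memory: only activations kept resident in RAM count against the budget.
prob += pulp.lpSum(act_kb[l] * keep[l] for l in layers) <= MEM_BUDGET_KB

# Runtime: paging and rematerialization both add latency; stay under the deadline.
prob += pulp.lpSum(page_ms[l] * page[l] + remat_ms[l] * remat[l]
                   for l in layers) <= TIME_BUDGET_MS

# Objective: minimize the extra energy spent on paging and recomputation.
prob += pulp.lpSum(page_mj[l] * page[l] + remat_mj[l] * remat[l] for l in layers)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for l in layers:
    choice = max(("keep", keep[l]), ("page", page[l]), ("remat", remat[l]),
                 key=lambda kv: kv[1].value())[0]
    print(f"{l}: {choice}")
```

Under these toy costs the solver trades RAM residency against paging and recompute overhead, which is the flavor of decision POET makes; the paper's MILP additionally schedules when each operation runs so the constraints hold throughout backpropagation.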

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-patil22b, title = {{POET}: Training Neural Networks on Tiny Devices with Integrated Rematerialization and Paging}, author = {Patil, Shishir G. and Jain, Paras and Dutta, Prabal and Stoica, Ion and Gonzalez, Joseph}, booktitle = {Proceedings of the 39th International Conference on Machine Learning}, pages = {17573--17583}, year = {2022}, editor = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan}, volume = {162}, series = {Proceedings of Machine Learning Research}, month = {17--23 Jul}, publisher = {PMLR}, pdf = {https://proceedings.mlr.press/v162/patil22b/patil22b.pdf}, url = {https://proceedings.mlr.press/v162/patil22b.html}, abstract = {Fine-tuning models on edge devices like mobile phones would enable privacy-preserving personalization over sensitive data. However, edge training has historically been limited to relatively small models with simple architectures because training is both memory and energy intensive. We present POET, an algorithm to enable training large neural networks on memory-scarce battery-operated edge devices. POET jointly optimizes the integrated search spaces of rematerialization and paging, two algorithms to reduce the memory consumption of backpropagation. Given a memory budget and a run-time constraint, we formulate a mixed-integer linear program (MILP) for energy-optimal training. Our approach enables training significantly larger models on embedded devices while reducing energy consumption, without modifying the mathematical correctness of backpropagation. We demonstrate that it is possible to fine-tune both ResNet-18 and BERT within the memory constraints of a Cortex-M class embedded device while outperforming current edge training methods in energy efficiency. POET is an open-source project available at https://github.com/ShishirPatil/poet} }
Endnote
%0 Conference Paper %T POET: Training Neural Networks on Tiny Devices with Integrated Rematerialization and Paging %A Shishir G. Patil %A Paras Jain %A Prabal Dutta %A Ion Stoica %A Joseph Gonzalez %B Proceedings of the 39th International Conference on Machine Learning %C Proceedings of Machine Learning Research %D 2022 %E Kamalika Chaudhuri %E Stefanie Jegelka %E Le Song %E Csaba Szepesvari %E Gang Niu %E Sivan Sabato %F pmlr-v162-patil22b %I PMLR %P 17573--17583 %U https://proceedings.mlr.press/v162/patil22b.html %V 162 %X Fine-tuning models on edge devices like mobile phones would enable privacy-preserving personalization over sensitive data. However, edge training has historically been limited to relatively small models with simple architectures because training is both memory and energy intensive. We present POET, an algorithm to enable training large neural networks on memory-scarce battery-operated edge devices. POET jointly optimizes the integrated search spaces of rematerialization and paging, two algorithms to reduce the memory consumption of backpropagation. Given a memory budget and a run-time constraint, we formulate a mixed-integer linear program (MILP) for energy-optimal training. Our approach enables training significantly larger models on embedded devices while reducing energy consumption, without modifying the mathematical correctness of backpropagation. We demonstrate that it is possible to fine-tune both ResNet-18 and BERT within the memory constraints of a Cortex-M class embedded device while outperforming current edge training methods in energy efficiency. POET is an open-source project available at https://github.com/ShishirPatil/poet
APA
Patil, S.G., Jain, P., Dutta, P., Stoica, I. & Gonzalez, J. (2022). POET: Training Neural Networks on Tiny Devices with Integrated Rematerialization and Paging. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:17573-17583. Available from https://proceedings.mlr.press/v162/patil22b.html.
