Online Learned Continual Compression with Adaptive Quantization Modules

Lucas Caccia, Eugene Belilovsky, Massimo Caccia, Joelle Pineau
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:1240-1250, 2020.

Abstract

We introduce and study the problem of Online Continual Compression, where one attempts to simultaneously learn to compress and store a representative dataset from a non-i.i.d. data stream, while only observing each sample once. A naive application of auto-encoders in this setting encounters a major challenge: representations derived from earlier encoder states must be usable by later decoder states. We show how to use discrete auto-encoders to effectively address this challenge and introduce Adaptive Quantization Modules (AQM) to control variation in the compression ability of the module at any given stage of learning. This enables selecting an appropriate compression for incoming samples, while taking into account overall memory constraints and current progress of the learned compression. Unlike previous methods, our approach does not require any pretraining, even on challenging datasets. We show that using AQM to replace standard episodic memory in continual learning settings leads to significant gains on continual learning benchmarks with images, LiDAR, and reinforcement learning agents.
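To make the idea of a learned, discrete compression module concrete, the sketch below shows a single VQ-VAE-style quantization block that could serve as a compressed replay buffer: images are encoded, snapped to nearest codebook entries, stored as integer indices, and later decoded for replay. This is a minimal illustration under assumed names (SimpleVQModule, decode_indices) and is not the authors' released implementation; the paper's AQM additionally stacks several such modules at different compression rates and adaptively selects one per sample.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleVQModule(nn.Module):
    """Encoder -> codebook quantization -> decoder (straight-through estimator)."""

    def __init__(self, num_codes=256, code_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, code_dim, 4, stride=2, padding=1),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(code_dim, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),
        )
        self.codebook = nn.Embedding(num_codes, code_dim)

    def quantize(self, z):
        # z: (B, D, H, W) -> nearest codebook entry per spatial location
        b, d, h, w = z.shape
        flat = z.permute(0, 2, 3, 1).reshape(-1, d)        # (B*H*W, D)
        dist = torch.cdist(flat, self.codebook.weight)     # distances to all codes
        idx = dist.argmin(dim=1)                           # discrete codes to store
        z_q = self.codebook(idx).view(b, h, w, d).permute(0, 3, 1, 2)
        return z_q, idx.view(b, h, w)

    def forward(self, x):
        z = self.encoder(x)
        z_q, idx = self.quantize(z)
        z_st = z + (z_q - z).detach()                      # straight-through gradient
        recon = self.decoder(z_st)
        # reconstruction + codebook + commitment terms (standard VQ-VAE objective)
        loss = (F.mse_loss(recon, x)
                + F.mse_loss(z_q, z.detach())
                + 0.25 * F.mse_loss(z, z_q.detach()))
        return recon, idx, loss

    @torch.no_grad()
    def decode_indices(self, idx):
        # reconstruct stored samples from their stored discrete codes for replay
        z_q = self.codebook(idx).permute(0, 3, 1, 2)
        return self.decoder(z_q)

In such a setup, the replay memory holds only the integer index grids (a few bits per spatial location) rather than raw images, and the decoder is kept up to date online so that codes stored under earlier encoder states remain decodable later, which is the compatibility challenge the abstract refers to.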

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-caccia20a,
  title = {Online Learned Continual Compression with Adaptive Quantization Modules},
  author = {Caccia, Lucas and Belilovsky, Eugene and Caccia, Massimo and Pineau, Joelle},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages = {1240--1250},
  year = {2020},
  editor = {III, Hal Daumé and Singh, Aarti},
  volume = {119},
  series = {Proceedings of Machine Learning Research},
  month = {13--18 Jul},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v119/caccia20a/caccia20a.pdf},
  url = {https://proceedings.mlr.press/v119/caccia20a.html},
  abstract = {We introduce and study the problem of Online Continual Compression, where one attempts to simultaneously learn to compress and store a representative dataset from a non i.i.d data stream, while only observing each sample once. A naive application of auto-encoder in this setting encounters a major challenge: representations derived from earlier encoder states must be usable by later decoder states. We show how to use discrete auto-encoders to effectively address this challenge and introduce Adaptive Quantization Modules (AQM) to control variation in the compression ability of the module at any given stage of learning. This enables selecting an appropriate compression for incoming samples, while taking into account overall memory constraints and current progress of the learned compression. Unlike previous methods, our approach does not require any pretraining, even on challenging datasets. We show that using AQM to replace standard episodic memory in continual learning settings leads to significant gains on continual learning benchmarks with images, LiDAR, and reinforcement learning agents.}
}
Endnote
%0 Conference Paper
%T Online Learned Continual Compression with Adaptive Quantization Modules
%A Lucas Caccia
%A Eugene Belilovsky
%A Massimo Caccia
%A Joelle Pineau
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-caccia20a
%I PMLR
%P 1240--1250
%U https://proceedings.mlr.press/v119/caccia20a.html
%V 119
%X We introduce and study the problem of Online Continual Compression, where one attempts to simultaneously learn to compress and store a representative dataset from a non i.i.d data stream, while only observing each sample once. A naive application of auto-encoder in this setting encounters a major challenge: representations derived from earlier encoder states must be usable by later decoder states. We show how to use discrete auto-encoders to effectively address this challenge and introduce Adaptive Quantization Modules (AQM) to control variation in the compression ability of the module at any given stage of learning. This enables selecting an appropriate compression for incoming samples, while taking into account overall memory constraints and current progress of the learned compression. Unlike previous methods, our approach does not require any pretraining, even on challenging datasets. We show that using AQM to replace standard episodic memory in continual learning settings leads to significant gains on continual learning benchmarks with images, LiDAR, and reinforcement learning agents.
APA
Caccia, L., Belilovsky, E., Caccia, M. & Pineau, J. (2020). Online Learned Continual Compression with Adaptive Quantization Modules. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:1240-1250. Available from https://proceedings.mlr.press/v119/caccia20a.html.