Weightless: Lossy weight encoding for deep neural network compression

Brandon Reagan, Udit Gupta, Bob Adolf, Michael Mitzenmacher, Alexander Rush, Gu-Yeon Wei, David Brooks
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:4324-4333, 2018.

Abstract

The large memory requirements of deep neural networks limit their deployment and adoption on many devices. Model compression methods effectively reduce the memory requirements of these models, usually through applying transformations such as weight pruning or quantization. In this paper, we present a novel scheme for lossy weight encoding co-designed with weight simplification techniques. The encoding is based on the Bloomier filter, a probabilistic data structure that can save space at the cost of introducing random errors. By leveraging the ability of neural networks to tolerate these imperfections and re-training around the errors, the proposed technique, named Weightless, can compress weights by up to 496x without loss of model accuracy. This results in up to a 1.51x improvement over the state-of-the-art.
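
The page does not include reference code, but the core idea described in the abstract can be illustrated with a small sketch: a Bloomier filter maps the indices of the surviving (nonzero) weights to their quantized cluster IDs using a few hashed XOR probes into a compact table, and any index that was never inserted decodes to an essentially random value; these are the random errors the network is re-trained to tolerate. The Python sketch below is an illustration under stated assumptions, not the authors' implementation; the class name and the parameters m (table size), k (hash count), and t (value width) are illustrative choices, not the paper's settings.

import hashlib

class BloomierFilter:
    """Immutable key -> t-bit value map; absent keys return arbitrary values."""

    def __init__(self, keys_to_values, m, k=3, t=4, max_tries=100):
        self.m, self.k, self.t = m, k, t
        self.mask = (1 << t) - 1
        assert all(0 <= v <= self.mask for v in keys_to_values.values())
        # Retry with a fresh hash seed until the greedy peeling succeeds.
        for attempt in range(max_tries):
            self.seed = attempt
            order = self._peel(list(keys_to_values))
            if order is not None:
                break
        else:
            raise RuntimeError("construction failed; try a larger table size m")
        self.table = [0] * m
        # Assign slots in reverse peeling order: each key's singleton slot is
        # set so the XOR of all its probed slots (plus its per-key mask) equals
        # its value, and no later assignment touches any of this key's slots.
        for key, slot in reversed(order):
            acc = keys_to_values[key] ^ self._mask_hash(key)
            for h in self._hashes(key):
                if h != slot:
                    acc ^= self.table[h]
            self.table[slot] = acc

    def _hashes(self, key):
        # k table positions derived from the key and the construction seed.
        return [int.from_bytes(
                    hashlib.blake2b(f"{self.seed}:{i}:{key}".encode(),
                                    digest_size=8).digest(), "little") % self.m
                for i in range(self.k)]

    def _mask_hash(self, key):
        # Per-key t-bit mask XORed into the encoding (randomizes non-key lookups).
        d = hashlib.blake2b(f"{self.seed}:M:{key}".encode(), digest_size=8)
        return int.from_bytes(d.digest(), "little") & self.mask

    def _peel(self, keys):
        # Greedy "singleton" peeling: repeatedly remove a key that owns a slot
        # no other remaining key hashes to; record (key, that slot).
        slot_users = {}
        for key in keys:
            for h in set(self._hashes(key)):
                slot_users.setdefault(h, set()).add(key)
        remaining, order = set(keys), []
        while remaining:
            found = None
            for key in remaining:
                hs = self._hashes(key)
                for h in hs:
                    if hs.count(h) == 1 and slot_users[h] == {key}:
                        found = (key, h)
                        break
                if found:
                    break
            if found is None:
                return None          # peeling stuck: caller retries with a new seed
            key, slot = found
            order.append((key, slot))
            remaining.remove(key)
            for h in set(self._hashes(key)):
                slot_users[h].discard(key)
        return order

    def query(self, key):
        # XOR the k probed slots with the key's mask. Exact for inserted keys;
        # pseudo-random for anything else (the lossy part the paper exploits).
        acc = self._mask_hash(key)
        for h in self._hashes(key):
            acc ^= self.table[h]
        return acc & self.mask

# Hypothetical usage: map nonzero-weight indices of a pruned layer to 4-bit
# cluster IDs; a pruned index such as 50 may decode to a spurious nonzero ID.
nonzero = {3: 5, 17: 12, 42: 1, 99: 7}
bf = BloomierFilter(nonzero, m=16, k=3, t=4)
assert all(bf.query(i) == v for i, v in nonzero.items())
spurious = bf.query(50)   # arbitrary 4-bit value; re-training absorbs such errors

In this sketch the table holds m small t-bit entries instead of a dense weight matrix, which is where the space saving comes from; how m, k, and t trade off against the induced error rate is the co-design question the paper studies.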

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-reagan18a,
  title     = {Weightless: Lossy weight encoding for deep neural network compression},
  author    = {Reagan, Brandon and Gupta, Udit and Adolf, Bob and Mitzenmacher, Michael and Rush, Alexander and Wei, Gu-Yeon and Brooks, David},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {4324--4333},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/reagan18a/reagan18a.pdf},
  url       = {https://proceedings.mlr.press/v80/reagan18a.html},
  abstract  = {The large memory requirements of deep neural networks limit their deployment and adoption on many devices. Model compression methods effectively reduce the memory requirements of these models, usually through applying transformations such as weight pruning or quantization. In this paper, we present a novel scheme for lossy weight encoding co-designed with weight simplification techniques. The encoding is based on the Bloomier filter, a probabilistic data structure that can save space at the cost of introducing random errors. Leveraging the ability of neural networks to tolerate these imperfections and by re-training around the errors, the proposed technique, named Weightless, can compress weights by up to 496x without loss of model accuracy. This results in up to a 1.51x improvement over the state-of-the-art.}
}
Endnote
%0 Conference Paper
%T Weightless: Lossy weight encoding for deep neural network compression
%A Brandon Reagan
%A Udit Gupta
%A Bob Adolf
%A Michael Mitzenmacher
%A Alexander Rush
%A Gu-Yeon Wei
%A David Brooks
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-reagan18a
%I PMLR
%P 4324--4333
%U https://proceedings.mlr.press/v80/reagan18a.html
%V 80
%X The large memory requirements of deep neural networks limit their deployment and adoption on many devices. Model compression methods effectively reduce the memory requirements of these models, usually through applying transformations such as weight pruning or quantization. In this paper, we present a novel scheme for lossy weight encoding co-designed with weight simplification techniques. The encoding is based on the Bloomier filter, a probabilistic data structure that can save space at the cost of introducing random errors. Leveraging the ability of neural networks to tolerate these imperfections and by re-training around the errors, the proposed technique, named Weightless, can compress weights by up to 496x without loss of model accuracy. This results in up to a 1.51x improvement over the state-of-the-art.
APA
Reagan, B., Gupta, U., Adolf, B., Mitzenmacher, M., Rush, A., Wei, G. & Brooks, D. (2018). Weightless: Lossy weight encoding for deep neural network compression. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:4324-4333. Available from https://proceedings.mlr.press/v80/reagan18a.html.
