Compressing Neural Networks with the Hashing Trick

Wenlin Chen, James Wilson, Stephen Tyree, Kilian Weinberger, Yixin Chen
Proceedings of the 32nd International Conference on Machine Learning, PMLR 37:2285-2294, 2015.

Abstract

As deep nets are increasingly used in applications suited for mobile devices, a fundamental dilemma becomes apparent: the trend in deep learning is to grow models to absorb ever-increasing data set sizes; however, mobile devices are designed with very little memory and cannot store such large models. We present a novel network architecture, HashedNets, that exploits inherent redundancy in neural networks to achieve drastic reductions in model sizes. HashedNets uses a low-cost hash function to randomly group connection weights into hash buckets, and all connections within the same hash bucket share a single parameter value. These parameters are tuned to adjust to the HashedNets weight sharing architecture with standard backprop during training. Our hashing procedure introduces no additional memory overhead, and we demonstrate on several benchmark data sets that HashedNets shrink the storage requirements of neural networks substantially while mostly preserving generalization performance.
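The sketch below illustrates the weight-sharing idea described in the abstract: each connection (i, j) of a fully connected layer is hashed to one of a small number of buckets, and all connections in a bucket read and update the same stored parameter. This is a minimal illustration, not the authors' implementation; the names (HashedLayer, bucket, n_buckets) are invented for this example, hashlib.md5 stands in for the paper's faster xxhash, and the paper's optional sign-hash factor is omitted.

```python
import numpy as np
import hashlib

def bucket(i, j, seed, n_buckets):
    # Deterministically map connection (i, j) to one of n_buckets shared weights.
    # (Illustrative hash; the paper uses a cheaper hash function such as xxhash.)
    key = f"{seed}:{i}:{j}".encode()
    return int(hashlib.md5(key).hexdigest(), 16) % n_buckets

class HashedLayer:
    """Toy fully connected layer with hashed weight sharing (sketch only)."""

    def __init__(self, n_in, n_out, n_buckets, seed=0):
        self.n_in, self.n_out, self.n_buckets = n_in, n_out, n_buckets
        # Only n_buckets real parameters are stored, regardless of n_in * n_out.
        self.w = np.random.randn(n_buckets) * 0.01
        # Precomputed (i, j) -> bucket map for clarity; in the paper the mapping
        # is recomputed on the fly from the hash, so no extra memory is kept.
        self.idx = np.array([[bucket(i, j, seed, n_buckets)
                              for j in range(n_out)] for i in range(n_in)])

    def virtual_weights(self):
        # "Virtual" full weight matrix: entry (i, j) reads its shared bucket value.
        return self.w[self.idx]

    def forward(self, x):
        return x @ self.virtual_weights()

    def backward(self, x, grad_out):
        # Standard backprop gradient w.r.t. the virtual matrix, then accumulated
        # per bucket, so the shared parameters adapt to the weight-sharing scheme.
        grad_V = x.T @ grad_out                 # shape (n_in, n_out)
        grad_w = np.zeros_like(self.w)
        np.add.at(grad_w, self.idx, grad_V)     # sum gradients within each bucket
        return grad_w

# Usage: compress a 256 x 128 layer (32,768 virtual weights) to 4,096 stored weights.
layer = HashedLayer(n_in=256, n_out=128, n_buckets=4096)
x = np.random.randn(32, 256)
y = layer.forward(x)                            # shape (32, 128)
```

Note that compression here comes purely from the parameter count: the dense matrix is never materialized at rest, only its hashed index into the shared weight vector.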

Cite this Paper


BibTeX
@InProceedings{pmlr-v37-chenc15,
  title     = {Compressing Neural Networks with the Hashing Trick},
  author    = {Chen, Wenlin and Wilson, James and Tyree, Stephen and Weinberger, Kilian and Chen, Yixin},
  booktitle = {Proceedings of the 32nd International Conference on Machine Learning},
  pages     = {2285--2294},
  year      = {2015},
  editor    = {Bach, Francis and Blei, David},
  volume    = {37},
  series    = {Proceedings of Machine Learning Research},
  address   = {Lille, France},
  month     = {07--09 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v37/chenc15.pdf},
  url       = {https://proceedings.mlr.press/v37/chenc15.html},
  abstract  = {As deep nets are increasingly used in applications suited for mobile devices, a fundamental dilemma becomes apparent: the trend in deep learning is to grow models to absorb ever-increasing data set sizes; however mobile devices are designed with very little memory and cannot store such large models. We present a novel network architecture, HashedNets, that exploits inherent redundancy in neural networks to achieve drastic reductions in model sizes. HashedNets uses a low-cost hash function to randomly group connection weights into hash buckets, and all connections within the same hash bucket share a single parameter value. These parameters are tuned to adjust to the HashedNets weight sharing architecture with standard backprop during training. Our hashing procedure introduces no additional memory overhead, and we demonstrate on several benchmark data sets that HashedNets shrink the storage requirements of neural networks substantially while mostly preserving generalization performance.}
}
APA
Chen, W., Wilson, J., Tyree, S., Weinberger, K. & Chen, Y. (2015). Compressing Neural Networks with the Hashing Trick. Proceedings of the 32nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 37:2285-2294. Available from https://proceedings.mlr.press/v37/chenc15.html.
