Compressing Neural Networks with the Hashing Trick

Wenlin Chen, James Wilson, Stephen Tyree, Kilian Weinberger, Yixin Chen
Proceedings of the 32nd International Conference on Machine Learning, PMLR 37:2285-2294, 2015.

Abstract

As deep nets are increasingly used in applications suited for mobile devices, a fundamental dilemma becomes apparent: the trend in deep learning is to grow models to absorb ever-increasing data set sizes; however, mobile devices are designed with very little memory and cannot store such large models. We present a novel network architecture, HashedNets, that exploits inherent redundancy in neural networks to achieve drastic reductions in model sizes. HashedNets uses a low-cost hash function to randomly group connection weights into hash buckets, and all connections within the same hash bucket share a single parameter value. These parameters are tuned to adjust to the HashedNets weight sharing architecture with standard backprop during training. Our hashing procedure introduces no additional memory overhead, and we demonstrate on several benchmark data sets that HashedNets shrink the storage requirements of neural networks substantially while mostly preserving generalization performance.
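For illustration, the following is a minimal sketch of the core idea described in the abstract, not the authors' implementation: a fully connected layer whose full-size "virtual" weight matrix is never stored, because each connection (i, j) is mapped by a low-cost hash function to one of k shared parameters. The class name HashedLayer, the md5-based hash (the paper itself uses a different low-cost hash), and the layer sizes are illustrative assumptions.

# Minimal sketch of a HashedNets-style layer (illustrative, not the authors' code).
# Assumptions: NumPy only, an md5-based bucket hash, no sign factor.
import hashlib
import numpy as np

class HashedLayer:
    def __init__(self, n_in, n_out, k, seed=0):
        # Only k real parameters are stored; they are shared across
        # the n_in x n_out virtual connection weights.
        self.w = np.random.randn(k) * 0.01
        self.k = k
        self.seed = seed
        self.n_in, self.n_out = n_in, n_out

    def _bucket(self, i, j):
        # Deterministic hash h(i, j) -> {0, ..., k-1}; any low-cost hash works.
        key = f"{self.seed}:{i}:{j}".encode()
        return int(hashlib.md5(key).hexdigest(), 16) % self.k

    def virtual_weights(self):
        # Expand the shared parameters into the full (virtual) weight matrix.
        V = np.empty((self.n_in, self.n_out))
        for i in range(self.n_in):
            for j in range(self.n_out):
                V[i, j] = self.w[self._bucket(i, j)]
        return V

    def forward(self, x):
        # x: (batch, n_in) -> (batch, n_out); only k parameters back the layer.
        return x @ self.virtual_weights()

# Usage: a layer that behaves like 256x128 but stores only 1024 weights.
layer = HashedLayer(n_in=256, n_out=128, k=1024)
out = layer.forward(np.random.randn(4, 256))
print(out.shape)  # (4, 128)

During training, the gradient of each shared parameter accumulates the gradients of all virtual weights hashed to its bucket, so standard backpropagation applies without modification.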

Cite this Paper


BibTeX
@InProceedings{pmlr-v37-chenc15,
  title     = {Compressing Neural Networks with the Hashing Trick},
  author    = {Wenlin Chen and James Wilson and Stephen Tyree and Kilian Weinberger and Yixin Chen},
  booktitle = {Proceedings of the 32nd International Conference on Machine Learning},
  pages     = {2285--2294},
  year      = {2015},
  editor    = {Francis Bach and David Blei},
  volume    = {37},
  series    = {Proceedings of Machine Learning Research},
  address   = {Lille, France},
  month     = {07--09 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v37/chenc15.pdf},
  url       = {http://proceedings.mlr.press/v37/chenc15.html},
  abstract  = {As deep nets are increasingly used in applications suited for mobile devices, a fundamental dilemma becomes apparent: the trend in deep learning is to grow models to absorb ever-increasing data set sizes; however mobile devices are designed with very little memory and cannot store such large models. We present a novel network architecture, HashedNets, that exploits inherent redundancy in neural networks to achieve drastic reductions in model sizes. HashedNets uses a low-cost hash function to randomly group connection weights into hash buckets, and all connections within the same hash bucket share a single parameter value. These parameters are tuned to adjust to the HashedNets weight sharing architecture with standard backprop during training. Our hashing procedure introduces no additional memory overhead, and we demonstrate on several benchmark data sets that HashedNets shrink the storage requirements of neural networks substantially while mostly preserving generalization performance.}
}
Endnote
%0 Conference Paper
%T Compressing Neural Networks with the Hashing Trick
%A Wenlin Chen
%A James Wilson
%A Stephen Tyree
%A Kilian Weinberger
%A Yixin Chen
%B Proceedings of the 32nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2015
%E Francis Bach
%E David Blei
%F pmlr-v37-chenc15
%I PMLR
%J Proceedings of Machine Learning Research
%P 2285--2294
%U http://proceedings.mlr.press
%V 37
%W PMLR
%X As deep nets are increasingly used in applications suited for mobile devices, a fundamental dilemma becomes apparent: the trend in deep learning is to grow models to absorb ever-increasing data set sizes; however mobile devices are designed with very little memory and cannot store such large models. We present a novel network architecture, HashedNets, that exploits inherent redundancy in neural networks to achieve drastic reductions in model sizes. HashedNets uses a low-cost hash function to randomly group connection weights into hash buckets, and all connections within the same hash bucket share a single parameter value. These parameters are tuned to adjust to the HashedNets weight sharing architecture with standard backprop during training. Our hashing procedure introduces no additional memory overhead, and we demonstrate on several benchmark data sets that HashedNets shrink the storage requirements of neural networks substantially while mostly preserving generalization performance.
RIS
TY - CPAPER
TI - Compressing Neural Networks with the Hashing Trick
AU - Wenlin Chen
AU - James Wilson
AU - Stephen Tyree
AU - Kilian Weinberger
AU - Yixin Chen
BT - Proceedings of the 32nd International Conference on Machine Learning
PY - 2015/06/01
DA - 2015/06/01
ED - Francis Bach
ED - David Blei
ID - pmlr-v37-chenc15
PB - PMLR
SP - 2285
EP - 2294
DP - PMLR
L1 - http://proceedings.mlr.press/v37/chenc15.pdf
UR - http://proceedings.mlr.press/v37/chenc15.html
AB - As deep nets are increasingly used in applications suited for mobile devices, a fundamental dilemma becomes apparent: the trend in deep learning is to grow models to absorb ever-increasing data set sizes; however mobile devices are designed with very little memory and cannot store such large models. We present a novel network architecture, HashedNets, that exploits inherent redundancy in neural networks to achieve drastic reductions in model sizes. HashedNets uses a low-cost hash function to randomly group connection weights into hash buckets, and all connections within the same hash bucket share a single parameter value. These parameters are tuned to adjust to the HashedNets weight sharing architecture with standard backprop during training. Our hashing procedure introduces no additional memory overhead, and we demonstrate on several benchmark data sets that HashedNets shrink the storage requirements of neural networks substantially while mostly preserving generalization performance.
ER -
APA
Chen, W., Wilson, J., Tyree, S., Weinberger, K. & Chen, Y. (2015). Compressing Neural Networks with the Hashing Trick. Proceedings of the 32nd International Conference on Machine Learning, in PMLR 37:2285-2294.