DropNet: Reducing Neural Network Complexity via Iterative Pruning

Chong Min John Tan, Mehul Motani
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:9356-9366, 2020.

Abstract

Modern deep neural networks require a significant amount of computing time and power to train and deploy, which limits their usage on edge devices. Inspired by the iterative weight pruning in the Lottery Ticket Hypothesis, we propose DropNet, an iterative pruning method which prunes nodes/filters to reduce network complexity. DropNet iteratively removes nodes/filters with the lowest average post-activation value across all training samples. Empirically, we show that DropNet is robust across a wide range of scenarios, including MLPs and CNNs using the MNIST, CIFAR-10 and Tiny ImageNet datasets. We show that up to 90% of the nodes/filters can be removed without any significant loss of accuracy. The final pruned network performs well even with reinitialisation of the weights and biases. DropNet also achieves similar accuracy to an oracle which greedily removes nodes/filters one at a time to minimise training loss, highlighting its effectiveness.
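The abstract's pruning criterion (iteratively dropping the nodes/filters with the lowest average post-activation value over the training set) can be illustrated with a small sketch. The snippet below is not the authors' implementation: the layer sizes, ReLU activation, mean-absolute-activation score, and 20%-per-round schedule are illustrative assumptions, and the retraining that would normally happen between pruning rounds is omitted.

```python
# Minimal NumPy sketch of a DropNet-style node-pruning loop (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Toy "training set" and one hidden layer of an MLP (assumed sizes).
X = rng.normal(size=(1024, 32))        # 1024 samples, 32 input features
W = rng.normal(size=(32, 64)) * 0.1    # 64 hidden nodes
b = np.zeros(64)
mask = np.ones(64, dtype=bool)         # True = node still active

def hidden_activations(X, W, b, mask):
    """Post-activation values of the hidden layer, with pruned nodes zeroed."""
    a = np.maximum(X @ W + b, 0.0)     # ReLU post-activations
    return a * mask                    # pruned nodes contribute nothing

def prune_step(X, W, b, mask, frac=0.2):
    """Drop the lowest-scoring `frac` of the still-active nodes."""
    a = hidden_activations(X, W, b, mask)
    scores = np.abs(a).mean(axis=0)    # average |post-activation| per node
    active = np.flatnonzero(mask)
    n_drop = int(np.ceil(frac * active.size))
    drop = active[np.argsort(scores[active])[:n_drop]]
    mask[drop] = False
    return mask

# Iterative pruning schedule; in the full method the network would be
# retrained between rounds before scores are recomputed.
for _ in range(5):
    mask = prune_step(X, W, b, mask)
    print(f"nodes remaining: {mask.sum()} / {mask.size}")
```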

Cite this Paper

BibTeX
@InProceedings{pmlr-v119-tan20a,
  title     = {{D}rop{N}et: Reducing Neural Network Complexity via Iterative Pruning},
  author    = {Tan, Chong Min John and Motani, Mehul},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {9356--9366},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/tan20a/tan20a.pdf},
  url       = {https://proceedings.mlr.press/v119/tan20a.html},
  abstract  = {Modern deep neural networks require a significant amount of computing time and power to train and deploy, which limits their usage on edge devices. Inspired by the iterative weight pruning in the Lottery Ticket Hypothesis, we propose DropNet, an iterative pruning method which prunes nodes/filters to reduce network complexity. DropNet iteratively removes nodes/filters with the lowest average post-activation value across all training samples. Empirically, we show that DropNet is robust across a wide range of scenarios, including MLPs and CNNs using the MNIST, CIFAR-10 and Tiny ImageNet datasets. We show that up to 90% of the nodes/filters can be removed without any significant loss of accuracy. The final pruned network performs well even with reinitialisation of the weights and biases. DropNet also achieves similar accuracy to an oracle which greedily removes nodes/filters one at a time to minimise training loss, highlighting its effectiveness.}
}
Endnote
%0 Conference Paper
%T DropNet: Reducing Neural Network Complexity via Iterative Pruning
%A Chong Min John Tan
%A Mehul Motani
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-tan20a
%I PMLR
%P 9356--9366
%U https://proceedings.mlr.press/v119/tan20a.html
%V 119
%X Modern deep neural networks require a significant amount of computing time and power to train and deploy, which limits their usage on edge devices. Inspired by the iterative weight pruning in the Lottery Ticket Hypothesis, we propose DropNet, an iterative pruning method which prunes nodes/filters to reduce network complexity. DropNet iteratively removes nodes/filters with the lowest average post-activation value across all training samples. Empirically, we show that DropNet is robust across a wide range of scenarios, including MLPs and CNNs using the MNIST, CIFAR-10 and Tiny ImageNet datasets. We show that up to 90% of the nodes/filters can be removed without any significant loss of accuracy. The final pruned network performs well even with reinitialisation of the weights and biases. DropNet also achieves similar accuracy to an oracle which greedily removes nodes/filters one at a time to minimise training loss, highlighting its effectiveness.
APA
Tan, C.M.J. & Motani, M. (2020). DropNet: Reducing Neural Network Complexity via Iterative Pruning. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:9356-9366. Available from https://proceedings.mlr.press/v119/tan20a.html.
