Pruning via Sparsity-indexed ODE: a Continuous Sparsity Viewpoint

Zhanfeng Mo, Haosen Shi, Sinno Jialin Pan
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:25018-25036, 2023.

Abstract

Neural pruning, which involves identifying the optimal sparse subnetwork, is a key technique for reducing the complexity and improving the efficiency of deep neural networks. Because solving neural pruning directly at a specific sparsity level is challenging, we instead investigate how the optimal subnetwork evolves as sparsity increases continuously, which provides insight into how to transform an unpruned dense model into an optimal subnetwork at any desired sparsity level. In this paper, we propose a novel pruning framework, coined Sparsity-indexed ODE (SpODE), which provides explicit guidance on how best to preserve model performance while ensuring an infinitesimal increase in model sparsity. On top of this, we develop a pruning algorithm, termed Pruning via Sparsity-indexed ODE (PSO), which prunes effectively by traveling along the SpODE path. Empirical experiments show that PSO achieves performance better than or comparable to state-of-the-art baselines across various pruning settings.
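To make the "continuous sparsity" viewpoint concrete, the sketch below discretizes a sparsity path and alternates small pruning steps with corrective weight updates. It is only a schematic under stated assumptions: the function name prune_along_sparsity_path, the global magnitude-based pruning criterion, and the single gradient step per increment are placeholders, not the paper's SpODE dynamics or PSO algorithm.

    # Hypothetical sketch of pruning along a continuous sparsity path.
    # NOT the authors' SpODE/PSO method: magnitude pruning and one SGD step
    # per increment are stand-ins for the paper's ODE-guided updates.
    import torch

    def prune_along_sparsity_path(model, loss_fn, data_loader,
                                  final_sparsity=0.9, num_steps=100, lr=1e-3):
        params = [p for p in model.parameters() if p.requires_grad]
        masks = [torch.ones_like(p) for p in params]
        opt = torch.optim.SGD(params, lr=lr)
        for step in range(1, num_steps + 1):
            # Discretize the sparsity index s on [0, final_sparsity].
            sparsity = final_sparsity * step / num_steps
            # Prune the smallest-magnitude surviving weights globally.
            scores = torch.cat([(p * m).abs().flatten()
                                for p, m in zip(params, masks)])
            k = int(sparsity * scores.numel())
            threshold = torch.kthvalue(scores, max(k, 1)).values
            for p, m in zip(params, masks):
                m.mul_((p.abs() > threshold).float())
                p.data.mul_(m)
            # One corrective update on surviving weights to preserve performance.
            x, y = next(iter(data_loader))
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
            for p, m in zip(params, masks):
                p.data.mul_(m)
        return masks

A call such as prune_along_sparsity_path(model, torch.nn.functional.cross_entropy, loader, final_sparsity=0.95) would return the binary masks reached at the end of the (discretized) sparsity path.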

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-mo23c,
  title     = {Pruning via Sparsity-indexed {ODE}: a Continuous Sparsity Viewpoint},
  author    = {Mo, Zhanfeng and Shi, Haosen and Pan, Sinno Jialin},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {25018--25036},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/mo23c/mo23c.pdf},
  url       = {https://proceedings.mlr.press/v202/mo23c.html}
}
Endnote
%0 Conference Paper
%T Pruning via Sparsity-indexed ODE: a Continuous Sparsity Viewpoint
%A Zhanfeng Mo
%A Haosen Shi
%A Sinno Jialin Pan
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-mo23c
%I PMLR
%P 25018--25036
%U https://proceedings.mlr.press/v202/mo23c.html
%V 202
APA
Mo, Z., Shi, H. & Pan, S.J. (2023). Pruning via Sparsity-indexed ODE: a Continuous Sparsity Viewpoint. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:25018-25036. Available from https://proceedings.mlr.press/v202/mo23c.html.
