Sparsifying Networks via Subdifferential Inclusion

Sagar Verma, Jean-Christophe Pesquet
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:10542-10552, 2021.

Abstract

Sparsifying deep neural networks is of paramount interest in many areas, especially when those networks have to be deployed on low-memory devices. In this article, we propose a new formulation of the problem of generating sparse weights for a pre-trained neural network. By leveraging the properties of standard nonlinear activation functions, we show that the problem is equivalent to an approximate subdifferential inclusion problem, where the accuracy of the approximation controls the sparsity. We show that the proposed approach is valid for a broad class of activation functions (ReLU, sigmoid, softmax). We propose an iterative optimization algorithm with guaranteed convergence to induce sparsity. Because of the algorithm's flexibility, sparsity can be enforced from partial training data in a minibatch manner. To demonstrate the effectiveness of our method, we perform experiments on various networks in different application contexts: image classification, speech recognition, natural language processing, and time-series forecasting.
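The key step announced in the abstract can be sketched as follows. This is a hedged illustration based only on the summary above; the notation (W, x, b, y, ε) is ours and the exact relaxation used in the paper may differ. Several standard activation functions, ReLU included, are proximity operators of convex functions, so a layer equation can be rewritten as a subdifferential inclusion; a sparse surrogate of the weight matrix can then be sought by asking that inclusion to hold only approximately, with the tolerance trading accuracy for sparsity.

    % Hedged illustration for a single ReLU layer (componentwise ReLU on R^N).
    % ReLU is the proximity operator of the indicator of the nonnegative orthant:
    \[
      \mathrm{ReLU}(z) \;=\; \operatorname{prox}_{\iota_{[0,+\infty)^N}}(z)
                       \;=\; \operatorname*{argmin}_{u \in [0,+\infty)^N} \tfrac{1}{2}\|u - z\|^2 .
    \]
    % By the standard characterization p = prox_f(z) <=> z - p \in \partial f(p),
    % the layer equation y = ReLU(Wx + b) is equivalent to a subdifferential
    % (normal-cone) inclusion:
    \[
      y = \mathrm{ReLU}(Wx + b)
      \;\Longleftrightarrow\;
      Wx + b - y \;\in\; \partial\iota_{[0,+\infty)^N}(y) .
    \]
    % A sparse surrogate \widetilde{W} of W may then be sought by requiring this
    % inclusion to hold only approximately, e.g. within a distance \varepsilon:
    \[
      \operatorname{dist}\!\big(\widetilde{W}x + b - y,\; \partial\iota_{[0,+\infty)^N}(y)\big) \;\le\; \varepsilon ,
    \]
    % where a larger tolerance \varepsilon leaves more room to zero out entries
    % of \widetilde{W}, consistent with the statement that the accuracy of the
    % approximation controls the sparsity.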

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-verma21b,
  title     = {Sparsifying Networks via Subdifferential Inclusion},
  author    = {Verma, Sagar and Pesquet, Jean-Christophe},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {10542--10552},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/verma21b/verma21b.pdf},
  url       = {https://proceedings.mlr.press/v139/verma21b.html}
}
APA
Verma, S. & Pesquet, J.-C. (2021). Sparsifying Networks via Subdifferential Inclusion. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:10542-10552. Available from https://proceedings.mlr.press/v139/verma21b.html.
