Dirichlet Pruning for Convolutional Neural Networks

Kamil Adamczewski, Mijung Park
Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, PMLR 130:3637-3645, 2021.

Abstract

We introduce Dirichlet pruning, a novel post-processing technique to transform a large neural network model into a compressed one. Dirichlet pruning is a form of structured pruning which assigns the Dirichlet distribution over each layer’s channels in convolutional layers (or neurons in fully-connected layers), and learns the parameters of the distribution over these units using variational inference. The learnt parameters allow us to informatively and intuitively remove unimportant units, resulting in a compact architecture containing only crucial features for a task at hand. This method yields low GPU footprint, as the number of parameters is linear in the number of channels (or neurons) and training requires as little as one epoch to converge. We perform extensive experiments, in particular on larger architectures such as VGG and WideResNet (94% and 72% compression rate, respectively) where our method achieves the state-of-the-art compression performance and provides interpretable features as a by-product.
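To make the mechanism described in the abstract concrete, below is a minimal sketch, in PyTorch, of how a Dirichlet importance distribution over a convolutional layer's channels might be learned with variational inference and then used to prune. This is an illustrative reading of the abstract, not the authors' released implementation: the DirichletGate module, the KL-to-uniform regulariser, and the keep_ratio threshold are assumptions made here for clarity.

# Illustrative sketch only; layer sizes, the KL weight, and the pruning
# threshold are assumptions, not the paper's published settings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DirichletGate(nn.Module):
    """Learns a Dirichlet distribution over one layer's channels.

    A sample from the Dirichlet is a probability vector that scales the
    channels; channels whose learned concentration stays small contribute
    little and are candidates for removal.
    """
    def __init__(self, num_channels):
        super().__init__()
        # Unconstrained parameters; softplus keeps concentrations positive.
        self.log_alpha = nn.Parameter(torch.zeros(num_channels))

    def concentration(self):
        return F.softplus(self.log_alpha) + 1e-4

    def forward(self, x):
        alpha = self.concentration()
        if self.training:
            # PyTorch's Dirichlet supports reparameterised sampling (rsample),
            # so gradients flow back into the concentration parameters.
            probs = torch.distributions.Dirichlet(alpha).rsample()
        else:
            probs = alpha / alpha.sum()
        # Scale each channel of a conv feature map (N, C, H, W); the factor
        # x.size(1) keeps the average per-channel scale near 1.
        return x * probs.view(1, -1, 1, 1) * x.size(1)

def kl_to_uniform(gate):
    """KL term of the variational objective, against a flat Dirichlet prior."""
    alpha = gate.concentration()
    prior = torch.distributions.Dirichlet(torch.ones_like(alpha))
    return torch.distributions.kl_divergence(
        torch.distributions.Dirichlet(alpha), prior)

def prune_mask(gate, keep_ratio=0.5):
    """Keep only the channels with the largest learned importance."""
    alpha = gate.concentration()
    k = max(1, int(keep_ratio * alpha.numel()))
    keep = torch.topk(alpha, k).indices
    mask = torch.zeros_like(alpha, dtype=torch.bool)
    mask[keep] = True
    return mask

In such a setup the KL term would be added to the task loss during a short fine-tuning phase (the abstract reports that as little as one epoch suffices), after which channels outside the mask can be dropped to obtain the compressed architecture.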

Cite this Paper


BibTeX
@InProceedings{pmlr-v130-adamczewski21a,
  title     = {Dirichlet Pruning for Convolutional Neural Networks},
  author    = {Adamczewski, Kamil and Park, Mijung},
  booktitle = {Proceedings of The 24th International Conference on Artificial Intelligence and Statistics},
  pages     = {3637--3645},
  year      = {2021},
  editor    = {Banerjee, Arindam and Fukumizu, Kenji},
  volume    = {130},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--15 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v130/adamczewski21a/adamczewski21a.pdf},
  url       = {https://proceedings.mlr.press/v130/adamczewski21a.html},
  abstract  = {We introduce Dirichlet pruning, a novel post-processing technique to transform a large neural network model into a compressed one. Dirichlet pruning is a form of structured pruning which assigns the Dirichlet distribution over each layer’s channels in convolutional layers (or neurons in fully-connected layers), and learns the parameters of the distribution over these units using variational inference. The learnt parameters allow us to informatively and intuitively remove unimportant units, resulting in a compact architecture containing only crucial features for a task at hand. This method yields low GPU footprint, as the number of parameters is linear in the number of channels (or neurons) and training requires as little as one epoch to converge. We perform extensive experiments, in particular on larger architectures such as VGG and WideResNet (94% and 72% compression rate, respectively) where our method achieves the state-of-the-art compression performance and provides interpretable features as a by-product.}
}
Endnote
%0 Conference Paper
%T Dirichlet Pruning for Convolutional Neural Networks
%A Kamil Adamczewski
%A Mijung Park
%B Proceedings of The 24th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2021
%E Arindam Banerjee
%E Kenji Fukumizu
%F pmlr-v130-adamczewski21a
%I PMLR
%P 3637--3645
%U https://proceedings.mlr.press/v130/adamczewski21a.html
%V 130
%X We introduce Dirichlet pruning, a novel post-processing technique to transform a large neural network model into a compressed one. Dirichlet pruning is a form of structured pruning which assigns the Dirichlet distribution over each layer’s channels in convolutional layers (or neurons in fully-connected layers), and learns the parameters of the distribution over these units using variational inference. The learnt parameters allow us to informatively and intuitively remove unimportant units, resulting in a compact architecture containing only crucial features for a task at hand. This method yields low GPU footprint, as the number of parameters is linear in the number of channels (or neurons) and training requires as little as one epoch to converge. We perform extensive experiments, in particular on larger architectures such as VGG and WideResNet (94% and 72% compression rate, respectively) where our method achieves the state-of-the-art compression performance and provides interpretable features as a by-product.
APA
Adamczewski, K. & Park, M. (2021). Dirichlet Pruning for Convolutional Neural Networks. Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 130:3637-3645. Available from https://proceedings.mlr.press/v130/adamczewski21a.html.
