Pruning neural networks for inductive conformal prediction

Xindi Zhao, Anthony Bellotti
Proceedings of the Eleventh Symposium on Conformal and Probabilistic Prediction with Applications, PMLR 179:273-293, 2022.

Abstract

Neural network pruning techniques remove redundant parameters from overparameterized neural networks in order to compress the model and reduce computational cost. The goal is to prune a network so that it has the same, or nearly the same, predictive performance as the original. In this paper we study neural network pruning in the context of conformal prediction. To explore whether a neural network can be pruned while maintaining predictive efficiency, we measure and compare the efficiency of the prediction sets produced by inductive conformal predictors built on pruned networks. We implement several existing pruning methods and propose a new pruning method based specifically on the conformal prediction framework. Evaluating across several neural network architectures and data sets, we find that a pruned network can maintain, or indeed improve, the efficiency of the conformal predictors up to a particular pruning ratio, and that this ratio varies with architecture and data set. These results are instructive for deploying pruned neural networks in real-world applications within the context of conformal prediction, where reliable predictions and reduced computational cost are relevant, e.g. in healthcare or safety-critical applications. This work is also relevant for further work applying continual learning techniques in the context of conformal predictors.
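
The pipeline the abstract describes is compact enough to sketch: train a network, prune it, calibrate an inductive conformal predictor (ICP) on held-out data, and score the efficiency of its prediction sets. The following is a minimal, hypothetical PyTorch sketch, not the authors' code: magnitude (L1) pruning stands in for the pruning methods compared in the paper, and the common 1 − p̂(true class) nonconformity score is an assumed choice, not necessarily the score the paper uses.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune


def magnitude_prune(model: nn.Module, ratio: float) -> nn.Module:
    """Zero out the smallest-magnitude weights in every linear layer.

    One baseline pruning method; the paper's CP-based criterion differs.
    """
    for module in model.modules():
        if isinstance(module, nn.Linear):
            prune.l1_unstructured(module, name="weight", amount=ratio)
    return model


@torch.no_grad()
def icp_prediction_sets(model, X_cal, y_cal, X_test, epsilon=0.1):
    """ICP prediction sets at significance level epsilon."""
    model.eval()
    # Nonconformity scores alpha_i = 1 - p_hat(true class) on the calibration set.
    p_cal = torch.softmax(model(X_cal), dim=1)
    alphas = 1.0 - p_cal[torch.arange(len(y_cal)), y_cal]
    n = len(alphas)

    p_test = torch.softmax(model(X_test), dim=1)
    sets = []
    for probs in p_test:
        labels = []
        for y in range(probs.shape[0]):
            alpha = 1.0 - probs[y]
            # Conformal p-value: rank of the candidate score among calibration scores.
            p_value = ((alphas >= alpha).sum().item() + 1) / (n + 1)
            if p_value > epsilon:
                labels.append(y)
        sets.append(labels)
    # Average set size is the standard (in)efficiency measure:
    # smaller sets at the same coverage level mean a more efficient predictor.
    return sets
```

Sweeping the pruning ratio and recording the average prediction-set size on test data yields the kind of efficiency-versus-pruning-ratio comparison the paper reports. Since ICP validity holds for any underlying model (provided calibration uses data unseen during training and pruning), coverage is guaranteed regardless of pruning; efficiency is the quantity that can degrade.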

Cite this Paper


BibTeX
@InProceedings{pmlr-v179-zhao22a,
  title     = {Pruning neural networks for inductive conformal prediction},
  author    = {Zhao, Xindi and Bellotti, Anthony},
  booktitle = {Proceedings of the Eleventh Symposium on Conformal and Probabilistic Prediction with Applications},
  pages     = {273--293},
  year      = {2022},
  editor    = {Johansson, Ulf and Boström, Henrik and An Nguyen, Khuong and Luo, Zhiyuan and Carlsson, Lars},
  volume    = {179},
  series    = {Proceedings of Machine Learning Research},
  month     = {24--26 Aug},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v179/zhao22a/zhao22a.pdf},
  url       = {https://proceedings.mlr.press/v179/zhao22a.html}
}
Endnote
%0 Conference Paper
%T Pruning neural networks for inductive conformal prediction
%A Xindi Zhao
%A Anthony Bellotti
%B Proceedings of the Eleventh Symposium on Conformal and Probabilistic Prediction with Applications
%C Proceedings of Machine Learning Research
%D 2022
%E Ulf Johansson
%E Henrik Boström
%E Khuong An Nguyen
%E Zhiyuan Luo
%E Lars Carlsson
%F pmlr-v179-zhao22a
%I PMLR
%P 273--293
%U https://proceedings.mlr.press/v179/zhao22a.html
%V 179
APA
Zhao, X., & Bellotti, A. (2022). Pruning neural networks for inductive conformal prediction. Proceedings of the Eleventh Symposium on Conformal and Probabilistic Prediction with Applications, in Proceedings of Machine Learning Research 179:273-293. Available from https://proceedings.mlr.press/v179/zhao22a.html.