Pruning neural networks for inductive conformal prediction
Proceedings of the Eleventh Symposium on Conformal and Probabilistic Prediction with Applications, PMLR 179:273-293, 2022.
Abstract
Neural network pruning techniques remove redundant parameters from overparameterized neural networks in order to reduce model size and computational cost. The goal is to prune a neural network so that it retains the same, or nearly the same, predictive performance as the original. In this paper we study neural network pruning in the context of conformal prediction. To explore whether a neural network can be pruned while maintaining the predictive efficiency of conformal predictors, we measure and compare the efficiency of the prediction sets produced by inductive conformal predictors built on pruned underlying neural networks. We implement several existing pruning methods and propose a new pruning method based specifically on the conformal prediction framework. Evaluating across several neural network architectures and data sets, we find that pruned networks can maintain, or even improve, the efficiency of the conformal predictors up to a particular pruning ratio, and that this ratio varies with the architecture and data set. These results are instructive for deploying pruned neural networks in real-world applications of conformal predictors, where reliable predictions and reduced computational cost are both relevant, e.g. in healthcare or safety-critical applications. This work is also relevant to future work applying continual learning techniques within the conformal prediction framework.
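To make the pipeline the abstract describes concrete, here is a minimal sketch (not the authors' code) of combining magnitude pruning with an inductive (split) conformal predictor in PyTorch. The pruning ratio, significance level, helper names `prune_model` and `icp_prediction_sets`, and the nonconformity score (one minus the softmax probability of the true class) are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch: prune a trained classifier, then wrap it in an
# inductive conformal predictor and measure prediction-set efficiency.
import torch
import torch.nn.utils.prune as prune

def prune_model(model, ratio):
    # L1 unstructured magnitude pruning on every linear/conv weight.
    for module in model.modules():
        if isinstance(module, (torch.nn.Linear, torch.nn.Conv2d)):
            prune.l1_unstructured(module, name="weight", amount=ratio)
    return model

@torch.no_grad()
def icp_prediction_sets(model, x_cal, y_cal, x_test, alpha=0.1):
    # Nonconformity score on the calibration set:
    # 1 - softmax probability assigned to the true class.
    p_cal = torch.softmax(model(x_cal), dim=1)
    scores = 1.0 - p_cal[torch.arange(len(y_cal)), y_cal]
    # Finite-sample-corrected (1 - alpha) quantile of calibration scores.
    n = len(scores)
    q = torch.quantile(scores, min(1.0, (n + 1) * (1 - alpha) / n))
    # Prediction set: every label whose score falls within the threshold.
    p_test = torch.softmax(model(x_test), dim=1)
    return (1.0 - p_test) <= q  # boolean mask, one row per test point

# Efficiency = average prediction-set size; smaller is better at fixed alpha.
# sets = icp_prediction_sets(prune_model(trained_net, 0.5), x_cal, y_cal, x_test)
# avg_size = sets.sum(dim=1).float().mean()
```

Under this setup, comparing `avg_size` across pruning ratios (with coverage guaranteed at 1 - alpha by the conformal construction) mirrors the kind of efficiency comparison the paper reports.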