You Only Derive Once (YODO): Automatic Differentiation for Efficient Sensitivity Analysis in Bayesian Networks

Rafael Ballester-Ripoll, Manuele Leonelli
Proceedings of The 11th International Conference on Probabilistic Graphical Models, PMLR 186:169-180, 2022.

Abstract

Sensitivity analysis measures the influence of a Bayesian network’s parameters on a quantity of interest defined by the network, such as the probability of a variable taking a specific value. In particular, the so-called sensitivity value measures the quantity of interest’s partial derivative with respect to the network’s conditional probabilities. However, finding such values in large networks with thousands of parameters can become computationally very expensive. We propose to use automatic differentiation combined with exact inference to obtain all sensitivity values in a single pass. Our method first marginalizes the whole network once using, e.g., variable elimination, and then backpropagates this operation to obtain the gradient with respect to all input parameters. We demonstrate our routines by ranking all parameters by importance on a Bayesian network modeling humanitarian crises and disasters, and then show the method’s efficiency by scaling it to huge networks with up to 100,000 parameters. An implementation of the methods using the popular machine learning library PyTorch is freely available.
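To make the idea concrete, below is a minimal, hypothetical PyTorch sketch (not the authors' released implementation): a three-node chain A -> B -> C whose conditional probability tables are stored as differentiable tensors, marginalized exactly with a single einsum, after which one backward pass yields the sensitivity values of all parameters at once. The network structure, probabilities, and variable names are illustrative assumptions.

import torch

# Hypothetical example network: a chain A -> B -> C with binary variables.
# Each CPT is a leaf tensor with requires_grad=True so autodiff can reach it.
p_a   = torch.tensor([0.6, 0.4], requires_grad=True)                  # P(A)
p_b_a = torch.tensor([[0.7, 0.3], [0.2, 0.8]], requires_grad=True)    # P(B | A), rows indexed by A
p_c_b = torch.tensor([[0.9, 0.1], [0.4, 0.6]], requires_grad=True)    # P(C | B), rows indexed by B

# Exact inference: sum out A and B in one differentiable einsum
# (a tiny instance of variable elimination).
p_c = torch.einsum('a,ab,bc->c', p_a, p_b_a, p_c_b)   # P(C)

# Quantity of interest: P(C = 1). A single backward pass ("derive once")
# fills .grad with the partial derivative w.r.t. every CPT entry.
p_c[1].backward()

print(p_a.grad)    # dP(C=1)/dP(A=a)        for each a
print(p_b_a.grad)  # dP(C=1)/dP(B=b | A=a)  for each (a, b)
print(p_c_b.grad)  # dP(C=1)/dP(C=c | B=b)  for each (b, c)

Since backpropagation costs only a constant factor more than the forward marginalization, this approach scales to networks with many thousands of parameters, matching the single-pass efficiency described in the abstract.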

Cite this Paper

BibTeX
@InProceedings{pmlr-v186-ballester-ripoll22a,
  title     = {You Only Derive Once ({YODO}): Automatic Differentiation for Efficient Sensitivity Analysis in {B}ayesian Networks},
  author    = {Ballester-Ripoll, Rafael and Leonelli, Manuele},
  booktitle = {Proceedings of The 11th International Conference on Probabilistic Graphical Models},
  pages     = {169--180},
  year      = {2022},
  editor    = {Salmerón, Antonio and Rumí, Rafael},
  volume    = {186},
  series    = {Proceedings of Machine Learning Research},
  month     = {05--07 Oct},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v186/ballester-ripoll22a/ballester-ripoll22a.pdf},
  url       = {https://proceedings.mlr.press/v186/ballester-ripoll22a.html},
  abstract  = {Sensitivity analysis measures the influence of a Bayesian network’s parameters on a quantity of interest defined by the network, such as the probability of a variable taking a specific value. In particular, the so-called sensitivity value measures the quantity of interest’s partial derivative with respect to the network’s conditional probabilities. However, finding such values in large networks with thousands of parameters can become computationally very expensive. We propose to use automatic differentiation combined with exact inference to obtain all sensitivity values in a single pass. Our method first marginalizes the whole network once using e.g. variable elimination and then backpropagates this operation to obtain the gradient with respect to all input parameters. We demonstrate our routines by ranking all parameters by importance on a Bayesian network modeling humanitarian crises and disasters, and then show the method’s efficiency by scaling it to huge networks with up to 100’000 parameters. An implementation of the methods using the popular machine learning library PyTorch is freely available.}
}
Endnote
%0 Conference Paper
%T You Only Derive Once (YODO): Automatic Differentiation for Efficient Sensitivity Analysis in Bayesian Networks
%A Rafael Ballester-Ripoll
%A Manuele Leonelli
%B Proceedings of The 11th International Conference on Probabilistic Graphical Models
%C Proceedings of Machine Learning Research
%D 2022
%E Antonio Salmerón
%E Rafael Rumí
%F pmlr-v186-ballester-ripoll22a
%I PMLR
%P 169--180
%U https://proceedings.mlr.press/v186/ballester-ripoll22a.html
%V 186
%X Sensitivity analysis measures the influence of a Bayesian network’s parameters on a quantity of interest defined by the network, such as the probability of a variable taking a specific value. In particular, the so-called sensitivity value measures the quantity of interest’s partial derivative with respect to the network’s conditional probabilities. However, finding such values in large networks with thousands of parameters can become computationally very expensive. We propose to use automatic differentiation combined with exact inference to obtain all sensitivity values in a single pass. Our method first marginalizes the whole network once using e.g. variable elimination and then backpropagates this operation to obtain the gradient with respect to all input parameters. We demonstrate our routines by ranking all parameters by importance on a Bayesian network modeling humanitarian crises and disasters, and then show the method’s efficiency by scaling it to huge networks with up to 100’000 parameters. An implementation of the methods using the popular machine learning library PyTorch is freely available.
APA
Ballester-Ripoll, R. & Leonelli, M. (2022). You Only Derive Once (YODO): Automatic Differentiation for Efficient Sensitivity Analysis in Bayesian Networks. Proceedings of The 11th International Conference on Probabilistic Graphical Models, in Proceedings of Machine Learning Research 186:169-180. Available from https://proceedings.mlr.press/v186/ballester-ripoll22a.html.
