A Progressive Explanation of Inference in ‘Hybrid’ Bayesian Networks for Supporting Clinical Decision Making

Evangelia Kyrimi, William Marsh
Proceedings of the Eighth International Conference on Probabilistic Graphical Models, PMLR 52:275-286, 2016.

Abstract

Many Bayesian networks (BNs) have been developed as decision support tools. However, far fewer have been used in practice. Sometimes it is assumed that an accurate prediction is enough for useful decision support but this neglects the importance of trust: a user who does not trust a tool will not accept its advice. Giving users an explanation of the way a BN reasons may make its predictions easier to trust. In this study, we propose a progressive explanation of inference that can be applied to any hybrid BN. The key questions that we answer are: which important evidence supports or contradicts the prediction and through which intermediate variables does the evidence flow. The explanation is illustrated using different scenarios in a BN designed for medical decision support.

Cite this Paper


BibTeX
@InProceedings{pmlr-v52-kyrimi16,
  title     = {A Progressive Explanation of Inference in `Hybrid' {B}ayesian Networks for Supporting Clinical Decision Making},
  author    = {Evangelia Kyrimi and William Marsh},
  booktitle = {Proceedings of the Eighth International Conference on Probabilistic Graphical Models},
  pages     = {275--286},
  year      = {2016},
  editor    = {Alessandro Antonucci and Giorgio Corani and Cassio Polpo Campos},
  volume    = {52},
  series    = {Proceedings of Machine Learning Research},
  address   = {Lugano, Switzerland},
  month     = {06--09 Sep},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v52/kyrimi16.pdf},
  url       = {http://proceedings.mlr.press/v52/kyrimi16.html},
  abstract  = {Many Bayesian networks (BNs) have been developed as decision support tools. However, far fewer have been used in practice. Sometimes it is assumed that an accurate prediction is enough for useful decision support but this neglects the importance of trust: a user who does not trust a tool will not accept its advice. Giving users an explanation of the way a BN reasons may make its predictions easier to trust. In this study, we propose a progressive explanation of inference that can be applied to any hybrid BN. The key questions that we answer are: which important evidence supports or contradicts the prediction and through which intermediate variables does the evidence flow. The explanation is illustrated using different scenarios in a BN designed for medical decision support.}
}
Endnote
%0 Conference Paper
%T A Progressive Explanation of Inference in ‘Hybrid’ Bayesian Networks for Supporting Clinical Decision Making
%A Evangelia Kyrimi
%A William Marsh
%B Proceedings of the Eighth International Conference on Probabilistic Graphical Models
%C Proceedings of Machine Learning Research
%D 2016
%E Alessandro Antonucci
%E Giorgio Corani
%E Cassio Polpo Campos
%F pmlr-v52-kyrimi16
%I PMLR
%J Proceedings of Machine Learning Research
%P 275--286
%U http://proceedings.mlr.press
%V 52
%W PMLR
%X Many Bayesian networks (BNs) have been developed as decision support tools. However, far fewer have been used in practice. Sometimes it is assumed that an accurate prediction is enough for useful decision support but this neglects the importance of trust: a user who does not trust a tool will not accept its advice. Giving users an explanation of the way a BN reasons may make its predictions easier to trust. In this study, we propose a progressive explanation of inference that can be applied to any hybrid BN. The key questions that we answer are: which important evidence supports or contradicts the prediction and through which intermediate variables does the evidence flow. The explanation is illustrated using different scenarios in a BN designed for medical decision support.
RIS
TY - CPAPER
TI - A Progressive Explanation of Inference in ‘Hybrid’ Bayesian Networks for Supporting Clinical Decision Making
AU - Evangelia Kyrimi
AU - William Marsh
BT - Proceedings of the Eighth International Conference on Probabilistic Graphical Models
PY - 2016/08/15
DA - 2016/08/15
ED - Alessandro Antonucci
ED - Giorgio Corani
ED - Cassio Polpo Campos
ID - pmlr-v52-kyrimi16
PB - PMLR
SP - 275
EP - 286
DP - PMLR
L1 - http://proceedings.mlr.press/v52/kyrimi16.pdf
UR - http://proceedings.mlr.press/v52/kyrimi16.html
AB - Many Bayesian networks (BNs) have been developed as decision support tools. However, far fewer have been used in practice. Sometimes it is assumed that an accurate prediction is enough for useful decision support but this neglects the importance of trust: a user who does not trust a tool will not accept its advice. Giving users an explanation of the way a BN reasons may make its predictions easier to trust. In this study, we propose a progressive explanation of inference that can be applied to any hybrid BN. The key questions that we answer are: which important evidence supports or contradicts the prediction and through which intermediate variables does the evidence flow. The explanation is illustrated using different scenarios in a BN designed for medical decision support.
ER -
APA
Kyrimi, E. & Marsh, W. (2016). A Progressive Explanation of Inference in ‘Hybrid’ Bayesian Networks for Supporting Clinical Decision Making. Proceedings of the Eighth International Conference on Probabilistic Graphical Models, in PMLR 52:275-286.