Semi-Modular Inference: enhanced learning in multi-modular models by tempering the influence of components

Christian Carmona, Geoff Nicholls
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:4226-4235, 2020.

Abstract

Bayesian statistical inference loses predictive optimality when generative models are misspecified. Working within an existing coherent loss-based generalisation of Bayesian inference, we show existing Modular/Cut-model inference is coherent, and write down a new family of Semi-Modular Inference (SMI) schemes, indexed by an influence parameter, with Bayesian inference and Cut-models as special cases. We give a meta-learning criterion and estimation procedure to choose the inference scheme. This returns Bayesian inference when there is no misspecification. The framework applies naturally to Multi-modular models. Cut-model inference allows directed information flow from well-specified modules to misspecified modules, but not vice versa. An existing alternative power posterior method gives tunable but undirected control of information flow, improving prediction in some settings. In contrast, SMI allows \emph{tunable and directed} information flow between modules. We illustrate our methods on two standard test cases from the literature and a motivating archaeological data set.
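The abstract contrasts Cut-model inference (no feedback from a suspect module), full Bayesian inference, and tempered schemes in between, indexed by an influence parameter. As a rough illustration of the tempering idea only (this is not code from the paper, and all names here are hypothetical), the sketch below computes a conjugate normal-mean posterior whose likelihood contribution is raised to a power eta: eta = 0 discards the module's data entirely and recovers the prior, eta = 1 recovers the ordinary posterior, and intermediate values give partial influence.

```python
def tempered_posterior(z, eta, prior_mean=0.0, prior_var=1.0, lik_var=1.0):
    """Posterior for a normal mean phi with the likelihood tempered by eta.

    Model: z_i ~ N(phi, lik_var), phi ~ N(prior_mean, prior_var).
    Raising each N(z_i | phi, lik_var) term to the power eta scales the
    effective sample size by eta, so the usual conjugate update applies.
    Returns the (mean, variance) of the tempered Gaussian posterior.
    """
    n = len(z)
    precision = 1.0 / prior_var + eta * n / lik_var
    mean = (prior_mean / prior_var + eta * sum(z) / lik_var) / precision
    return mean, 1.0 / precision


if __name__ == "__main__":
    data = [1.8, 2.2, 2.1, 1.9, 2.0]
    for eta in (0.0, 0.5, 1.0):
        m, v = tempered_posterior(data, eta)
        # eta = 0.0 reproduces the prior: mean 0.0, variance 1.0
        print(f"eta={eta}: mean={m:.3f}, var={v:.4f}")
```

This captures only the undirected power-posterior device the abstract mentions as prior work; SMI itself additionally directs the flow of information between modules, which requires the paper's two-stage construction.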

Cite this Paper


BibTeX
@InProceedings{pmlr-v108-carmona20a,
  title     = {Semi-Modular Inference: enhanced learning in multi-modular models by tempering the influence of components},
  author    = {Carmona, Christian and Nicholls, Geoff},
  booktitle = {Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics},
  pages     = {4226--4235},
  year      = {2020},
  editor    = {Chiappa, Silvia and Calandra, Roberto},
  volume    = {108},
  series    = {Proceedings of Machine Learning Research},
  month     = {26--28 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v108/carmona20a/carmona20a.pdf},
  url       = {https://proceedings.mlr.press/v108/carmona20a.html},
  abstract  = {Bayesian statistical inference loses predictive optimality when generative models are misspecified. Working within an existing coherent loss-based generalisation of Bayesian inference, we show existing Modular/Cut-model inference is coherent, and write down a new family of Semi-Modular Inference (SMI) schemes, indexed by an influence parameter, with Bayesian inference and Cut-models as special cases. We give a meta-learning criterion and estimation procedure to choose the inference scheme. This returns Bayesian inference when there is no misspecification. The framework applies naturally to Multi-modular models. Cut-model inference allows directed information flow from well-specified modules to misspecified modules, but not vice versa. An existing alternative power posterior method gives tunable but undirected control of information flow, improving prediction in some settings. In contrast, SMI allows \emph{tunable and directed} information flow between modules. We illustrate our methods on two standard test cases from the literature and a motivating archaeological data set.}
}
Endnote
%0 Conference Paper
%T Semi-Modular Inference: enhanced learning in multi-modular models by tempering the influence of components
%A Christian Carmona
%A Geoff Nicholls
%B Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2020
%E Silvia Chiappa
%E Roberto Calandra
%F pmlr-v108-carmona20a
%I PMLR
%P 4226--4235
%U https://proceedings.mlr.press/v108/carmona20a.html
%V 108
%X Bayesian statistical inference loses predictive optimality when generative models are misspecified. Working within an existing coherent loss-based generalisation of Bayesian inference, we show existing Modular/Cut-model inference is coherent, and write down a new family of Semi-Modular Inference (SMI) schemes, indexed by an influence parameter, with Bayesian inference and Cut-models as special cases. We give a meta-learning criterion and estimation procedure to choose the inference scheme. This returns Bayesian inference when there is no misspecification. The framework applies naturally to Multi-modular models. Cut-model inference allows directed information flow from well-specified modules to misspecified modules, but not vice versa. An existing alternative power posterior method gives tunable but undirected control of information flow, improving prediction in some settings. In contrast, SMI allows \emph{tunable and directed} information flow between modules. We illustrate our methods on two standard test cases from the literature and a motivating archaeological data set.
APA
Carmona, C. & Nicholls, G. (2020). Semi-Modular Inference: enhanced learning in multi-modular models by tempering the influence of components. Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 108:4226-4235. Available from https://proceedings.mlr.press/v108/carmona20a.html.