Parameterized hardness of active inference

Nils Donselaar
Proceedings of the Ninth International Conference on Probabilistic Graphical Models, PMLR 72:109-120, 2018.

Abstract

Within the field of cognitive neuroscience, predictive processing is an increasingly popular unifying account of cognitive capacities, including action and perception, which posits that these rely on probabilistic generative models to predict sensory input or the consequences of one’s behaviour. In the corresponding literature one frequently encounters naive claims about the computational resources required to successfully employ such models, while actual complexity analyses are often lacking. In this paper we study the problem of selecting a course of action which yields the greatest reduction in prediction error between the intended outcome and the current state, known in this area as active inference. Modelling the problem in terms of Bayesian networks and the relative entropy (Kullback-Leibler divergence) between a target and an induced distribution, we derive parameterized (in)tractability results extending the $\mathsf{NP}^{\mathsf{PP}}$-hardness classification found in Kwisthout (2014). These results show that, contrary to common belief, the size of the prediction error does not determine whether active inference is tractable, not even when the number of actions and outcomes to be considered is restricted. Moreover, this conclusion appears to extend even to an approximate version of the problem. We believe these results can be of interest both to cognitive scientists seeking to evaluate the plausibility of their explanatory theories and to researchers working on probabilistic models, as they relate to existing work on the hardness of observation selection in decision making.
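
To make the selection criterion concrete, here is a minimal sketch in Python (not from the paper; the outcome distributions and action names are invented for illustration): each candidate action induces a distribution over outcomes, and the agent picks the action whose induced distribution has the smallest Kullback-Leibler divergence from the target distribution.

import math

def kl_divergence(p, q):
    # Relative entropy D(p || q) = sum over x of p(x) * log(p(x) / q(x)).
    # Assumes q(x) > 0 wherever p(x) > 0; terms with p(x) = 0 contribute 0.
    return sum(px * math.log(px / qx) for px, qx in zip(p, q) if px > 0)

# Hypothetical target distribution over three outcomes, and the outcome
# distributions induced by two candidate actions (all values invented).
target = [0.8, 0.1, 0.1]
induced = {"action_a": [0.6, 0.3, 0.1], "action_b": [0.2, 0.4, 0.4]}

# Active inference as action selection: choose the action minimising the
# divergence between the target and the induced distribution.
best_action = min(induced, key=lambda a: kl_divergence(target, induced[a]))
print(best_action)  # -> action_a for these values

Note that the sketch assumes the induced distributions are given directly, whereas in the Bayesian network setting of the paper computing them is itself a hard inference problem; this is part of why the overall problem is so intractable.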

Cite this Paper


BibTeX
@InProceedings{pmlr-v72-donselaar18a,
  title     = {Parameterized hardness of active inference},
  author    = {Donselaar, Nils},
  booktitle = {Proceedings of the Ninth International Conference on Probabilistic Graphical Models},
  pages     = {109--120},
  year      = {2018},
  editor    = {Kratochvíl, Václav and Studený, Milan},
  volume    = {72},
  series    = {Proceedings of Machine Learning Research},
  month     = {11--14 Sep},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v72/donselaar18a/donselaar18a.pdf},
  url       = {https://proceedings.mlr.press/v72/donselaar18a.html}
}
APA
Donselaar, N. (2018). Parameterized hardness of active inference. Proceedings of the Ninth International Conference on Probabilistic Graphical Models, in Proceedings of Machine Learning Research 72:109-120. Available from https://proceedings.mlr.press/v72/donselaar18a.html.
