Human inductive biases for aversive continual learning — a hierarchical Bayesian nonparametric model

Sashank Pisupati, Isabel M Berwian, Jamie Chiu, Yongjing Ren, Yael Niv
Proceedings of The 2nd Conference on Lifelong Learning Agents, PMLR 232:337-346, 2023.

Abstract

Humans and animals often display remarkable continual learning abilities, adapting quickly to changing environments while retaining, reusing, and accumulating old knowledge over a lifetime. Unfortunately, in environments with adverse outcomes, the inductive biases supporting such forms of learning can turn maladaptive, yielding persistent negative beliefs that are hard to extinguish, such as those prevalent in anxiety disorders. Here, we present and model human behavioral data from a fear-conditioning task with changing latent contexts, in which participants had to predict whether visual stimuli would be followed by an aversive scream. We show that participants’ learning in our task spans three different regimes — with old knowledge either being updated, discarded (forgotten), or retained and reused in new contexts (remembered) by different participants. The latter regime corresponds to (maladaptive) spontaneous recovery of fear. We demonstrate using simulations that these behavioral regimes can be captured by varying inductive biases in Bayesian non-parametric models of contextual learning. In particular, we show that the “remembering” regime can be produced by “persistent” variants of hierarchical Dirichlet process priors over contexts and negatively biased “deterministic” beta distribution priors over outcomes. Such inductive biases correspond well to widely observed “core beliefs” that may have adaptive value in some lifelong-learning environments, at the cost of being maladaptive in other environments and tasks such as ours. Our work offers a tractable window into human inductive biases for continual learning algorithms, and could potentially help identify individual differences in learning strategies relevant for response to psychotherapy.
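To make the abstract's two key ingredients concrete, the sketch below simulates (i) a sticky Chinese Restaurant Process as a simple stand-in for the paper's “persistent” hierarchical Dirichlet process prior over latent contexts, and (ii) a negatively biased, near-deterministic Beta prior over outcome probabilities. All function names and hyperparameter values (`alpha`, `kappa`, `a`, `b`) are illustrative assumptions, not the paper's actual model or fitted values.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_contexts(n_trials, alpha=1.0, kappa=0.0, rng=rng):
    """Sample a context sequence from a (sticky) Chinese Restaurant Process.

    alpha: concentration -- larger values favor creating new contexts.
    kappa: self-transition bonus -- a crude stand-in for the paper's
           "persistent" prior, biasing reuse of the current context
           (kappa=0 recovers the ordinary CRP).
    """
    contexts = [0]
    counts = {0: 1}  # visits per context so far
    for _ in range(1, n_trials):
        # Existing contexts weighted by popularity, plus a stickiness bonus
        # for the context active on the previous trial.
        weights = {c: n + (kappa if c == contexts[-1] else 0.0)
                   for c, n in counts.items()}
        weights[len(counts)] = alpha  # weight on a brand-new context
        cs, ws = zip(*weights.items())
        p = np.array(ws) / sum(ws)
        c = int(rng.choice(cs, p=p))
        contexts.append(c)
        counts[c] = counts.get(c, 0) + 1
    return contexts

# Negatively biased "deterministic" Beta prior over P(scream | stimulus):
# both shape parameters < 1 puts most mass near 0 and 1 (near-deterministic
# beliefs), and a > b skews that mass toward the aversive outcome.
a, b = 0.5, 0.25  # hypothetical hyperparameters
p_scream = rng.beta(a, b, size=5)
```

With a large `kappa`, sampled sequences dwell in the current context for long runs, mirroring how a persistence bias favors retaining and reusing old contextual knowledge rather than inferring a new context.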

Cite this Paper


BibTeX
@InProceedings{pmlr-v232-pisupati23a,
  title     = {Human inductive biases for aversive continual learning — a hierarchical Bayesian nonparametric model},
  author    = {Pisupati, Sashank and Berwian, Isabel M and Chiu, Jamie and Ren, Yongjing and Niv, Yael},
  booktitle = {Proceedings of The 2nd Conference on Lifelong Learning Agents},
  pages     = {337--346},
  year      = {2023},
  editor    = {Chandar, Sarath and Pascanu, Razvan and Sedghi, Hanie and Precup, Doina},
  volume    = {232},
  series    = {Proceedings of Machine Learning Research},
  month     = {22--25 Aug},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v232/pisupati23a/pisupati23a.pdf},
  url       = {https://proceedings.mlr.press/v232/pisupati23a.html},
  abstract  = {Humans and animals often display remarkable continual learning abilities, adapting quickly to changing environments while retaining, reusing, and accumulating old knowledge over a lifetime. Unfortunately, in environments with adverse outcomes, the inductive biases supporting such forms of learning can turn maladaptive, yielding persistent negative beliefs that are hard to extinguish, such as those prevalent in anxiety disorders. Here, we present and model human behavioral data from a fear-conditioning task with changing latent contexts, in which participants had to predict whether visual stimuli would be followed by an aversive scream. We show that participants’ learning in our task spans three different regimes — with old knowledge either being updated, discarded (forgotten), or retained and reused in new contexts (remembered) by different participants. The latter regime corresponds to (maladaptive) spontaneous recovery of fear. We demonstrate using simulations that these behavioral regimes can be captured by varying inductive biases in Bayesian non-parametric models of contextual learning. In particular, we show that the “remembering” regime can be produced by “persistent” variants of hierarchical Dirichlet process priors over contexts and negatively biased “deterministic” beta distribution priors over outcomes. Such inductive biases correspond well to widely observed “core beliefs” that may have adaptive value in some lifelong-learning environments, at the cost of being maladaptive in other environments and tasks such as ours. Our work offers a tractable window into human inductive biases for continual learning algorithms, and could potentially help identify individual differences in learning strategies relevant for response to psychotherapy.}
}
Endnote
%0 Conference Paper
%T Human inductive biases for aversive continual learning — a hierarchical Bayesian nonparametric model
%A Sashank Pisupati
%A Isabel M Berwian
%A Jamie Chiu
%A Yongjing Ren
%A Yael Niv
%B Proceedings of The 2nd Conference on Lifelong Learning Agents
%C Proceedings of Machine Learning Research
%D 2023
%E Sarath Chandar
%E Razvan Pascanu
%E Hanie Sedghi
%E Doina Precup
%F pmlr-v232-pisupati23a
%I PMLR
%P 337--346
%U https://proceedings.mlr.press/v232/pisupati23a.html
%V 232
%X Humans and animals often display remarkable continual learning abilities, adapting quickly to changing environments while retaining, reusing, and accumulating old knowledge over a lifetime. Unfortunately, in environments with adverse outcomes, the inductive biases supporting such forms of learning can turn maladaptive, yielding persistent negative beliefs that are hard to extinguish, such as those prevalent in anxiety disorders. Here, we present and model human behavioral data from a fear-conditioning task with changing latent contexts, in which participants had to predict whether visual stimuli would be followed by an aversive scream. We show that participants’ learning in our task spans three different regimes — with old knowledge either being updated, discarded (forgotten), or retained and reused in new contexts (remembered) by different participants. The latter regime corresponds to (maladaptive) spontaneous recovery of fear. We demonstrate using simulations that these behavioral regimes can be captured by varying inductive biases in Bayesian non-parametric models of contextual learning. In particular, we show that the “remembering” regime can be produced by “persistent” variants of hierarchical Dirichlet process priors over contexts and negatively biased “deterministic” beta distribution priors over outcomes. Such inductive biases correspond well to widely observed “core beliefs” that may have adaptive value in some lifelong-learning environments, at the cost of being maladaptive in other environments and tasks such as ours. Our work offers a tractable window into human inductive biases for continual learning algorithms, and could potentially help identify individual differences in learning strategies relevant for response to psychotherapy.
APA
Pisupati, S., Berwian, I.M., Chiu, J., Ren, Y. & Niv, Y. (2023). Human inductive biases for aversive continual learning — a hierarchical Bayesian nonparametric model. Proceedings of The 2nd Conference on Lifelong Learning Agents, in Proceedings of Machine Learning Research 232:337-346. Available from https://proceedings.mlr.press/v232/pisupati23a.html.
