What Should I Know? Using Meta-Gradient Descent for Predictive Feature Discovery in a Single Stream of Experience

Alex Kearney, Anna Koop, Johannes Günther, Patrick M. Pilarski
Proceedings of The 1st Conference on Lifelong Learning Agents, PMLR 199:604-616, 2022.

Abstract

In computational reinforcement learning, a growing body of work seeks to construct an agent’s perception of the world through predictions of future sensations; predictions about environment observations are used as additional input features to enable better goal-directed decision-making. An open challenge in this line of work is determining from the infinitely many predictions that the agent could possibly make which predictions might best support decision-making. This challenge is especially apparent in continual learning problems where a single stream of experience is available to a singular agent. As a primary contribution, we introduce a meta-gradient descent process by which an agent learns 1) what predictions to make, 2) the estimates for its chosen predictions, and 3) how to use learned estimates to generate policies that maximize future reward—all during a single ongoing process of continual learning. In this manuscript we consider predictions expressed as General Value Functions: temporally extended estimates of the accumulation of a future signal. We demonstrate that through interaction with the environment an agent can independently select predictions that resolve partial-observability, resulting in performance similar to, or better than expertly chosen General Value Functions in two domains. By learning, rather than manually specifying these predictions, we enable the agent to identify useful predictions in a self-supervised manner, taking a step towards truly autonomous systems.
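The General Value Functions described above are temporally extended predictions that can be learned online with temporal-difference methods. Below is a minimal illustrative sketch of a linear TD(0) update for a single GVF, where a cumulant signal takes the place of reward and a continuation factor gamma takes the place of the discount; the function name and the toy single-state setup are assumptions for illustration, not the paper's implementation (which additionally meta-learns what to predict).

```python
import numpy as np

def gvf_td_update(w, x, x_next, cumulant, gamma, alpha=0.1):
    """One TD(0) update for a linear General Value Function.

    A GVF estimates the discounted accumulation of a cumulant signal:
    the prediction is v(x) = w . x, and the TD error uses the cumulant
    in place of a reward and gamma as the continuation probability.
    """
    v = w @ x
    v_next = w @ x_next
    delta = cumulant + gamma * v_next - v   # TD error
    return w + alpha * delta * x            # semi-gradient step on w

# Toy usage: predict a constant cumulant of 1.0 under gamma = 0.9
# from a single fixed feature vector; the fixed point is 1 / (1 - 0.9).
w = np.zeros(4)
x = np.ones(4) / 4
for _ in range(5000):
    w = gvf_td_update(w, x, x, cumulant=1.0, gamma=0.9)
print(round(float(w @ x), 2))  # prediction converges to 10.0
```

In the paper's setting, such GVF estimates are fed back to the agent as additional input features, and a meta-gradient process adjusts which cumulants and continuations are worth predicting.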

Cite this Paper


BibTeX
@InProceedings{pmlr-v199-kearney22a,
  title     = {What Should I Know? Using Meta-Gradient Descent for Predictive Feature Discovery in a Single Stream of Experience},
  author    = {Kearney, Alex and Koop, Anna and G\"unther, Johannes and Pilarski, Patrick M.},
  booktitle = {Proceedings of The 1st Conference on Lifelong Learning Agents},
  pages     = {604--616},
  year      = {2022},
  editor    = {Chandar, Sarath and Pascanu, Razvan and Precup, Doina},
  volume    = {199},
  series    = {Proceedings of Machine Learning Research},
  month     = {22--24 Aug},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v199/kearney22a/kearney22a.pdf},
  url       = {https://proceedings.mlr.press/v199/kearney22a.html},
  abstract  = {In computational reinforcement learning, a growing body of work seeks to construct an agent’s perception of the world through predictions of future sensations; predictions about environment observations are used as additional input features to enable better goal-directed decision-making. An open challenge in this line of work is determining from the infinitely many predictions that the agent could possibly make which predictions might best support decision-making. This challenge is especially apparent in continual learning problems where a single stream of experience is available to a singular agent. As a primary contribution, we introduce a meta-gradient descent process by which an agent learns 1) what predictions to make, 2) the estimates for its chosen predictions, and 3) how to use learned estimates to generate policies that maximize future reward—all during a single ongoing process of continual learning. In this manuscript we consider predictions expressed as General Value Functions: temporally extended estimates of the accumulation of a future signal. We demonstrate that through interaction with the environment an agent can independently select predictions that resolve partial-observability, resulting in performance similar to, or better than expertly chosen General Value Functions in two domains. By learning, rather than manually specifying these predictions, we enable the agent to identify useful predictions in a self-supervised manner, taking a step towards truly autonomous systems.}
}
Endnote
%0 Conference Paper
%T What Should I Know? Using Meta-Gradient Descent for Predictive Feature Discovery in a Single Stream of Experience
%A Alex Kearney
%A Anna Koop
%A Johannes Günther
%A Patrick M. Pilarski
%B Proceedings of The 1st Conference on Lifelong Learning Agents
%C Proceedings of Machine Learning Research
%D 2022
%E Sarath Chandar
%E Razvan Pascanu
%E Doina Precup
%F pmlr-v199-kearney22a
%I PMLR
%P 604--616
%U https://proceedings.mlr.press/v199/kearney22a.html
%V 199
%X In computational reinforcement learning, a growing body of work seeks to construct an agent’s perception of the world through predictions of future sensations; predictions about environment observations are used as additional input features to enable better goal-directed decision-making. An open challenge in this line of work is determining from the infinitely many predictions that the agent could possibly make which predictions might best support decision-making. This challenge is especially apparent in continual learning problems where a single stream of experience is available to a singular agent. As a primary contribution, we introduce a meta-gradient descent process by which an agent learns 1) what predictions to make, 2) the estimates for its chosen predictions, and 3) how to use learned estimates to generate policies that maximize future reward—all during a single ongoing process of continual learning. In this manuscript we consider predictions expressed as General Value Functions: temporally extended estimates of the accumulation of a future signal. We demonstrate that through interaction with the environment an agent can independently select predictions that resolve partial-observability, resulting in performance similar to, or better than expertly chosen General Value Functions in two domains. By learning, rather than manually specifying these predictions, we enable the agent to identify useful predictions in a self-supervised manner, taking a step towards truly autonomous systems.
APA
Kearney, A., Koop, A., Günther, J. & Pilarski, P.M. (2022). What Should I Know? Using Meta-Gradient Descent for Predictive Feature Discovery in a Single Stream of Experience. Proceedings of The 1st Conference on Lifelong Learning Agents, in Proceedings of Machine Learning Research 199:604-616. Available from https://proceedings.mlr.press/v199/kearney22a.html.
