Deciding What to Learn: A Rate-Distortion Approach

Dilip Arumugam, Benjamin Van Roy
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:373-382, 2021.

Abstract

Agents that learn to select optimal actions represent a prominent focus of the sequential decision-making literature. In the face of a complex environment or constraints on time and resources, however, aiming to synthesize such an optimal policy can become infeasible. These scenarios give rise to an important trade-off between the information an agent must acquire to learn and the sub-optimality of the resulting policy. While an agent designer has a preference for how this trade-off is resolved, existing approaches further require that the designer translate these preferences into a fixed learning target for the agent. In this work, leveraging rate-distortion theory, we automate this process such that the designer need only express their preferences via a single hyperparameter and the agent is endowed with the ability to compute its own learning targets that best achieve the desired trade-off. We establish a general bound on expected discounted regret for an agent that decides what to learn in this manner along with computational experiments that illustrate the expressiveness of designer preferences and even show improvements over Thompson sampling in identifying an optimal policy.
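To make the rate-distortion framing concrete, the sketch below (not the authors' code) shows one way such an information/sub-optimality trade-off could be computed with the classic Blahut-Arimoto algorithm: the source is a posterior over candidate environments, the reproduction alphabet is a set of candidate learning targets, distortion is the expected regret a target incurs, and a single hyperparameter beta expresses the designer's preference. All names and the toy numbers are illustrative assumptions rather than details taken from the paper.

# Minimal sketch, assuming a Blahut-Arimoto computation of the trade-off.
# Environments, targets, and the distortion matrix below are hypothetical.
import numpy as np

def blahut_arimoto(p_env, distortion, beta, n_iters=200, tol=1e-10):
    """Compute a channel q(target | env) trading off rate against distortion.

    p_env      : (n_env,) posterior over candidate environments.
    distortion : (n_env, n_target) expected regret of each target in each env.
    beta       : single hyperparameter encoding the designer's preference
                 (larger beta tolerates less distortion, i.e. less regret).
    """
    n_env, n_target = distortion.shape
    q_target = np.full(n_target, 1.0 / n_target)   # marginal over targets
    for _ in range(n_iters):
        # Optimal conditional for a fixed marginal: q(t|e) ∝ q(t) exp(-beta * d(e, t))
        cond = q_target[None, :] * np.exp(-beta * distortion)
        cond /= cond.sum(axis=1, keepdims=True)
        # Marginal over targets induced by the updated conditional
        new_q = p_env @ cond
        if np.max(np.abs(new_q - q_target)) < tol:
            q_target = new_q
            break
        q_target = new_q
    # Achieved distortion (expected regret) and rate (mutual information, in nats)
    D = np.sum(p_env[:, None] * cond * distortion)
    with np.errstate(divide="ignore", invalid="ignore"):
        log_ratio = np.where(cond > 0, np.log(cond / q_target[None, :]), 0.0)
    R = np.sum(p_env[:, None] * cond * log_ratio)
    return cond, R, D

# Toy usage: 3 candidate environments, 4 candidate learning targets.
p_env = np.array([0.5, 0.3, 0.2])
distortion = np.random.rand(3, 4)   # hypothetical expected-regret matrix
channel, rate, dist = blahut_arimoto(p_env, distortion, beta=5.0)
print(rate, dist)

Sweeping beta traces out the rate-distortion curve: small beta yields targets that are cheap to learn but weak, while large beta recovers targets close to the optimal policy at the cost of more information.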

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-arumugam21a,
  title     = {Deciding What to Learn: A Rate-Distortion Approach},
  author    = {Arumugam, Dilip and Van Roy, Benjamin},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {373--382},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/arumugam21a/arumugam21a.pdf},
  url       = {https://proceedings.mlr.press/v139/arumugam21a.html},
  abstract  = {Agents that learn to select optimal actions represent a prominent focus of the sequential decision-making literature. In the face of a complex environment or constraints on time and resources, however, aiming to synthesize such an optimal policy can become infeasible. These scenarios give rise to an important trade-off between the information an agent must acquire to learn and the sub-optimality of the resulting policy. While an agent designer has a preference for how this trade-off is resolved, existing approaches further require that the designer translate these preferences into a fixed learning target for the agent. In this work, leveraging rate-distortion theory, we automate this process such that the designer need only express their preferences via a single hyperparameter and the agent is endowed with the ability to compute its own learning targets that best achieve the desired trade-off. We establish a general bound on expected discounted regret for an agent that decides what to learn in this manner along with computational experiments that illustrate the expressiveness of designer preferences and even show improvements over Thompson sampling in identifying an optimal policy.}
}
Endnote
%0 Conference Paper
%T Deciding What to Learn: A Rate-Distortion Approach
%A Dilip Arumugam
%A Benjamin Van Roy
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-arumugam21a
%I PMLR
%P 373--382
%U https://proceedings.mlr.press/v139/arumugam21a.html
%V 139
%X Agents that learn to select optimal actions represent a prominent focus of the sequential decision-making literature. In the face of a complex environment or constraints on time and resources, however, aiming to synthesize such an optimal policy can become infeasible. These scenarios give rise to an important trade-off between the information an agent must acquire to learn and the sub-optimality of the resulting policy. While an agent designer has a preference for how this trade-off is resolved, existing approaches further require that the designer translate these preferences into a fixed learning target for the agent. In this work, leveraging rate-distortion theory, we automate this process such that the designer need only express their preferences via a single hyperparameter and the agent is endowed with the ability to compute its own learning targets that best achieve the desired trade-off. We establish a general bound on expected discounted regret for an agent that decides what to learn in this manner along with computational experiments that illustrate the expressiveness of designer preferences and even show improvements over Thompson sampling in identifying an optimal policy.
APA
Arumugam, D. & Van Roy, B. (2021). Deciding What to Learn: A Rate-Distortion Approach. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:373-382. Available from https://proceedings.mlr.press/v139/arumugam21a.html.