Learning by Doing: Controlling a Dynamical System using Causality, Control, and Reinforcement Learning

Sebastian Weichwald, Søren Wengel Mogensen, Tabitha Edith Lee, Dominik Baumann, Oliver Kroemer, Isabelle Guyon, Sebastian Trimpe, Jonas Peters, Niklas Pfister
Proceedings of the NeurIPS 2021 Competitions and Demonstrations Track, PMLR 176:246-258, 2022.

Abstract

Questions in causality, control, and reinforcement learning go beyond the classical machine learning task of prediction under i.i.d. observations. Instead, these fields consider the problem of learning how to actively perturb a system to achieve a certain effect on a response variable. Arguably, they have complementary views on the problem: In control, one usually aims to first identify the system using excitation strategies and then apply model-based design techniques to control it. In (non-model-based) reinforcement learning, one directly optimizes a reward. In causality, a central focus is the identifiability of causal structure. We believe that combining these different views can create synergies, and this competition is meant as a first step toward them. The participants had access to observational and (offline) interventional data generated by dynamical systems. Track CHEM considers an open-loop problem in which a single impulse at the beginning of the dynamics can be set, while Track ROBO considers a closed-loop problem in which control variables can be set at each time step. The goal in both tracks is to infer controls that drive the system to a desired state. Code is open-sourced (https://github.com/LearningByDoingCompetition/learningbydoing-comp) to reproduce the winning solutions of the competition and to facilitate trying out new methods on the competition tasks.
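To make the open-loop (Track CHEM) versus closed-loop (Track ROBO) distinction concrete, the following is a minimal sketch on a toy linear dynamical system. Everything in it — the dynamics matrices, the impulse value, the feedback gain, and all names — is a hypothetical stand-in chosen for illustration and is not the competition's actual systems, data, or API.

# Hedged illustration only: the dynamics, dimensions, and names below are
# hypothetical stand-ins, NOT the competition's actual systems or API.
import numpy as np

A = np.array([[0.9, 0.1],
              [0.0, 0.95]])        # toy state-transition matrix (assumed)
B = np.array([[1.0],
              [0.5]])              # toy control-input matrix (assumed)
target = np.array([1.0, 0.0])      # desired state the controls should drive the system to


def simulate(u_fn, T=50):
    """Roll out x_{t+1} = A x_t + B u_t with controls u_t = u_fn(t, x_t)."""
    x = np.zeros(2)
    for t in range(T):
        x = A @ x + B @ np.atleast_1d(u_fn(t, x))
    return x


# Track CHEM style (open loop): a single impulse at the start, no further input.
impulse = 2.0                      # would be chosen offline from the provided data
open_loop = lambda t, x: impulse if t == 0 else 0.0

# Track ROBO style (closed loop): the control is recomputed from the observed state at every step.
gain = 0.8                         # simple proportional feedback gain (assumed)
closed_loop = lambda t, x: gain * (target[0] - x[0])

print("open-loop final state:  ", simulate(open_loop))
print("closed-loop final state:", simulate(closed_loop))

In the open-loop setting the control is fixed before the rollout begins, whereas in the closed-loop setting the controller can react to the observed state at every time step, which is what makes feedback strategies possible.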

Cite this Paper


BibTeX
@InProceedings{pmlr-v176-weichwald22a,
  title     = {Learning by Doing: Controlling a Dynamical System using Causality, Control, and Reinforcement Learning},
  author    = {Weichwald, Sebastian and Mogensen, S{\o}ren Wengel and Lee, Tabitha Edith and Baumann, Dominik and Kroemer, Oliver and Guyon, Isabelle and Trimpe, Sebastian and Peters, Jonas and Pfister, Niklas},
  booktitle = {Proceedings of the NeurIPS 2021 Competitions and Demonstrations Track},
  pages     = {246--258},
  year      = {2022},
  editor    = {Kiela, Douwe and Ciccone, Marco and Caputo, Barbara},
  volume    = {176},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--14 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v176/weichwald22a/weichwald22a.pdf},
  url       = {https://proceedings.mlr.press/v176/weichwald22a.html}
}
APA
Weichwald, S., Mogensen, S.W., Lee, T.E., Baumann, D., Kroemer, O., Guyon, I., Trimpe, S., Peters, J. & Pfister, N. (2022). Learning by Doing: Controlling a Dynamical System using Causality, Control, and Reinforcement Learning. Proceedings of the NeurIPS 2021 Competitions and Demonstrations Track, in Proceedings of Machine Learning Research 176:246-258. Available from https://proceedings.mlr.press/v176/weichwald22a.html.
