Dynamic Weights in Multi-Objective Deep Reinforcement Learning

Axel Abels, Diederik Roijers, Tom Lenaerts, Ann Nowé, Denis Steckelmacher
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:11-20, 2019.

Abstract

Many real-world decision problems are characterized by multiple conflicting objectives which must be balanced based on their relative importance. In the dynamic weights setting, the relative importance changes over time, and specialized algorithms that deal with such change, such as the tabular Reinforcement Learning (RL) algorithm by Natarajan and Tadepalli (2005), are required. However, this earlier work is not feasible for RL settings that necessitate the use of function approximators. We generalize across weight changes and high-dimensional inputs by proposing a multi-objective Q-network whose outputs are conditioned on the relative importance of objectives, and we introduce Diverse Experience Replay (DER) to counter the inherent non-stationarity of the dynamic weights setting. We perform an extensive experimental evaluation, compare our methods to adapted algorithms from deep multi-task/multi-objective reinforcement learning, and show that our proposed network in combination with DER dominates these adapted algorithms across weight-change scenarios and problem domains.
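To make the core idea concrete, the following is a minimal sketch (in PyTorch, not the authors' code) of a weight-conditioned multi-objective Q-network: the network receives the current weight vector alongside the state, outputs one Q-value per (action, objective) pair, and selects actions by linearly scalarizing those vectors with the weights. The fully connected architecture, layer sizes, and names are illustrative assumptions, not the paper's exact implementation or training procedure.

    # Illustrative sketch only: a Q-network conditioned on the objective weights.
    import torch
    import torch.nn as nn

    class ConditionedMOQNetwork(nn.Module):
        def __init__(self, state_dim, n_actions, n_objectives, hidden=128):
            super().__init__()
            self.n_actions = n_actions
            self.n_objectives = n_objectives
            # Input is the state concatenated with the weight vector w.
            self.net = nn.Sequential(
                nn.Linear(state_dim + n_objectives, hidden),
                nn.ReLU(),
                nn.Linear(hidden, n_actions * n_objectives),
            )

        def forward(self, state, weights):
            # state: (batch, state_dim); weights: (batch, n_objectives), non-negative, summing to 1
            x = torch.cat([state, weights], dim=-1)
            q = self.net(x)  # (batch, n_actions * n_objectives)
            # One Q-value vector (over objectives) per action.
            return q.view(-1, self.n_actions, self.n_objectives)

        def greedy_action(self, state, weights):
            q = self.forward(state, weights)                 # (batch, A, O)
            scalarized = (q * weights.unsqueeze(1)).sum(-1)  # (batch, A): w . Q(s, a; w)
            return scalarized.argmax(dim=-1)

Greedy actions chosen this way maximize the linearly scalarized value w . Q(s, a; w) for the currently active weight vector, which is the quantity a dynamic-weights agent must keep tracking as w changes over time.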

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-abels19a,
  title     = {Dynamic Weights in Multi-Objective Deep Reinforcement Learning},
  author    = {Abels, Axel and Roijers, Diederik and Lenaerts, Tom and Now{\'e}, Ann and Steckelmacher, Denis},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {11--20},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/abels19a/abels19a.pdf},
  url       = {https://proceedings.mlr.press/v97/abels19a.html},
  abstract  = {Many real-world decision problems are characterized by multiple conflicting objectives which must be balanced based on their relative importance. In the dynamic weights setting the relative importance changes over time and specialized algorithms that deal with such change, such as a tabular Reinforcement Learning (RL) algorithm by Natarajan and Tadepalli (2005), are required. However, this earlier work is not feasible for RL settings that necessitate the use of function approximators. We generalize across weight changes and high-dimensional inputs by proposing a multi-objective Q-network whose outputs are conditioned on the relative importance of objectives and we introduce Diverse Experience Replay (DER) to counter the inherent non-stationarity of the Dynamic Weights setting. We perform an extensive experimental evaluation and compare our methods to adapted algorithms from Deep Multi-Task/Multi-Objective Reinforcement Learning and show that our proposed network in combination with DER dominates these adapted algorithms across weight change scenarios and problem domains.}
}
Endnote
%0 Conference Paper
%T Dynamic Weights in Multi-Objective Deep Reinforcement Learning
%A Axel Abels
%A Diederik Roijers
%A Tom Lenaerts
%A Ann Nowé
%A Denis Steckelmacher
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-abels19a
%I PMLR
%P 11--20
%U https://proceedings.mlr.press/v97/abels19a.html
%V 97
%X Many real-world decision problems are characterized by multiple conflicting objectives which must be balanced based on their relative importance. In the dynamic weights setting the relative importance changes over time and specialized algorithms that deal with such change, such as a tabular Reinforcement Learning (RL) algorithm by Natarajan and Tadepalli (2005), are required. However, this earlier work is not feasible for RL settings that necessitate the use of function approximators. We generalize across weight changes and high-dimensional inputs by proposing a multi-objective Q-network whose outputs are conditioned on the relative importance of objectives and we introduce Diverse Experience Replay (DER) to counter the inherent non-stationarity of the Dynamic Weights setting. We perform an extensive experimental evaluation and compare our methods to adapted algorithms from Deep Multi-Task/Multi-Objective Reinforcement Learning and show that our proposed network in combination with DER dominates these adapted algorithms across weight change scenarios and problem domains.
APA
Abels, A., Roijers, D., Lenaerts, T., Nowé, A. & Steckelmacher, D. (2019). Dynamic Weights in Multi-Objective Deep Reinforcement Learning. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:11-20. Available from https://proceedings.mlr.press/v97/abels19a.html.
