A distributional view on multi-objective policy optimization

Abbas Abdolmaleki, Sandy Huang, Leonard Hasenclever, Michael Neunert, Francis Song, Martina Zambelli, Murilo Martins, Nicolas Heess, Raia Hadsell, Martin Riedmiller
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:11-22, 2020.

Abstract

Many real-world problems require trading off multiple competing objectives. However, these objectives are often in different units and/or scales, which can make it challenging for practitioners to express numerical preferences over objectives in their native units. In this paper we propose a novel algorithm for multi-objective reinforcement learning that enables setting desired preferences for objectives in a scale-invariant way. We propose to learn an action distribution for each objective, and we use supervised learning to fit a parametric policy to a combination of these distributions. We demonstrate the effectiveness of our approach on challenging high-dimensional real and simulated robotics tasks, and show that setting different preferences in our framework allows us to trace out the space of nondominated solutions.
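
The recipe the abstract describes, one improved action distribution per objective combined and then distilled into a single parametric policy by supervised learning, can be illustrated with a short sketch. The Python below is a minimal illustration under assumptions introduced here for exposition, not the paper's implementation: the sampled-action setup, the per-objective KL budgets `epsilon`, and the helper `temperature_for` are all hypothetical. The key point it demonstrates is scale invariance: each temperature is chosen to satisfy a KL budget rather than fixed in reward units, so rescaling an objective's Q-values leaves its influence on the policy essentially unchanged.

# A minimal sketch of the scale-invariant policy-improvement idea from the
# abstract; not the authors' exact algorithm. Names like `temperature_for`
# and the toy Q-values are illustrative assumptions.
import numpy as np

def temperature_for(q, epsilon, etas=np.logspace(-3, 3, 2001)):
    """Pick the smallest temperature eta such that softmax(q / eta), taken
    over the sampled actions, stays within KL budget `epsilon` of the
    uniform sampling distribution. Because eta is tied to a KL budget
    rather than to reward units, rescaling q rescales eta and leaves the
    resulting weights (nearly) unchanged."""
    n = len(q)
    for eta in etas:  # ascending; KL shrinks as eta grows
        w = np.exp((q - q.max()) / eta)
        w /= w.sum()
        kl = float(np.sum(w * np.log(np.maximum(w * n, 1e-12))))  # KL(w || uniform)
        if kl <= epsilon:
            return eta
    return etas[-1]

# Toy setup: candidate actions sampled from the current policy, and two
# objectives whose returns live on very different scales.
rng = np.random.default_rng(0)
actions = rng.normal(0.0, 1.0, size=100)
q_values = {
    "task_reward": -(actions - 1.0) ** 2,        # prefers actions near 1
    "action_penalty": -1000.0 * actions ** 2,    # much larger scale, prefers 0
}
epsilon = {"task_reward": 0.1, "action_penalty": 0.1}  # preferences as KL budgets

# Step 1: one improved action distribution per objective (weights over samples).
weights = {}
for k, q in q_values.items():
    eta = temperature_for(q, epsilon[k])
    w = np.exp((q - q.max()) / eta)
    weights[k] = w / w.sum()

# Step 2: combine the per-objective distributions and fit a parametric policy
# by supervised learning; here, weighted maximum likelihood for a 1-D Gaussian.
combined = sum(weights.values())
combined /= combined.sum()
mean = np.sum(combined * actions)
std = np.sqrt(np.sum(combined * (actions - mean) ** 2))
print(f"fitted policy: mean={mean:.3f}, std={std:.3f}")

Despite the thousandfold difference in reward scale, both objectives contribute comparably because each KL budget, not each reward magnitude, controls its influence; raising one objective's epsilon shifts the fitted policy toward that objective, which is how preferences are expressed in this framework.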

Cite this Paper

BibTeX
@InProceedings{pmlr-v119-abdolmaleki20a,
  title     = {A distributional view on multi-objective policy optimization},
  author    = {Abdolmaleki, Abbas and Huang, Sandy and Hasenclever, Leonard and Neunert, Michael and Song, Francis and Zambelli, Martina and Martins, Murilo and Heess, Nicolas and Hadsell, Raia and Riedmiller, Martin},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {11--22},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/abdolmaleki20a/abdolmaleki20a.pdf},
  url       = {https://proceedings.mlr.press/v119/abdolmaleki20a.html}
}
Endnote
%0 Conference Paper
%T A distributional view on multi-objective policy optimization
%A Abbas Abdolmaleki
%A Sandy Huang
%A Leonard Hasenclever
%A Michael Neunert
%A Francis Song
%A Martina Zambelli
%A Murilo Martins
%A Nicolas Heess
%A Raia Hadsell
%A Martin Riedmiller
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-abdolmaleki20a
%I PMLR
%P 11--22
%U https://proceedings.mlr.press/v119/abdolmaleki20a.html
%V 119
APA
Abdolmaleki, A., Huang, S., Hasenclever, L., Neunert, M., Song, F., Zambelli, M., Martins, M., Heess, N., Hadsell, R. & Riedmiller, M. (2020). A distributional view on multi-objective policy optimization. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:11-22. Available from https://proceedings.mlr.press/v119/abdolmaleki20a.html.
