Learning Transparent Reward Models via Unsupervised Feature Selection

Daulet Baimukashev, Gokhan Alcan, Kevin Sebastian Luck, Ville Kyrki
Proceedings of The 8th Conference on Robot Learning, PMLR 270:5312-5325, 2025.

Abstract

In complex real-world tasks such as robotic manipulation and autonomous driving, collecting expert demonstrations is often more straightforward than specifying precise learning objectives and task descriptions. Learning from expert data can be achieved through behavioral cloning or by learning a reward function, i.e., inverse reinforcement learning. The latter allows for training with additional data outside the training distribution, guided by the inferred reward function. We propose a novel approach to construct compact and interpretable reward models from automatically selected state features. These inferred rewards have an explicit form and enable the learning of policies that closely match expert behavior by training standard reinforcement learning algorithms from scratch. We validate our method’s performance in various robotic environments with continuous and high-dimensional state spaces.
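To make the pipeline the abstract describes concrete, below is a minimal illustrative sketch, not the paper's actual algorithm: it (1) selects a compact subset of state features with an unsupervised criterion (here a placeholder variance ranking) and (2) fits an explicit linear reward that scores expert states above non-expert states via a simple logistic contrast. The selection criterion, the fitting objective, and all function names are assumptions made for illustration only.

```python
# Illustrative sketch only -- NOT the paper's algorithm. It mirrors the general
# pattern from the abstract: pick a few state features without supervision,
# then fit an explicit (linear) reward on them from expert demonstrations.
import numpy as np

rng = np.random.default_rng(0)

def select_features(states: np.ndarray, k: int) -> np.ndarray:
    """Unsupervised selection: keep the k highest-variance dimensions.
    (Placeholder criterion; the paper's selection method may differ.)"""
    variances = states.var(axis=0)
    return np.argsort(variances)[-k:]

def fit_linear_reward(expert: np.ndarray, other: np.ndarray,
                      idx: np.ndarray, lr: float = 0.1, steps: int = 500):
    """Fit weights w so that r(s) = w^T s[idx] scores expert states above
    non-expert states, via a simple logistic contrast (placeholder objective)."""
    X = np.vstack([expert[:, idx], other[:, idx]])
    y = np.concatenate([np.ones(len(expert)), np.zeros(len(other))])
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid of the linear reward
        w += lr * X.T @ (y - p) / len(y)   # gradient ascent on log-likelihood
    return w

# Toy data: 200 "expert" and 200 random states in a 10-D state space.
expert_states = rng.normal(loc=1.0, size=(200, 10))
random_states = rng.normal(loc=0.0, size=(200, 10))

idx = select_features(np.vstack([expert_states, random_states]), k=3)
w = fit_linear_reward(expert_states, random_states, idx)

def reward(s: np.ndarray) -> float:
    """Transparent reward: an explicit weighted sum of 3 selected features."""
    return float(w @ s[idx])

print("selected feature indices:", idx)
print("reward weights:", np.round(w, 3))
```

The resulting reward is transparent in the sense the abstract emphasizes: it is an explicit weighted sum of a few named state features, so each weight can be inspected directly, and the function can be handed to any standard RL algorithm as a reward signal for training from scratch.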

Cite this Paper


BibTeX
@InProceedings{pmlr-v270-baimukashev25a,
  title     = {Learning Transparent Reward Models via Unsupervised Feature Selection},
  author    = {Baimukashev, Daulet and Alcan, Gokhan and Luck, Kevin Sebastian and Kyrki, Ville},
  booktitle = {Proceedings of The 8th Conference on Robot Learning},
  pages     = {5312--5325},
  year      = {2025},
  editor    = {Agrawal, Pulkit and Kroemer, Oliver and Burgard, Wolfram},
  volume    = {270},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--09 Nov},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v270/main/assets/baimukashev25a/baimukashev25a.pdf},
  url       = {https://proceedings.mlr.press/v270/baimukashev25a.html},
  abstract  = {In complex real-world tasks such as robotic manipulation and autonomous driving, collecting expert demonstrations is often more straightforward than specifying precise learning objectives and task descriptions. Learning from expert data can be achieved through behavioral cloning or by learning a reward function, i.e., inverse reinforcement learning. The latter allows for training with additional data outside the training distribution, guided by the inferred reward function. We propose a novel approach to construct compact and interpretable reward models from automatically selected state features. These inferred rewards have an explicit form and enable the learning of policies that closely match expert behavior by training standard reinforcement learning algorithms from scratch. We validate our method’s performance in various robotic environments with continuous and high-dimensional state spaces.}
}
Endnote
%0 Conference Paper
%T Learning Transparent Reward Models via Unsupervised Feature Selection
%A Daulet Baimukashev
%A Gokhan Alcan
%A Kevin Sebastian Luck
%A Ville Kyrki
%B Proceedings of The 8th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Pulkit Agrawal
%E Oliver Kroemer
%E Wolfram Burgard
%F pmlr-v270-baimukashev25a
%I PMLR
%P 5312--5325
%U https://proceedings.mlr.press/v270/baimukashev25a.html
%V 270
%X In complex real-world tasks such as robotic manipulation and autonomous driving, collecting expert demonstrations is often more straightforward than specifying precise learning objectives and task descriptions. Learning from expert data can be achieved through behavioral cloning or by learning a reward function, i.e., inverse reinforcement learning. The latter allows for training with additional data outside the training distribution, guided by the inferred reward function. We propose a novel approach to construct compact and interpretable reward models from automatically selected state features. These inferred rewards have an explicit form and enable the learning of policies that closely match expert behavior by training standard reinforcement learning algorithms from scratch. We validate our method’s performance in various robotic environments with continuous and high-dimensional state spaces.
APA
Baimukashev, D., Alcan, G., Luck, K. S., & Kyrki, V. (2025). Learning Transparent Reward Models via Unsupervised Feature Selection. Proceedings of The 8th Conference on Robot Learning, in Proceedings of Machine Learning Research 270:5312-5325. Available from https://proceedings.mlr.press/v270/baimukashev25a.html.