Symbiotic Local Search for Small Decision Tree Policies in MDPs

Roman Andriushchenko, Milan Ceska, Debraj Chakraborty, Sebastian Junges, Jan Kretinsky, Filip Macák
Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, PMLR 286:132-148, 2025.

Abstract

We study decision making policies in Markov decision processes (MDPs). Two key performance indicators of such policies are their value and their interpretability. On the one hand, policies that optimize value can be efficiently computed via a plethora of standard methods. However, the representation of these policies may prevent their interpretability. On the other hand, policies with good interpretability, such as policies represented by a small decision tree, are computationally hard to obtain. This paper contributes a local search approach to find policies with good value, represented by small decision trees. Our local search symbiotically combines learning decision trees from value-optimal policies with symbolic approaches that optimize the size of the decision tree within a constrained neighborhood. Our empirical evaluation shows that this combination provides drastically smaller decision trees for MDPs that are significantly larger than what can be handled by optimal decision tree learners.
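The following minimal Python sketch (not the authors' implementation, and not code from the paper) illustrates the kind of loop the abstract describes on a toy grid MDP: compute a value-optimal policy by value iteration, fit decision trees of decreasing size to that policy with scikit-learn, and keep the smallest tree whose induced policy still attains the optimal value. The toy MDP, the scikit-learn learner, and the simple size-shrinking heuristic standing in for the paper's symbolic neighborhood optimization are all illustrative assumptions.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Toy MDP: states are (x, y) grid cells; action 0 moves right, action 1 moves up.
# Being in the corner cell yields reward 1. P[a][s][s'] and R[s] are dense arrays.
N = 4
states = [(x, y) for x in range(N) for y in range(N)]
idx = {s: i for i, s in enumerate(states)}
A = 2
P = np.zeros((A, len(states), len(states)))
R = np.array([1.0 if s == (N - 1, N - 1) else 0.0 for s in states])
for (x, y) in states:
    P[0][idx[(x, y)]][idx[(min(x + 1, N - 1), y)]] = 1.0
    P[1][idx[(x, y)]][idx[(x, min(y + 1, N - 1))]] = 1.0

gamma = 0.95

def value_of(policy):
    # Evaluate a deterministic policy (array of actions) from the initial state (0, 0).
    V = np.zeros(len(states))
    for _ in range(500):
        V = R + gamma * np.array([P[policy[i]][i] @ V for i in range(len(states))])
    return V[idx[(0, 0)]]

# Step 1: value-optimal policy via value iteration.
V = np.zeros(len(states))
for _ in range(500):
    Q = np.stack([R + gamma * P[a] @ V for a in range(A)])
    V = Q.max(axis=0)
opt_policy = Q.argmax(axis=0)
opt_value = value_of(opt_policy)

# Step 2: fit trees of decreasing size to the optimal policy; keep the smallest
# tree whose induced policy is still (near-)optimal. This replaces the paper's
# symbolic neighborhood optimization with a crude heuristic for illustration only.
X = np.array(states)   # state features: the (x, y) coordinates
y = opt_policy         # labels: optimal actions
best = None
for leaves in range(16, 1, -1):
    tree = DecisionTreeClassifier(max_leaf_nodes=leaves, random_state=0).fit(X, y)
    induced = tree.predict(X)
    if value_of(induced) >= opt_value - 1e-6:
        best = (leaves, tree)
if best is not None:
    print("smallest acceptable tree:", best[0], "leaves")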

Cite this Paper


BibTeX
@InProceedings{pmlr-v286-andriushchenko25a,
  title     = {Symbiotic Local Search for Small Decision Tree Policies in MDPs},
  author    = {Andriushchenko, Roman and Ceska, Milan and Chakraborty, Debraj and Junges, Sebastian and Kretinsky, Jan and Mac\'{a}k, Filip},
  booktitle = {Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence},
  pages     = {132--148},
  year      = {2025},
  editor    = {Chiappa, Silvia and Magliacane, Sara},
  volume    = {286},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--25 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v286/main/assets/andriushchenko25a/andriushchenko25a.pdf},
  url       = {https://proceedings.mlr.press/v286/andriushchenko25a.html},
  abstract  = {We study decision making policies in Markov decision processes (MDPs). Two key performance indicators of such policies are their value and their interpretability. On the one hand, policies that optimize value can be efficiently computed via a plethora of standard methods. However, the representation of these policies may prevent their interpretability. On the other hand, policies with good interpretability, such as policies represented by a small decision tree, are computationally hard to obtain. This paper contributes a local search approach to find policies with good value, represented by small decision trees. Our local search symbiotically combines learning decision trees from value-optimal policies with symbolic approaches that optimize the size of the decision tree within a constrained neighborhood. Our empirical evaluation shows that this combination provides drastically smaller decision trees for MDPs that are significantly larger than what can be handled by optimal decision tree learners.}
}
Endnote
%0 Conference Paper
%T Symbiotic Local Search for Small Decision Tree Policies in MDPs
%A Roman Andriushchenko
%A Milan Ceska
%A Debraj Chakraborty
%A Sebastian Junges
%A Jan Kretinsky
%A Filip Macák
%B Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2025
%E Silvia Chiappa
%E Sara Magliacane
%F pmlr-v286-andriushchenko25a
%I PMLR
%P 132--148
%U https://proceedings.mlr.press/v286/andriushchenko25a.html
%V 286
%X We study decision making policies in Markov decision processes (MDPs). Two key performance indicators of such policies are their value and their interpretability. On the one hand, policies that optimize value can be efficiently computed via a plethora of standard methods. However, the representation of these policies may prevent their interpretability. On the other hand, policies with good interpretability, such as policies represented by a small decision tree, are computationally hard to obtain. This paper contributes a local search approach to find policies with good value, represented by small decision trees. Our local search symbiotically combines learning decision trees from value-optimal policies with symbolic approaches that optimize the size of the decision tree within a constrained neighborhood. Our empirical evaluation shows that this combination provides drastically smaller decision trees for MDPs that are significantly larger than what can be handled by optimal decision tree learners.
APA
Andriushchenko, R., Ceska, M., Chakraborty, D., Junges, S., Kretinsky, J. & Macák, F. (2025). Symbiotic Local Search for Small Decision Tree Policies in MDPs. Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 286:132-148. Available from https://proceedings.mlr.press/v286/andriushchenko25a.html.