Conditions on Preference Relations that Guarantee the Existence of Optimal Policies

Jonathan Colaço Carr, Prakash Panangaden, Doina Precup
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:3916-3924, 2024.

Abstract

Learning from Preferential Feedback (LfPF) plays an essential role in training Large Language Models, as well as certain types of interactive learning agents. However, a substantial gap exists between the theory and application of LfPF algorithms. Current results guaranteeing the existence of optimal policies in LfPF problems assume that both the preferences and transition dynamics are determined by a Markov Decision Process. We introduce the Direct Preference Process, a new framework for analyzing LfPF problems in partially-observable, non-Markovian environments. Within this framework, we establish conditions that guarantee the existence of optimal policies by considering the ordinal structure of the preferences. We show that a decision-making problem can have optimal policies – that are characterized by recursive optimality equations – even when no reward function can express the learning goal. These findings underline the need to explore preference-based learning strategies which do not assume that preferences are generated by reward.
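
As a rough guide to the main claim, the following is a minimal sketch of the ordinal notion of optimality the abstract refers to. The notation is ours, not the paper's, and assumes preferences are modeled as a relation over policy-induced trajectory distributions; see the paper for the precise Direct Preference Process definitions.

% Sketch only; notation is illustrative, not taken from the paper.
% \Pi: set of policies; \mu_\pi: distribution over trajectories induced by policy \pi;
% \succeq: the (ordinal) preference relation, not assumed to be generated by a reward.
\[
  \pi^{\ast} \in \Pi \ \text{is optimal} \iff \mu_{\pi^{\ast}} \succeq \mu_{\pi} \quad \text{for all } \pi \in \Pi .
\]
% Reward-based RL would correspond to the special case
% \mu \succeq \nu \iff \mathbb{E}_{\mu}\!\big[\textstyle\sum_t r_t\big] \ge \mathbb{E}_{\nu}\!\big[\textstyle\sum_t r_t\big];
% the paper gives conditions under which an optimal \pi^{\ast} exists even when no such reward r exists.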

Cite this Paper


BibTeX
@InProceedings{pmlr-v238-colaco-carr24a,
  title     = {Conditions on Preference Relations that Guarantee the Existence of Optimal Policies},
  author    = {Cola\c{c}o Carr, Jonathan and Panangaden, Prakash and Precup, Doina},
  booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics},
  pages     = {3916--3924},
  year      = {2024},
  editor    = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen},
  volume    = {238},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--04 May},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v238/colaco-carr24a/colaco-carr24a.pdf},
  url       = {https://proceedings.mlr.press/v238/colaco-carr24a.html},
  abstract  = {Learning from Preferential Feedback (LfPF) plays an essential role in training Large Language Models, as well as certain types of interactive learning agents. However, a substantial gap exists between the theory and application of LfPF algorithms. Current results guaranteeing the existence of optimal policies in LfPF problems assume that both the preferences and transition dynamics are determined by a Markov Decision Process. We introduce the Direct Preference Process, a new framework for analyzing LfPF problems in partially-observable, non-Markovian environments. Within this framework, we establish conditions that guarantee the existence of optimal policies by considering the ordinal structure of the preferences. We show that a decision-making problem can have optimal policies – that are characterized by recursive optimality equations – even when no reward function can express the learning goal. These findings underline the need to explore preference-based learning strategies which do not assume that preferences are generated by reward.}
}
Endnote
%0 Conference Paper
%T Conditions on Preference Relations that Guarantee the Existence of Optimal Policies
%A Jonathan Colaço Carr
%A Prakash Panangaden
%A Doina Precup
%B Proceedings of The 27th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2024
%E Sanjoy Dasgupta
%E Stephan Mandt
%E Yingzhen Li
%F pmlr-v238-colaco-carr24a
%I PMLR
%P 3916--3924
%U https://proceedings.mlr.press/v238/colaco-carr24a.html
%V 238
%X Learning from Preferential Feedback (LfPF) plays an essential role in training Large Language Models, as well as certain types of interactive learning agents. However, a substantial gap exists between the theory and application of LfPF algorithms. Current results guaranteeing the existence of optimal policies in LfPF problems assume that both the preferences and transition dynamics are determined by a Markov Decision Process. We introduce the Direct Preference Process, a new framework for analyzing LfPF problems in partially-observable, non-Markovian environments. Within this framework, we establish conditions that guarantee the existence of optimal policies by considering the ordinal structure of the preferences. We show that a decision-making problem can have optimal policies – that are characterized by recursive optimality equations – even when no reward function can express the learning goal. These findings underline the need to explore preference-based learning strategies which do not assume that preferences are generated by reward.
APA
Colaço Carr, J., Panangaden, P., & Precup, D. (2024). Conditions on Preference Relations that Guarantee the Existence of Optimal Policies. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 238:3916-3924. Available from https://proceedings.mlr.press/v238/colaco-carr24a.html.
