Optimizing Language Models for Human Preferences is a Causal Inference Problem
Proceedings of the Fortieth Conference on Uncertainty in Artificial Intelligence, PMLR 244:2250-2270, 2024.
Abstract
As large language models (LLMs) see greater use in academic and commercial settings, there is increasing interest in methods that allow language models to generate texts aligned with human preferences. In this paper, we present an initial exploration of language model optimization for human preferences from *direct outcome datasets*, where each sample consists of a text and an associated numerical outcome measuring the reader’s response. We first propose that language model optimization should be viewed as a *causal problem* to ensure that the model correctly learns the relationship between the text and the outcome. We formalize this causal language optimization problem, and we develop a method, *causal preference optimization* (CPO), that solves an unbiased surrogate objective for the problem. We further extend CPO with *doubly robust* CPO (DR-CPO), which reduces the variance of the surrogate objective while retaining provably strong guarantees on bias. Finally, we empirically demonstrate the effectiveness of (DR-)CPO in optimizing state-of-the-art LLMs for human preferences on direct outcome data, and we validate the robustness of DR-CPO under difficult confounding conditions.
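For readers unfamiliar with doubly robust estimation, the sketch below shows the standard augmented inverse-propensity-weighted (AIPW) form of a doubly robust value estimate from the causal inference literature; it illustrates the general bias/variance trade-off the abstract refers to, not the paper's exact DR-CPO objective, and the symbols (outcome $Y_i$, observed text $T_i$, covariates $X_i$, outcome model $\hat{\mu}$, propensity model $\hat{e}$) are illustrative notation introduced here.

```latex
% Generic doubly robust (AIPW) estimate of the expected outcome for a candidate text t.
% The outcome-model term \hat{\mu}(t, X_i) reduces variance, while the weighted residual
% corrects its bias; the estimate is consistent if either \hat{\mu} or \hat{e} is correct.
\hat{V}_{\mathrm{DR}}(t)
  = \frac{1}{n} \sum_{i=1}^{n}
    \Big[ \hat{\mu}(t, X_i)
      + \frac{\mathbf{1}\{T_i = t\}}{\hat{e}(t \mid X_i)}
        \big( Y_i - \hat{\mu}(t, X_i) \big) \Big]
```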