COS-DPO: Conditioned One-Shot Multi-Objective Fine-Tuning Framework

Yinuo Ren, Tesi Xiao, Michael Shavlovsky, Lexing Ying, Holakou Rahmanian
Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, PMLR 286:3525-3551, 2025.

Abstract

In LLM alignment and many other ML applications, one often faces the *Multi-Objective Fine-Tuning* (MOFT) problem, *i.e.*, fine-tuning an existing model with datasets labeled w.r.t. different objectives simultaneously. To address the challenge, we propose a *Conditioned One-Shot* fine-tuning framework (COS-DPO) that extends the Direct Preference Optimization technique, originally developed for efficient LLM alignment with preference data, to accommodate the MOFT settings. By direct conditioning on the weight across auxiliary objectives, our Weight-COS-DPO method enjoys an efficient one-shot training process for profiling the Pareto front and is capable of achieving comprehensive trade-off solutions even in the post-training stage. Based on our theoretical findings on the linear transformation properties of the loss function, we further propose the Temperature-COS-DPO method that augments the temperature parameter to the model input, enhancing the flexibility of post-training control over the trade-offs between the main and auxiliary objectives. We demonstrate the effectiveness and efficiency of the COS-DPO framework through its applications to various tasks, including the Learning-to-Rank (LTR) and LLM alignment tasks, highlighting its viability for large-scale ML deployments.
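As a rough illustration of the idea sketched in the abstract (not the authors' exact formulation), the snippet below extends the standard DPO logistic loss with a weight vector over auxiliary objectives: the conditioned policy is assumed to receive the sampled weights as an extra input, and the per-objective DPO log-odds are assumed to be combined linearly by those weights before the logistic loss is applied. The function name `weight_cos_dpo_loss`, the linear scalarization, and the default `beta` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def weight_cos_dpo_loss(
    policy_logps_chosen: torch.Tensor,    # (batch, K) log pi_theta(y_w | x, w), per objective
    policy_logps_rejected: torch.Tensor,  # (batch, K) log pi_theta(y_l | x, w), per objective
    ref_logps_chosen: torch.Tensor,       # (batch, K) log pi_ref(y_w | x), per objective
    ref_logps_rejected: torch.Tensor,     # (batch, K) log pi_ref(y_l | x), per objective
    weights: torch.Tensor,                # (batch, K) conditioning weights sampled from the simplex
    beta: float = 0.1,                    # DPO temperature (hypothetical default)
) -> torch.Tensor:
    # Per-objective implicit reward margins, exactly as in standard DPO.
    chosen_margin = policy_logps_chosen - ref_logps_chosen
    rejected_margin = policy_logps_rejected - ref_logps_rejected
    per_objective_logits = beta * (chosen_margin - rejected_margin)  # (batch, K)

    # Assumption: the per-objective log-odds are scalarized linearly by the
    # conditioning weight vector before the logistic loss is applied.
    scalarized_logits = (weights * per_objective_logits).sum(dim=-1)  # (batch,)

    # Standard DPO logistic (Bradley--Terry) loss on the scalarized margin.
    return -F.logsigmoid(scalarized_logits).mean()


if __name__ == "__main__":
    batch, num_obj = 8, 3
    # Random log-probabilities stand in for policy/reference evaluations.
    logps = [torch.randn(batch, num_obj) for _ in range(4)]
    weights = torch.distributions.Dirichlet(torch.ones(num_obj)).sample((batch,))
    print(weight_cos_dpo_loss(*logps, weights).item())
```

In a one-shot training pass of this kind, the weights can be drawn afresh from the simplex for each batch, so a single conditioned model can be steered across the trade-off front after training; the temperature-conditioned variant described in the abstract would analogously expose the temperature parameter as a model input rather than a fixed hyperparameter.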

Cite this Paper


BibTeX
@InProceedings{pmlr-v286-ren25a,
  title     = {COS-DPO: Conditioned One-Shot Multi-Objective Fine-Tuning Framework},
  author    = {Ren, Yinuo and Xiao, Tesi and Shavlovsky, Michael and Ying, Lexing and Rahmanian, Holakou},
  booktitle = {Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence},
  pages     = {3525--3551},
  year      = {2025},
  editor    = {Chiappa, Silvia and Magliacane, Sara},
  volume    = {286},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--25 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v286/main/assets/ren25a/ren25a.pdf},
  url       = {https://proceedings.mlr.press/v286/ren25a.html},
  abstract  = {In LLM alignment and many other ML applications, one often faces the *Multi-Objective Fine-Tuning* (MOFT) problem, *i.e.*, fine-tuning an existing model with datasets labeled w.r.t. different objectives simultaneously. To address the challenge, we propose a *Conditioned One-Shot* fine-tuning framework (COS-DPO) that extends the Direct Preference Optimization technique, originally developed for efficient LLM alignment with preference data, to accommodate the MOFT settings. By direct conditioning on the weight across auxiliary objectives, our Weight-COS-DPO method enjoys an efficient one-shot training process for profiling the Pareto front and is capable of achieving comprehensive trade-off solutions even in the post-training stage. Based on our theoretical findings on the linear transformation properties of the loss function, we further propose the Temperature-COS-DPO method that augments the temperature parameter to the model input, enhancing the flexibility of post-training control over the trade-offs between the main and auxiliary objectives. We demonstrate the effectiveness and efficiency of the COS-DPO framework through its applications to various tasks, including the Learning-to-Rank (LTR) and LLM alignment tasks, highlighting its viability for large-scale ML deployments.}
}
Endnote
%0 Conference Paper
%T COS-DPO: Conditioned One-Shot Multi-Objective Fine-Tuning Framework
%A Yinuo Ren
%A Tesi Xiao
%A Michael Shavlovsky
%A Lexing Ying
%A Holakou Rahmanian
%B Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2025
%E Silvia Chiappa
%E Sara Magliacane
%F pmlr-v286-ren25a
%I PMLR
%P 3525--3551
%U https://proceedings.mlr.press/v286/ren25a.html
%V 286
%X In LLM alignment and many other ML applications, one often faces the *Multi-Objective Fine-Tuning* (MOFT) problem, *i.e.*, fine-tuning an existing model with datasets labeled w.r.t. different objectives simultaneously. To address the challenge, we propose a *Conditioned One-Shot* fine-tuning framework (COS-DPO) that extends the Direct Preference Optimization technique, originally developed for efficient LLM alignment with preference data, to accommodate the MOFT settings. By direct conditioning on the weight across auxiliary objectives, our Weight-COS-DPO method enjoys an efficient one-shot training process for profiling the Pareto front and is capable of achieving comprehensive trade-off solutions even in the post-training stage. Based on our theoretical findings on the linear transformation properties of the loss function, we further propose the Temperature-COS-DPO method that augments the temperature parameter to the model input, enhancing the flexibility of post-training control over the trade-offs between the main and auxiliary objectives. We demonstrate the effectiveness and efficiency of the COS-DPO framework through its applications to various tasks, including the Learning-to-Rank (LTR) and LLM alignment tasks, highlighting its viability for large-scale ML deployments.
APA
Ren, Y., Xiao, T., Shavlovsky, M., Ying, L. & Rahmanian, H. (2025). COS-DPO: Conditioned One-Shot Multi-Objective Fine-Tuning Framework. Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 286:3525-3551. Available from https://proceedings.mlr.press/v286/ren25a.html.