Efficient First-Order Optimization on the Pareto Set for Multi-Objective Learning under Preference Guidance

Lisha Chen, Quan Xiao, Ellen Hidemi Fukuda, Xinyi Chen, Kun Yuan, Tianyi Chen
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:9443-9486, 2025.

Abstract

Multi-objective learning under user-specified preference is common in real-world problems such as multi-lingual speech recognition under fairness. In this work, we frame such a problem as a semivectorial bilevel optimization problem, whose goal is to optimize a pre-defined preference function, subject to the constraint that the model parameters are weakly Pareto optimal. To solve this problem, we convert the multi-objective constraints to a single-objective constraint through a merit function with an easy-to-evaluate gradient, and then, we use a penalty-based reformulation of the bilevel optimization problem. We theoretically establish the properties of the merit function, and the relations of solutions for the penalty reformulation and the constrained formulation. Then we propose algorithms to solve the reformulated single-level problem, and establish its convergence guarantees. We test the method on various synthetic and real-world problems. The results demonstrate the effectiveness of the proposed method in finding preference-guided optimal solutions to the multi-objective problem.
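
To make the formulation above concrete, here is a minimal sketch of the constrained problem and its penalty reformulation; the symbols F, p, phi, and gamma are illustrative notation introduced for this summary, not necessarily the paper's own. Writing F(x) = (f_1(x), ..., f_m(x)) for the vector of objectives, p for the user-specified preference function, and phi for a merit function that is nonnegative and vanishes at weakly Pareto-optimal points, the two problems read roughly as

    (constrained)   min_x  p(F(x))   subject to   x is weakly Pareto optimal for F,
    (penalized)     min_x  p(F(x)) + gamma * phi(x),   with penalty parameter gamma > 0.

Because the merit function phi is chosen to have an easy-to-evaluate gradient, the penalized problem is a single-level problem that can be tackled with standard first-order methods, which is the route the abstract describes.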

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-chen25bw,
  title     = {Efficient First-Order Optimization on the Pareto Set for Multi-Objective Learning under Preference Guidance},
  author    = {Chen, Lisha and Xiao, Quan and Fukuda, Ellen Hidemi and Chen, Xinyi and Yuan, Kun and Chen, Tianyi},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {9443--9486},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/chen25bw/chen25bw.pdf},
  url       = {https://proceedings.mlr.press/v267/chen25bw.html},
  abstract  = {Multi-objective learning under user-specified preference is common in real-world problems such as multi-lingual speech recognition under fairness. In this work, we frame such a problem as a semivectorial bilevel optimization problem, whose goal is to optimize a pre-defined preference function, subject to the constraint that the model parameters are weakly Pareto optimal. To solve this problem, we convert the multi-objective constraints to a single-objective constraint through a merit function with an easy-to-evaluate gradient, and then, we use a penalty-based reformulation of the bilevel optimization problem. We theoretically establish the properties of the merit function, and the relations of solutions for the penalty reformulation and the constrained formulation. Then we propose algorithms to solve the reformulated single-level problem, and establish its convergence guarantees. We test the method on various synthetic and real-world problems. The results demonstrate the effectiveness of the proposed method in finding preference-guided optimal solutions to the multi-objective problem.}
}
Endnote
%0 Conference Paper
%T Efficient First-Order Optimization on the Pareto Set for Multi-Objective Learning under Preference Guidance
%A Lisha Chen
%A Quan Xiao
%A Ellen Hidemi Fukuda
%A Xinyi Chen
%A Kun Yuan
%A Tianyi Chen
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-chen25bw
%I PMLR
%P 9443--9486
%U https://proceedings.mlr.press/v267/chen25bw.html
%V 267
%X Multi-objective learning under user-specified preference is common in real-world problems such as multi-lingual speech recognition under fairness. In this work, we frame such a problem as a semivectorial bilevel optimization problem, whose goal is to optimize a pre-defined preference function, subject to the constraint that the model parameters are weakly Pareto optimal. To solve this problem, we convert the multi-objective constraints to a single-objective constraint through a merit function with an easy-to-evaluate gradient, and then, we use a penalty-based reformulation of the bilevel optimization problem. We theoretically establish the properties of the merit function, and the relations of solutions for the penalty reformulation and the constrained formulation. Then we propose algorithms to solve the reformulated single-level problem, and establish its convergence guarantees. We test the method on various synthetic and real-world problems. The results demonstrate the effectiveness of the proposed method in finding preference-guided optimal solutions to the multi-objective problem.
APA
Chen, L., Xiao, Q., Fukuda, E.H., Chen, X., Yuan, K. & Chen, T. (2025). Efficient First-Order Optimization on the Pareto Set for Multi-Objective Learning under Preference Guidance. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:9443-9486. Available from https://proceedings.mlr.press/v267/chen25bw.html.