DF$^2$: Distribution-Free Decision-Focused Learning

Lingkai Kong, Wenhao Mu, Jiaming Cui, Yuchen Zhuang, B. Aditya Prakash, Bo Dai, Chao Zhang
Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, PMLR 286:2269-2290, 2025.

Abstract

Decision-focused learning (DFL), which differentiates through the KKT conditions, has recently emerged as a powerful approach for predict-then-optimize problems. However, under probabilistic settings, DFL faces three major bottlenecks: model mismatch error, sample average approximation error, and gradient approximation error. Model mismatch error stems from the misalignment between the model’s parameterized predictive distribution and the true probability distribution. Sample average approximation error arises when using finite samples to approximate the expected optimization objective. Gradient approximation error occurs when the objectives are non-convex and KKT conditions cannot be directly applied. In this paper, we present DF$^2$, the first distribution-free decision-focused learning method designed to mitigate these three bottlenecks. Rather than depending on a task-specific forecaster that requires precise model assumptions, our method directly learns the expected optimization function during training. To efficiently learn the function in a data-driven manner, we devise an attention-based model architecture inspired by the distribution-based parameterization of the expected objective. We evaluate DF$^2$ on two synthetic and three real-world problems, demonstrating its effectiveness. Our code can be found at: https://github.com/Lingkai-Kong/DF2.
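To make the three error sources concrete, consider the standard probabilistic predict-then-optimize setup (the notation below is illustrative, not necessarily the paper's): a forecaster $p_\theta(y \mid x)$ predicts uncertain parameters $y$ from features $x$, and a decision $a$ is chosen to maximize the expected task objective $f$. A minimal sketch of where the sample average approximation enters:

\[
a^*(x) \;=\; \arg\max_{a \in \mathcal{A}} \; \mathbb{E}_{y \sim p_\theta(y \mid x)}\big[f(a, y)\big]
\;\approx\; \arg\max_{a \in \mathcal{A}} \; \frac{1}{K} \sum_{k=1}^{K} f(a, y_k), \qquad y_k \sim p_\theta(y \mid x).
\]

Model mismatch is the gap between $p_\theta(y \mid x)$ and the true distribution $p(y \mid x)$; the finite-$K$ average introduces sample average approximation error; and when $f$ is non-convex, differentiating $a^*(x)$ through the KKT conditions of this surrogate adds gradient approximation error. DF$^2$ sidesteps the forecaster and the sampling step by learning the expected objective $\mathbb{E}[f(a, y)]$ directly as a function of $(x, a)$.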

Cite this Paper


BibTeX
@InProceedings{pmlr-v286-kong25a,
  title     = {DF$^2$: Distribution-Free Decision-Focused Learning},
  author    = {Kong, Lingkai and Mu, Wenhao and Cui, Jiaming and Zhuang, Yuchen and Prakash, B. Aditya and Dai, Bo and Zhang, Chao},
  booktitle = {Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence},
  pages     = {2269--2290},
  year      = {2025},
  editor    = {Chiappa, Silvia and Magliacane, Sara},
  volume    = {286},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--25 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v286/main/assets/kong25a/kong25a.pdf},
  url       = {https://proceedings.mlr.press/v286/kong25a.html}
}
Endnote
%0 Conference Paper
%T DF$^2$: Distribution-Free Decision-Focused Learning
%A Lingkai Kong
%A Wenhao Mu
%A Jiaming Cui
%A Yuchen Zhuang
%A B. Aditya Prakash
%A Bo Dai
%A Chao Zhang
%B Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2025
%E Silvia Chiappa
%E Sara Magliacane
%F pmlr-v286-kong25a
%I PMLR
%P 2269--2290
%U https://proceedings.mlr.press/v286/kong25a.html
%V 286
APA
Kong, L., Mu, W., Cui, J., Zhuang, Y., Prakash, B. A., Dai, B., & Zhang, C. (2025). DF$^2$: Distribution-Free Decision-Focused Learning. Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 286:2269-2290. Available from https://proceedings.mlr.press/v286/kong25a.html.
