How to Trust Your Diffusion Model: A Convex Optimization Approach to Conformal Risk Control

Jacopo Teneggi, Matthew Tivnan, Web Stayman, Jeremias Sulam
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:33940-33960, 2023.

Abstract

Score-based generative models, informally referred to as diffusion models, continue to grow in popularity across several important domains and tasks. While they provide high-quality and diverse samples from empirical distributions, important questions remain on the reliability and trustworthiness of these sampling procedures for their responsible use in critical scenarios. Conformal prediction is a modern tool to construct finite-sample, distribution-free uncertainty guarantees for any black-box predictor. In this work, we focus on image-to-image regression tasks and we present a generalization of the Risk-Controlling Prediction Sets (RCPS) procedure, which we term $K$-RCPS, that allows one to $(i)$ provide entrywise calibrated intervals for future samples of any diffusion model, and $(ii)$ control a certain notion of risk with respect to a ground truth image with minimal mean interval length. Unlike existing conformal risk control procedures, ours relies on a novel convex optimization approach that allows for multidimensional risk control while provably minimizing the mean interval length. We illustrate our approach on two real-world image denoising problems: on natural images of faces as well as on computed tomography (CT) scans of the abdomen, demonstrating state-of-the-art performance.
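The RCPS-style calibration the abstract builds on can be sketched in a few lines. The following is an illustrative single-parameter version only (not the authors' $K$-RCPS, which calibrates $K$ widths via a convex program): given per-pixel predictions and uncertainty estimates, it scans a grid of interval widths $\lambda$ and keeps the smallest one whose Hoeffding upper confidence bound on the miscoverage risk stays below a target level $\varepsilon$. All function and variable names here are hypothetical.

```python
import numpy as np

def hoeffding_ucb(losses, delta):
    # Hoeffding upper confidence bound on the mean of losses in [0, 1],
    # valid with probability at least 1 - delta.
    n = len(losses)
    return losses.mean() + np.sqrt(np.log(1.0 / delta) / (2 * n))

def calibrate_lambda(preds, sigmas, targets, eps=0.1, delta=0.05,
                     lambdas=np.linspace(0.0, 5.0, 201)):
    # Intervals are [pred - lam * sigma, pred + lam * sigma].
    # Scan candidate widths from largest to smallest and keep the
    # smallest lambda whose risk UCB stays below eps (RCPS-style).
    chosen = lambdas[-1]
    for lam in lambdas[::-1]:
        # Per-image loss: fraction of pixels falling outside the interval.
        losses = np.array([
            np.mean(np.abs(t - p) > lam * s)
            for p, s, t in zip(preds, sigmas, targets)
        ])
        if hoeffding_ucb(losses, delta) <= eps:
            chosen = lam
        else:
            break  # risk is monotone in lambda, so we can stop here
    return chosen
```

Because the per-image loss is monotone in $\lambda$, the returned width controls the risk at level $\varepsilon$ with probability at least $1-\delta$ over the calibration set; the paper's contribution is to replace the single scalar $\lambda$ with $K$ region-specific widths chosen to minimize the mean interval length.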

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-teneggi23a,
  title     = {How to Trust Your Diffusion Model: A Convex Optimization Approach to Conformal Risk Control},
  author    = {Teneggi, Jacopo and Tivnan, Matthew and Stayman, Web and Sulam, Jeremias},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {33940--33960},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/teneggi23a/teneggi23a.pdf},
  url       = {https://proceedings.mlr.press/v202/teneggi23a.html},
  abstract  = {Score-based generative models, informally referred to as diffusion models, continue to grow in popularity across several important domains and tasks. While they provide high-quality and diverse samples from empirical distributions, important questions remain on the reliability and trustworthiness of these sampling procedures for their responsible use in critical scenarios. Conformal prediction is a modern tool to construct finite-sample, distribution-free uncertainty guarantees for any black-box predictor. In this work, we focus on image-to-image regression tasks and we present a generalization of the Risk-Controlling Prediction Sets (RCPS) procedure, which we term $K$-RCPS, that allows one to $(i)$ provide entrywise calibrated intervals for future samples of any diffusion model, and $(ii)$ control a certain notion of risk with respect to a ground truth image with minimal mean interval length. Unlike existing conformal risk control procedures, ours relies on a novel convex optimization approach that allows for multidimensional risk control while provably minimizing the mean interval length. We illustrate our approach on two real-world image denoising problems: on natural images of faces as well as on computed tomography (CT) scans of the abdomen, demonstrating state-of-the-art performance.}
}
Endnote
%0 Conference Paper
%T How to Trust Your Diffusion Model: A Convex Optimization Approach to Conformal Risk Control
%A Jacopo Teneggi
%A Matthew Tivnan
%A Web Stayman
%A Jeremias Sulam
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-teneggi23a
%I PMLR
%P 33940--33960
%U https://proceedings.mlr.press/v202/teneggi23a.html
%V 202
%X Score-based generative models, informally referred to as diffusion models, continue to grow in popularity across several important domains and tasks. While they provide high-quality and diverse samples from empirical distributions, important questions remain on the reliability and trustworthiness of these sampling procedures for their responsible use in critical scenarios. Conformal prediction is a modern tool to construct finite-sample, distribution-free uncertainty guarantees for any black-box predictor. In this work, we focus on image-to-image regression tasks and we present a generalization of the Risk-Controlling Prediction Sets (RCPS) procedure, which we term $K$-RCPS, that allows one to $(i)$ provide entrywise calibrated intervals for future samples of any diffusion model, and $(ii)$ control a certain notion of risk with respect to a ground truth image with minimal mean interval length. Unlike existing conformal risk control procedures, ours relies on a novel convex optimization approach that allows for multidimensional risk control while provably minimizing the mean interval length. We illustrate our approach on two real-world image denoising problems: on natural images of faces as well as on computed tomography (CT) scans of the abdomen, demonstrating state-of-the-art performance.
APA
Teneggi, J., Tivnan, M., Stayman, W., & Sulam, J. (2023). How to Trust Your Diffusion Model: A Convex Optimization Approach to Conformal Risk Control. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:33940-33960. Available from https://proceedings.mlr.press/v202/teneggi23a.html.
