Estimating the Optimal Covariance with Imperfect Mean in Diffusion Probabilistic Models

Fan Bao, Chongxuan Li, Jiacheng Sun, Jun Zhu, Bo Zhang
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:1555-1584, 2022.

Abstract

Diffusion probabilistic models (DPMs) are a class of powerful deep generative models (DGMs). Despite their success, their iterative generation process over all timesteps is much less efficient than that of other DGMs such as GANs. Generation quality on a subset of timesteps is therefore crucial, and it is strongly influenced by the covariance design in DPMs. In this work, we consider diagonal and full covariances to improve the expressive power of DPMs. We derive the optimal result for such covariances, and then correct it for the case where the mean of the DPM is imperfect. Both the optimal covariance and its correction can be decomposed into terms of conditional expectations over functions of the noise. Building on this decomposition, we propose to estimate the optimal covariance and its correction under an imperfect mean by learning these conditional expectations. Our method applies to DPMs with both discrete and continuous timesteps. Our implementation uses the diagonal covariance for computational efficiency, together with a parameter-sharing scheme and a two-stage training process. Empirically, our method outperforms a wide variety of covariance designs on likelihood, and improves sample quality, especially with a small number of timesteps.
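As a toy illustration of the decomposition the abstract mentions, one natural reading is that the diagonal covariance at a timestep reduces to a conditional variance of the noise, E[ε² | x_t] − E[ε | x_t]². The sketch below is hypothetical and is not the authors' implementation: a 1-D Gaussian data distribution stands in for real data, simple least-squares regressors stand in for the noise-prediction and squared-noise-prediction networks, and `abar` is an illustrative noise-schedule value.

```python
import numpy as np

rng = np.random.default_rng(0)
abar = 0.6                      # hypothetical alpha_bar_t from a noise schedule
a, b = np.sqrt(abar), np.sqrt(1 - abar)

# Forward diffusion on a toy 1-D Gaussian data distribution x0 ~ N(0, 1):
#   x_t = sqrt(abar) * x0 + sqrt(1 - abar) * eps,  eps ~ N(0, 1)
x0 = rng.standard_normal(200_000)
eps = rng.standard_normal(200_000)
xt = a * x0 + b * eps

# Learn two conditional expectations of functions of the noise, here with
# least-squares regressors instead of neural networks:
#   mean head:   E[eps   | x_t]  (the usual noise-prediction target)
#   second head: E[eps^2 | x_t]  (a squared-noise-prediction target)
phi = np.stack([np.ones_like(xt), xt, xt**2], axis=1)  # features: 1, x, x^2
w_mean, *_ = np.linalg.lstsq(phi, eps, rcond=None)
w_sq, *_ = np.linalg.lstsq(phi, eps**2, rcond=None)

# Per-sample diagonal covariance estimate: the conditional noise variance
#   Var(eps | x_t) = E[eps^2 | x_t] - E[eps | x_t]^2
cond_var = phi @ w_sq - (phi @ w_mean) ** 2

# For Gaussian x0 this variance is known in closed form: Var(eps | x_t) = abar,
# so the estimate should concentrate near 0.6.
print(cond_var.mean())
```

The point of the sketch is the subtraction in the last step: the covariance estimate is assembled from two learned conditional expectations rather than fixed by hand, which is what lets it stay nonnegative-on-average and adapt to the data even when the learned mean is only approximate.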

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-bao22d,
  title     = {Estimating the Optimal Covariance with Imperfect Mean in Diffusion Probabilistic Models},
  author    = {Bao, Fan and Li, Chongxuan and Sun, Jiacheng and Zhu, Jun and Zhang, Bo},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {1555--1584},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/bao22d/bao22d.pdf},
  url       = {https://proceedings.mlr.press/v162/bao22d.html},
  abstract  = {Diffusion probabilistic models (DPMs) are a class of powerful deep generative models (DGMs). Despite their success, the iterative generation process over the full timesteps is much less efficient than other DGMs such as GANs. Thus, the generation performance on a subset of timesteps is crucial, which is greatly influenced by the covariance design in DPMs. In this work, we consider diagonal and full covariances to improve the expressive power of DPMs. We derive the optimal result for such covariances, and then correct it when the mean of DPMs is imperfect. Both the optimal and the corrected ones can be decomposed into terms of conditional expectations over functions of noise. Building upon it, we propose to estimate the optimal covariance and its correction given imperfect mean by learning these conditional expectations. Our method can be applied to DPMs with both discrete and continuous timesteps. We consider the diagonal covariance in our implementation for computational efficiency. For an efficient practical implementation, we adopt a parameter sharing scheme and a two-stage training process. Empirically, our method outperforms a wide variety of covariance design on likelihood results, and improves the sample quality especially on a small number of timesteps.}
}
Endnote
%0 Conference Paper
%T Estimating the Optimal Covariance with Imperfect Mean in Diffusion Probabilistic Models
%A Fan Bao
%A Chongxuan Li
%A Jiacheng Sun
%A Jun Zhu
%A Bo Zhang
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-bao22d
%I PMLR
%P 1555--1584
%U https://proceedings.mlr.press/v162/bao22d.html
%V 162
%X Diffusion probabilistic models (DPMs) are a class of powerful deep generative models (DGMs). Despite their success, the iterative generation process over the full timesteps is much less efficient than other DGMs such as GANs. Thus, the generation performance on a subset of timesteps is crucial, which is greatly influenced by the covariance design in DPMs. In this work, we consider diagonal and full covariances to improve the expressive power of DPMs. We derive the optimal result for such covariances, and then correct it when the mean of DPMs is imperfect. Both the optimal and the corrected ones can be decomposed into terms of conditional expectations over functions of noise. Building upon it, we propose to estimate the optimal covariance and its correction given imperfect mean by learning these conditional expectations. Our method can be applied to DPMs with both discrete and continuous timesteps. We consider the diagonal covariance in our implementation for computational efficiency. For an efficient practical implementation, we adopt a parameter sharing scheme and a two-stage training process. Empirically, our method outperforms a wide variety of covariance design on likelihood results, and improves the sample quality especially on a small number of timesteps.
APA
Bao, F., Li, C., Sun, J., Zhu, J. & Zhang, B. (2022). Estimating the Optimal Covariance with Imperfect Mean in Diffusion Probabilistic Models. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:1555-1584. Available from https://proceedings.mlr.press/v162/bao22d.html.

Related Material