SteinDreamer: Variance Reduction for Text-to-3D Score Distillation via Stein Identity

Peihao Wang, Zhiwen Fan, Dejia Xu, Dilin Wang, Sreyas Mohan, Forrest Iandola, Rakesh Ranjan, Yilei Li, Qiang Liu, Zhangyang Wang, Vikas Chandra
Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, PMLR 258:4024-4032, 2025.

Abstract

Score distillation has emerged as one of the most prevalent approaches for text-to-3D asset synthesis. Essentially, score distillation updates 3D parameters by lifting and back-propagating scores averaged over different views. In this paper, we reveal that the gradient estimation in score distillation inherently suffers from high variance. Through the lens of variance reduction, the effectiveness of SDS and VSD can be interpreted as applications of various control variates to the Monte Carlo estimator of the distilled score. Motivated by this rethinking and based on Stein’s identity, we propose a more general solution to reduce variance for score distillation, termed Stein Score Distillation (SSD). SSD incorporates control variates constructed via Stein’s identity, allowing for arbitrary baseline functions. This enables us to include flexible guidance priors and network architectures to explicitly optimize for variance reduction. In our experiments, the overall pipeline, dubbed SteinDreamer, is implemented by instantiating the control variate with a monocular depth estimator. The results show that SSD can effectively reduce the distillation variance and consistently improve visual quality for both object- and scene-level generation.
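The core mechanism behind SSD — a zero-mean control variate built from Stein’s identity — can be illustrated on a one-dimensional toy problem. This is a hedged sketch of the general principle only, not the paper’s pipeline (which operates on rendered views and diffusion scores); the target `f`, the baseline `phi`, and the Gaussian sampling distribution are all illustrative choices. Stein’s identity states that for x ~ p and any smooth baseline φ, E[score(x)·φ(x) + φ′(x)] = 0, so this quantity can be subtracted from any unbiased estimator without changing its mean:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: estimate E[f(x)] for x ~ N(0, 1), with f(x) = x**2 + x.
# True value is 1 (E[x^2] = 1, E[x] = 0).
f = lambda x: x**2 + x

# Stein's identity: E[score(x) * phi(x) + phi'(x)] = 0 for x ~ p and smooth phi.
# For the standard normal, score(x) = -x. Choosing the baseline phi(x) = x
# yields the zero-mean control variate h(x) = -x * x + 1 = 1 - x**2.
h = lambda x: 1.0 - x**2

n = 100_000
x = rng.standard_normal(n)

plain = f(x)

# Optimal scaling c* = Cov(f, h) / Var(h), estimated from the same samples.
c = np.cov(plain, h(x))[0, 1] / np.var(h(x))

# Still unbiased, since E[h] = 0 by Stein's identity -- but lower variance.
corrected = plain - c * h(x)

print(f"plain:     mean={plain.mean():.3f}  std={plain.std():.3f}")
print(f"corrected: mean={corrected.mean():.3f}  std={corrected.std():.3f}")
```

Both estimators target the same expectation, but the corrected one has a markedly smaller sample standard deviation; SSD applies the same idea with learned or prior-driven baseline functions (e.g. a monocular depth estimator) in place of the hand-picked φ here.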

Cite this Paper


BibTeX
@InProceedings{pmlr-v258-wang25j,
  title     = {SteinDreamer: Variance Reduction for Text-to-3D Score Distillation via Stein Identity},
  author    = {Wang, Peihao and Fan, Zhiwen and Xu, Dejia and Wang, Dilin and Mohan, Sreyas and Iandola, Forrest and Ranjan, Rakesh and Li, Yilei and Liu, Qiang and Wang, Zhangyang and Chandra, Vikas},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  pages     = {4024--4032},
  year      = {2025},
  editor    = {Li, Yingzhen and Mandt, Stephan and Agrawal, Shipra and Khan, Emtiyaz},
  volume    = {258},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--05 May},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v258/main/assets/wang25j/wang25j.pdf},
  url       = {https://proceedings.mlr.press/v258/wang25j.html},
  abstract  = {Score distillation has emerged as one of the most prevalent approaches for text-to-3D asset synthesis. Essentially, score distillation updates 3D parameters by lifting and back-propagating scores averaged over different views. In this paper, we reveal that the gradient estimation in score distillation inherently suffers from high variance. Through the lens of variance reduction, the effectiveness of SDS and VSD can be interpreted as applications of various control variates to the Monte Carlo estimator of the distilled score. Motivated by this rethinking and based on Stein’s identity, we propose a more general solution to reduce variance for score distillation, termed \emph{Stein Score Distillation (SSD)}. SSD incorporates control variates constructed via Stein’s identity, allowing for arbitrary baseline functions. This enables us to include flexible guidance priors and network architectures to explicitly optimize for variance reduction. In our experiments, the overall pipeline, dubbed \emph{SteinDreamer}, is implemented by instantiating the control variate with a monocular depth estimator. The results show that SSD can effectively reduce the distillation variance and consistently improve visual quality for both object- and scene-level generation.}
}
Endnote
%0 Conference Paper
%T SteinDreamer: Variance Reduction for Text-to-3D Score Distillation via Stein Identity
%A Peihao Wang
%A Zhiwen Fan
%A Dejia Xu
%A Dilin Wang
%A Sreyas Mohan
%A Forrest Iandola
%A Rakesh Ranjan
%A Yilei Li
%A Qiang Liu
%A Zhangyang Wang
%A Vikas Chandra
%B Proceedings of The 28th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2025
%E Yingzhen Li
%E Stephan Mandt
%E Shipra Agrawal
%E Emtiyaz Khan
%F pmlr-v258-wang25j
%I PMLR
%P 4024--4032
%U https://proceedings.mlr.press/v258/wang25j.html
%V 258
%X Score distillation has emerged as one of the most prevalent approaches for text-to-3D asset synthesis. Essentially, score distillation updates 3D parameters by lifting and back-propagating scores averaged over different views. In this paper, we reveal that the gradient estimation in score distillation inherently suffers from high variance. Through the lens of variance reduction, the effectiveness of SDS and VSD can be interpreted as applications of various control variates to the Monte Carlo estimator of the distilled score. Motivated by this rethinking and based on Stein’s identity, we propose a more general solution to reduce variance for score distillation, termed Stein Score Distillation (SSD). SSD incorporates control variates constructed via Stein’s identity, allowing for arbitrary baseline functions. This enables us to include flexible guidance priors and network architectures to explicitly optimize for variance reduction. In our experiments, the overall pipeline, dubbed SteinDreamer, is implemented by instantiating the control variate with a monocular depth estimator. The results show that SSD can effectively reduce the distillation variance and consistently improve visual quality for both object- and scene-level generation.
APA
Wang, P., Fan, Z., Xu, D., Wang, D., Mohan, S., Iandola, F., Ranjan, R., Li, Y., Liu, Q., Wang, Z. & Chandra, V. (2025). SteinDreamer: Variance Reduction for Text-to-3D Score Distillation via Stein Identity. Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 258:4024-4032. Available from https://proceedings.mlr.press/v258/wang25j.html.