IM-3D: Iterative Multiview Diffusion and Reconstruction for High-Quality 3D Generation

Luke Melas-Kyriazi, Iro Laina, Christian Rupprecht, Natalia Neverova, Andrea Vedaldi, Oran Gafni, Filippos Kokkinos
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:35310-35323, 2024.

Abstract

Most text-to-3D generators build upon off-the-shelf text-to-image models trained on billions of images. They use variants of Score Distillation Sampling (SDS), which is slow, somewhat unstable, and prone to artifacts. A mitigation is to fine-tune the 2D generator to be multi-view aware, which can help distillation or can be combined with reconstruction networks to output 3D objects directly. In this paper, we further explore the design space of text-to-3D models. We significantly improve multi-view generation by considering video instead of image generators. Combined with a 3D reconstruction algorithm which, by using Gaussian splatting, can optimize a robust image-based loss, we directly produce high-quality 3D outputs from the generated views. Our new method, IM-3D, reduces the number of evaluations of the 2D generator network by 10-100$\times$, resulting in a much more efficient pipeline, better quality, fewer geometric inconsistencies, and higher yield of usable 3D assets.
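For readers unfamiliar with Score Distillation Sampling, the gradient used by DreamFusion-style distillation is commonly written as follows (standard notation from the literature, not taken from this paper):

    $\nabla_\theta \mathcal{L}_{\mathrm{SDS}} = \mathbb{E}_{t,\epsilon}\big[\, w(t)\, (\hat{\epsilon}_\phi(x_t;\, y, t) - \epsilon)\, \partial x / \partial \theta \,\big]$

where $x = g(\theta)$ is a view rendered from the 3D representation $\theta$, $x_t$ is its noised version at diffusion timestep $t$, $y$ is the text prompt, and $\hat{\epsilon}_\phi$ is the 2D denoiser's noise prediction. Every optimization step costs one evaluation of the 2D network, which is what makes SDS slow. Below is a minimal, hypothetical Python sketch of the alternative pipeline the abstract describes; all helper names (generate_multiview_video, fit_gaussian_splats, render, orbit_cameras) are placeholders for illustration, not the authors' released code:

    # Hypothetical sketch of the iterative pipeline from the abstract; every
    # helper below is a placeholder. Assumed components: a text-conditioned
    # multi-view (video) diffusion model and a Gaussian-splatting
    # reconstructor optimizing a robust image-space loss (e.g. LPIPS).
    def im3d(prompt: str, num_rounds: int = 2):
        views = generate_multiview_video(prompt)      # frames on a camera orbit
        for _ in range(num_rounds):
            # Fit 3D Gaussians to the current views; unlike SDS, this step
            # requires no evaluations of the diffusion network at all.
            splats = fit_gaussian_splats(views, loss="lpips")
            # Re-render from the same cameras and feed the renders back to
            # the generator to sharpen and re-align the views.
            renders = [render(splats, cam) for cam in orbit_cameras(len(views))]
            views = generate_multiview_video(prompt, init_frames=renders)
        return splats

This is why the abstract can claim 10-100$\times$ fewer 2D-network evaluations: the generator is run only for a handful of sampling passes rather than once per optimization step.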

Cite this Paper

BibTeX
@InProceedings{pmlr-v235-melas-kyriazi24a,
  title     = {{IM}-3{D}: Iterative Multiview Diffusion and Reconstruction for High-Quality 3{D} Generation},
  author    = {Melas-Kyriazi, Luke and Laina, Iro and Rupprecht, Christian and Neverova, Natalia and Vedaldi, Andrea and Gafni, Oran and Kokkinos, Filippos},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {35310--35323},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/melas-kyriazi24a/melas-kyriazi24a.pdf},
  url       = {https://proceedings.mlr.press/v235/melas-kyriazi24a.html}
}
Endnote
%0 Conference Paper
%T IM-3D: Iterative Multiview Diffusion and Reconstruction for High-Quality 3D Generation
%A Luke Melas-Kyriazi
%A Iro Laina
%A Christian Rupprecht
%A Natalia Neverova
%A Andrea Vedaldi
%A Oran Gafni
%A Filippos Kokkinos
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-melas-kyriazi24a
%I PMLR
%P 35310--35323
%U https://proceedings.mlr.press/v235/melas-kyriazi24a.html
%V 235
APA
Melas-Kyriazi, L., Laina, I., Rupprecht, C., Neverova, N., Vedaldi, A., Gafni, O. & Kokkinos, F. (2024). IM-3D: Iterative Multiview Diffusion and Reconstruction for High-Quality 3D Generation. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:35310-35323. Available from https://proceedings.mlr.press/v235/melas-kyriazi24a.html.
