From Surface to Viscera: 3D Estimation of Internal Anatomy from Body Surface Point Clouds
Proceedings of The 9th International Conference on Medical Imaging with Deep Learning, PMLR 315:2666-2681, 2026.
Abstract
Accurate pre-scan positioning in diagnostic imaging is essential for guiding acquisition and reducing manual calibration time, yet current automated approaches typically rely on dense volumetric representations that do not exploit the geometric structure or sparsity of surface representations. In this work, we introduce a sparse, point-cloud-based framework for estimating patient-specific 3D locations and shapes of multiple internal organs directly from the body surface. Our method leverages a new dual-encoder PointTransformer architecture: one encoder processes a mean-shape point cloud comprising 20 anatomical structures, while a second encoder extracts features from the patient's body-surface point cloud. A shared decoder then predicts a deformed shape that estimates the patient's hidden individual anatomy. This enables accurate organ localization without volumetric rasterization or autoencoder-style bottlenecks. Trained on the German National Cohort (NAKO) dataset, our model substantially outperforms volumetric convolutional autoencoder (CAE) baselines, achieving a mean Chamfer Distance below 5 mm and markedly lower surface-distance errors. These results demonstrate that sparse geometric learning with deformable point-cloud priors offers an efficient and highly effective alternative to dense convolutional deep-learning methods for automated imaging-workflow optimization.
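The dual-encoder idea described above can be sketched as follows. This is a minimal, hedged illustration only: the random-weight MLPs stand in for the paper's PointTransformer encoders, the point counts and feature dimensions are invented for the example, and the real model's fusion and decoding details are not specified in the abstract. What it shows is the data flow: per-point features from the mean-shape template are combined with a global code from the patient's surface cloud, and a shared decoder predicts per-point displacements that deform the template toward the patient-specific anatomy.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(dims):
    # Hypothetical random-weight MLP standing in for a learned encoder/decoder.
    ws = [rng.standard_normal((i, o)) * 0.1 for i, o in zip(dims, dims[1:])]
    def f(x):
        for w in ws[:-1]:
            x = np.tanh(x @ w)
        return x @ ws[-1]
    return f

# Two encoders: one for the mean-shape template cloud (the 20 anatomical
# structures in the paper), one for the patient's body-surface cloud.
enc_shape = mlp([3, 64, 128])
enc_surface = mlp([3, 64, 128])
decoder = mlp([256, 64, 3])  # shared decoder predicts per-point displacements

mean_shape = rng.standard_normal((2048, 3))    # template cloud (random stand-in)
body_surface = rng.standard_normal((4096, 3))  # patient surface (random stand-in)

shape_feat = enc_shape(mean_shape)                  # (2048, 128) per-point features
surf_feat = enc_surface(body_surface).mean(axis=0)  # (128,) global patient code
cond = np.concatenate(
    [shape_feat, np.broadcast_to(surf_feat, (2048, 128))], axis=1
)
# Deform the template toward the (hidden) patient-specific anatomy.
predicted_organs = mean_shape + decoder(cond)
print(predicted_organs.shape)  # (2048, 3)
```

Predicting a deformation of a shared template, rather than regressing raw coordinates, keeps the output anchored to a plausible anatomical prior and avoids the autoencoder-style bottleneck the abstract mentions.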
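The evaluation metric quoted above, Chamfer Distance, can be computed as below. This is a generic sketch of the standard symmetric variant (mean nearest-neighbour Euclidean distance in both directions); the abstract does not specify which variant (squared vs. unsquared, sum vs. mean) the paper uses.

```python
import numpy as np

def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Chamfer Distance between point clouds a (N, 3) and b (M, 3).

    Sums the mean nearest-neighbour Euclidean distance from a to b and
    from b to a. Brute-force O(N*M); real pipelines would use a k-d tree.
    """
    # Pairwise distance matrix of shape (N, M).
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return float(d.min(axis=1).mean() + d.min(axis=0).mean())

# Two single-point clouds 1 mm apart give a distance of 2.0 (1.0 each way).
print(chamfer_distance(np.array([[0.0, 0.0, 0.0]]),
                       np.array([[1.0, 0.0, 0.0]])))  # 2.0
```

Under this metric, a value below 5 mm means predicted organ surfaces sit, on average, within a few millimetres of their nearest ground-truth counterparts in both directions.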