Distributional Depth-Based Estimation of Object Articulation Models
Proceedings of the 5th Conference on Robot Learning, PMLR 164:1611-1621, 2022.
Abstract
We propose a method that efficiently learns distributions over articulation models directly from depth images, without needing to know articulation model categories a priori. By contrast, existing methods that learn articulation models from raw observations require objects to be textured, and most predict only point estimates of the model parameters. Our core contributions include a novel representation for distributions over rigid body transformations and articulation model parameters based on screw theory, von Mises-Fisher distributions, and Stiefel manifolds. Combining these concepts yields an efficient, mathematically sound representation that inherently satisfies several constraints that rigid body transformations and articulations must adhere to. In addition, we introduce a novel deep-learning-based approach, DUST-net, that efficiently learns such distributions and hence performs category-independent articulation model estimation while also providing model uncertainties. We evaluate our approach on two benchmarking datasets and three real-world objects and compare its performance with two current state-of-the-art methods. Our results demonstrate that DUST-net can successfully learn distributions over articulation models for novel objects across articulation model categories, whose point estimates are more accurate than those of state-of-the-art methods and which effectively capture the uncertainty over predicted model parameters arising from noisy inputs.
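To make the distributional representation concrete, the sketch below samples directions from a von Mises-Fisher (vMF) distribution on the unit sphere, the building block the abstract names for representing uncertainty over a screw axis's direction. This is a generic inverse-CDF sampler for the S² case (not DUST-net's actual implementation, whose details are in the paper); the function name and structure are illustrative assumptions, using only NumPy.

```python
import numpy as np

def sample_vmf_s2(mu, kappa, rng):
    """Draw one sample from a von Mises-Fisher distribution on the 2-sphere.

    mu:    mean direction (unit 3-vector), e.g. a nominal screw-axis direction
    kappa: concentration (> 0); larger kappa = less directional uncertainty
    """
    mu = np.asarray(mu, dtype=float)
    mu = mu / np.linalg.norm(mu)
    # Inverse-CDF sample of w = cos(angle to mu); on S^2 the marginal
    # density of w is proportional to exp(kappa * w) on [-1, 1].
    u = rng.random()
    w = 1.0 + np.log(u + (1.0 - u) * np.exp(-2.0 * kappa)) / kappa
    # Uniform direction in the plane orthogonal to the pole e3
    theta = 2.0 * np.pi * rng.random()
    v = np.array([np.cos(theta), np.sin(theta)])
    x = np.concatenate([np.sqrt(max(0.0, 1.0 - w * w)) * v, [w]])
    # Map the pole e3 onto mu with a Householder reflection; the vMF
    # density depends only on mu . x, so a reflection preserves the law.
    e3 = np.array([0.0, 0.0, 1.0])
    d = e3 + mu
    if np.linalg.norm(d) < 1e-12:      # mu is the antipode of e3
        return -x
    d = d / np.linalg.norm(d)
    return 2.0 * np.dot(d, x) * d - x  # reflection sending e3 to mu
```

With high concentration the samples cluster tightly around `mu`, which is how a predicted axis direction and its uncertainty can be reported jointly rather than as a single point estimate.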