Principal Variety Analysis
Proceedings of the 1st Annual Conference on Robot Learning, PMLR 78:97-108, 2017.
We introduce a novel computational framework, Principal Variety Analysis (PVA), primarily for nonlinear data modeling. PVA admits algebraic sets as the target subspace, thereby addressing limitations of other existing approaches. In this paper, the power of PVA is demonstrated on learning the kinematics of objects, an important application. PVA takes recorded coordinates of pre-specified features on the objects as input and outputs a lowest-dimensional variety on which the feature coordinates jointly lie. Unlike existing object-modeling methods, which require entire trajectories of objects, PVA requires much less information and provides more flexible and generalizable models, namely an analytical algebraic kinematic model of the objects, even in unstructured, uncertain environments. Moreover, it is not restricted to predetermined model templates and can extract much more general types of models. Besides finding the kinematic model of objects, PVA can be a powerful tool for estimating their degrees of freedom. PVA's computational success depends on exploiting sparsity, in particular on minimizing algebraic dimension by replacing the intractable $\ell_0$ norm (rank) with the tractable $\ell_1$ norm (nuclear norm). A complete characterization of the assumptions under which $\ell_0$ and $\ell_1$ norm minimizations yield virtually the same outcome is posed as an important open problem in this paper.
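The relaxation mentioned at the end of the abstract can be made concrete with a small numerical sketch: the rank of a matrix is the $\ell_0$ "norm" of its singular-value spectrum, while the nuclear norm is the $\ell_1$ norm of that same spectrum. The example below (an illustration assuming NumPy, not code from the paper; the matrix of "feature coordinates" is synthetic) shows both quantities on a low-rank data matrix.

```python
import numpy as np

# Illustrative only: build a rank-2 matrix standing in for recorded
# feature coordinates that jointly lie on a low-dimensional variety.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 6))  # 50 samples, 6 coords

s = np.linalg.svd(A, compute_uv=False)   # singular-value spectrum

rank = int(np.sum(s > 1e-8))             # l0 of the spectrum (intractable to minimize)
nuclear_norm = float(s.sum())            # l1 of the spectrum (convex, tractable surrogate)

print(rank)          # 2
print(nuclear_norm)  # > 0; minimized in place of rank in convex relaxations
```

Minimizing the rank directly is combinatorial, whereas the nuclear norm is convex; under suitable assumptions (whose complete characterization the paper poses as open) the two minimizations coincide.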