Implicit Geometric Regularization for Learning Shapes

Amos Gropp, Lior Yariv, Niv Haim, Matan Atzmon, Yaron Lipman
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:3789-3799, 2020.

Abstract

Representing shapes as level-sets of neural networks has recently proven useful for different shape analysis and reconstruction tasks. So far, such representations have been computed using either (i) pre-computed implicit shape representations or (ii) loss functions explicitly defined over the neural level-sets. In this paper we offer a new paradigm for computing high-fidelity implicit neural representations directly from raw data (i.e., point clouds, with or without normal information). We observe that a rather simple loss function, encouraging the neural network to vanish on the input point cloud and to have a unit-norm gradient, possesses an implicit geometric regularization property that favors smooth and natural zero level-set surfaces, avoiding bad zero-loss solutions. We provide a theoretical analysis of this property for the linear case, and show that, in practice, our method leads to state-of-the-art implicit neural representations with a higher level of detail and fidelity compared to previous methods.
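The loss the abstract describes has two ingredients: a data term driving the network to zero on the input points (optionally aligning its gradient with given normals), and an eikonal term pushing the gradient toward unit norm away from the surface. Below is a minimal PyTorch sketch of a loss of this form; the network architecture, sampling distribution, and the weights `lam` and `tau` are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ImplicitNet(nn.Module):
    """A simple MLP representing a scalar field f: R^3 -> R whose zero
    level-set is the learned surface (illustrative architecture)."""
    def __init__(self, dim=3, width=256, depth=4):
        super().__init__()
        layers, d = [], dim
        for _ in range(depth):
            layers += [nn.Linear(d, width), nn.Softplus(beta=100)]
            d = width
        layers += [nn.Linear(d, 1)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

def gradient(f, x):
    """Spatial gradient of the scalar field f at points x via autograd."""
    x = x.detach().requires_grad_(True)
    y = f(x)
    (g,) = torch.autograd.grad(y, x, grad_outputs=torch.ones_like(y),
                               create_graph=True)
    return g

def igr_style_loss(f, points, normals=None, lam=0.1, tau=1.0):
    # Data term: f should vanish on the input point cloud.
    loss = f(points).abs().mean()
    # Optional normal term: grad f should align with the given normals.
    if normals is not None:
        loss = loss + tau * (gradient(f, points) - normals).norm(2, dim=-1).mean()
    # Eikonal term on random points: encourage unit-norm gradients,
    # the implicit geometric regularizer described in the abstract.
    x = torch.rand_like(points) * 2 - 1  # uniform samples in [-1, 1]^3
    loss = loss + lam * ((gradient(f, x).norm(2, dim=-1) - 1) ** 2).mean()
    return loss

if __name__ == "__main__":
    f = ImplicitNet()
    pts = torch.rand(1024, 3) * 2 - 1  # stand-in for a raw point cloud
    loss = igr_style_loss(f, pts)
    loss.backward()
    print(float(loss))
```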

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-gropp20a,
  title     = {Implicit Geometric Regularization for Learning Shapes},
  author    = {Gropp, Amos and Yariv, Lior and Haim, Niv and Atzmon, Matan and Lipman, Yaron},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {3789--3799},
  year      = {2020},
  editor    = {Daumé III, Hal and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/gropp20a/gropp20a.pdf},
  url       = {https://proceedings.mlr.press/v119/gropp20a.html}
}
Endnote
%0 Conference Paper
%T Implicit Geometric Regularization for Learning Shapes
%A Amos Gropp
%A Lior Yariv
%A Niv Haim
%A Matan Atzmon
%A Yaron Lipman
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-gropp20a
%I PMLR
%P 3789--3799
%U https://proceedings.mlr.press/v119/gropp20a.html
%V 119
APA
Gropp, A., Yariv, L., Haim, N., Atzmon, M. & Lipman, Y. (2020). Implicit Geometric Regularization for Learning Shapes. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:3789-3799. Available from https://proceedings.mlr.press/v119/gropp20a.html.