Staying in shape: learning invariant shape representations using contrastive learning

Jeffrey Gu, Serena Yeung
Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence, PMLR 161:1852-1862, 2021.

Abstract

Creating representations of shapes that are invariant to isometric or almost-isometric transformations has long been an area of interest in shape analysis, since enforcing invariance allows the learning of more effective and robust shape representations. Most existing invariant shape representations are handcrafted, and previous work on learning shape representations does not focus on producing invariant representations. To solve the problem of learning unsupervised invariant shape representations, we use contrastive learning, which produces discriminative representations through learning invariance to user-specified data augmentations. To produce representations that are specifically isometry and almost-isometry invariant, we propose new data augmentations that randomly sample these transformations. We show experimentally that our method outperforms previous unsupervised learning approaches in both effectiveness and robustness.
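
The abstract describes the recipe only at a high level: create two views of a shape by randomly sampling an isometric (or almost-isometric) transformation, encode both views, and train with a contrastive objective so that the views agree. The sketch below is not the authors' implementation; it illustrates the idea on point clouds using a random rigid-motion augmentation, a small jitter as a stand-in for almost-isometries, a placeholder encoder, and an NT-Xent-style loss. All of these specifics are assumptions made for illustration.

# Minimal sketch (not the paper's code): isometric augmentations + contrastive loss.
# Assumed: shapes are (N, 3) point clouds; the encoder and hyperparameters are placeholders.
import numpy as np

def random_isometry(points, noise_scale=0.0, rng=None):
    """Apply a random rigid transformation (rotation + translation) to an (N, 3)
    point cloud; a small per-point jitter approximates an almost-isometry
    when noise_scale > 0."""
    rng = np.random.default_rng() if rng is None else rng
    # Random rotation via QR decomposition of a Gaussian matrix.
    q, r = np.linalg.qr(rng.normal(size=(3, 3)))
    q *= np.sign(np.diag(r))          # fix the sign convention of the factorization
    if np.linalg.det(q) < 0:          # ensure a proper rotation (det = +1)
        q[:, 0] *= -1
    t = rng.normal(scale=0.1, size=(1, 3))                 # random translation
    jitter = rng.normal(scale=noise_scale, size=points.shape)
    return points @ q.T + t + jitter

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss for two batches of embeddings z1, z2 of shape
    (B, D); matching rows are treated as positive pairs."""
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)       # cosine similarity
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)                          # exclude self-similarity
    b = z1.shape[0]
    pos = np.concatenate([np.arange(b, 2 * b), np.arange(0, b)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * b), pos].mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy "batch" of point-cloud shapes; real data would come from a shape dataset.
    shapes = [rng.normal(size=(1024, 3)) for _ in range(4)]
    views_a = [random_isometry(s, rng=rng) for s in shapes]                     # isometric views
    views_b = [random_isometry(s, noise_scale=0.01, rng=rng) for s in shapes]   # almost-isometric views

    # Placeholder "encoder" (mean + flattened covariance); a learned shape
    # encoder would be used in practice.
    def encode(p):
        return np.concatenate([p.mean(axis=0), np.cov(p.T).ravel()])

    z_a = np.stack([encode(v) for v in views_a])
    z_b = np.stack([encode(v) for v in views_b])
    print("NT-Xent loss on this batch:", nt_xent_loss(z_a, z_b))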

Cite this Paper


BibTeX
@InProceedings{pmlr-v161-gu21a,
  title = {Staying in shape: learning invariant shape representations using contrastive learning},
  author = {Gu, Jeffrey and Yeung, Serena},
  booktitle = {Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence},
  pages = {1852--1862},
  year = {2021},
  editor = {de Campos, Cassio and Maathuis, Marloes H.},
  volume = {161},
  series = {Proceedings of Machine Learning Research},
  month = {27--30 Jul},
  publisher = {PMLR},
  pdf = {https://proceedings.mlr.press/v161/gu21a/gu21a.pdf},
  url = {https://proceedings.mlr.press/v161/gu21a.html},
  abstract = {Creating representations of shapes that are invariant to isometric or almost-isometric transformations has long been an area of interest in shape analysis, since enforcing invariance allows the learning of more effective and robust shape representations. Most existing invariant shape representations are handcrafted, and previous work on learning shape representations does not focus on producing invariant representations. To solve the problem of learning unsupervised invariant shape representations, we use contrastive learning, which produces discriminative representations through learning invariance to user-specified data augmentations. To produce representations that are specifically isometry and almost-isometry invariant, we propose new data augmentations that randomly sample these transformations. We show experimentally that our method outperforms previous unsupervised learning approaches in both effectiveness and robustness.}
}
Endnote
%0 Conference Paper
%T Staying in shape: learning invariant shape representations using contrastive learning
%A Jeffrey Gu
%A Serena Yeung
%B Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2021
%E Cassio de Campos
%E Marloes H. Maathuis
%F pmlr-v161-gu21a
%I PMLR
%P 1852--1862
%U https://proceedings.mlr.press/v161/gu21a.html
%V 161
%X Creating representations of shapes that are invariant to isometric or almost-isometric transformations has long been an area of interest in shape analysis, since enforcing invariance allows the learning of more effective and robust shape representations. Most existing invariant shape representations are handcrafted, and previous work on learning shape representations does not focus on producing invariant representations. To solve the problem of learning unsupervised invariant shape representations, we use contrastive learning, which produces discriminative representations through learning invariance to user-specified data augmentations. To produce representations that are specifically isometry and almost-isometry invariant, we propose new data augmentations that randomly sample these transformations. We show experimentally that our method outperforms previous unsupervised learning approaches in both effectiveness and robustness.
APA
Gu, J. & Yeung, S. (2021). Staying in shape: learning invariant shape representations using contrastive learning. Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 161:1852-1862. Available from https://proceedings.mlr.press/v161/gu21a.html.