Contrastive Representation Learning for Gaze Estimation

Swati Jindal, Roberto Manduchi
Proceedings of The 1st Gaze Meets ML workshop, PMLR 210:37-49, 2023.

Abstract

Self-supervised learning (SSL) has become prevalent for learning representations in computer vision. Notably, SSL exploits contrastive learning to encourage visual representations to be invariant under various image transformations. The task of gaze estimation, on the other hand, demands not just invariance to various appearances but also equivariance to the geometric transformations. In this work, we propose a simple contrastive representation learning framework for gaze estimation, named Gaze Contrastive Learning (GazeCLR). GazeCLR exploits multi-view data to promote equivariance and relies on selected data augmentation techniques that do not alter gaze directions for invariance learning. Our experiments demonstrate the effectiveness of GazeCLR for several settings of the gaze estimation task. Particularly, our results show that GazeCLR improves the performance of cross-domain gaze estimation and yields as high as 17.2% relative improvement. Moreover, the GazeCLR framework is competitive with state-of-the-art representation learning methods for few-shot evaluation. The code and pre-trained models are available at https://github.com/jswati31/gazeclr.
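To illustrate the kind of contrastive objective the abstract refers to (this is a generic SimCLR-style NT-Xent loss for invariance learning, not the authors' actual GazeCLR implementation; the function name and the `temperature` default are illustrative), here is a minimal NumPy sketch. Matched rows of the two view batches are positives; every other row in the combined batch acts as a negative.

```python
import numpy as np

def nt_xent_loss(z_a, z_b, temperature=0.1):
    """NT-Xent (normalized temperature-scaled cross-entropy) loss.

    z_a, z_b: (n, d) arrays of embeddings for two views of the same
    n samples; row i of z_a and row i of z_b form a positive pair.
    """
    # L2-normalize so the dot product is cosine similarity.
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    n = z_a.shape[0]
    z = np.concatenate([z_a, z_b], axis=0)      # (2n, d)
    sim = (z @ z.T) / temperature               # pairwise similarities
    np.fill_diagonal(sim, -np.inf)              # exclude self-similarity
    # The positive for row i is row (i + n) mod 2n.
    pos_idx = np.concatenate([np.arange(n) + n, np.arange(n)])
    # Softmax cross-entropy toward each row's positive.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos_idx].mean()
```

Minimizing this loss pulls embeddings of the two views of each sample together while pushing apart embeddings of different samples; restricting the views to augmentations that do not alter gaze direction is what makes the learned invariance appropriate for gaze estimation.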

Cite this Paper


BibTeX
@InProceedings{pmlr-v210-jindal23a,
  title     = {Contrastive Representation Learning for Gaze Estimation},
  author    = {Jindal, Swati and Manduchi, Roberto},
  booktitle = {Proceedings of The 1st Gaze Meets ML workshop},
  pages     = {37--49},
  year      = {2023},
  editor    = {Lourentzou, Ismini and Wu, Joy and Kashyap, Satyananda and Karargyris, Alexandros and Celi, Leo Anthony and Kawas, Ban and Talathi, Sachin},
  volume    = {210},
  series    = {Proceedings of Machine Learning Research},
  month     = {03 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v210/jindal23a/jindal23a.pdf},
  url       = {https://proceedings.mlr.press/v210/jindal23a.html},
  abstract  = {Self-supervised learning (SSL) has become prevalent for learning representations in computer vision. Notably, SSL exploits contrastive learning to encourage visual representations to be invariant under various image transformations. The task of gaze estimation, on the other hand, demands not just invariance to various appearances but also equivariance to the geometric transformations. In this work, we propose a simple contrastive representation learning framework for gaze estimation, named Gaze Contrastive Learning (GazeCLR). GazeCLR exploits multi-view data to promote equivariance and relies on selected data augmentation techniques that do not alter gaze directions for invariance learning. Our experiments demonstrate the effectiveness of GazeCLR for several settings of the gaze estimation task. Particularly, our results show that GazeCLR improves the performance of cross-domain gaze estimation and yields as high as 17.2% relative improvement. Moreover, the GazeCLR framework is competitive with state-of-the-art representation learning methods for few-shot evaluation. The code and pre-trained models are available at https://github.com/jswati31/gazeclr.}
}
Endnote
%0 Conference Paper
%T Contrastive Representation Learning for Gaze Estimation
%A Swati Jindal
%A Roberto Manduchi
%B Proceedings of The 1st Gaze Meets ML workshop
%C Proceedings of Machine Learning Research
%D 2023
%E Ismini Lourentzou
%E Joy Wu
%E Satyananda Kashyap
%E Alexandros Karargyris
%E Leo Anthony Celi
%E Ban Kawas
%E Sachin Talathi
%F pmlr-v210-jindal23a
%I PMLR
%P 37--49
%U https://proceedings.mlr.press/v210/jindal23a.html
%V 210
%X Self-supervised learning (SSL) has become prevalent for learning representations in computer vision. Notably, SSL exploits contrastive learning to encourage visual representations to be invariant under various image transformations. The task of gaze estimation, on the other hand, demands not just invariance to various appearances but also equivariance to the geometric transformations. In this work, we propose a simple contrastive representation learning framework for gaze estimation, named Gaze Contrastive Learning (GazeCLR). GazeCLR exploits multi-view data to promote equivariance and relies on selected data augmentation techniques that do not alter gaze directions for invariance learning. Our experiments demonstrate the effectiveness of GazeCLR for several settings of the gaze estimation task. Particularly, our results show that GazeCLR improves the performance of cross-domain gaze estimation and yields as high as 17.2% relative improvement. Moreover, the GazeCLR framework is competitive with state-of-the-art representation learning methods for few-shot evaluation. The code and pre-trained models are available at https://github.com/jswati31/gazeclr.
APA
Jindal, S. & Manduchi, R. (2023). Contrastive Representation Learning for Gaze Estimation. Proceedings of The 1st Gaze Meets ML workshop, in Proceedings of Machine Learning Research 210:37-49. Available from https://proceedings.mlr.press/v210/jindal23a.html.