EG-SIF: Improving Appearance Based Gaze Estimation using Self Improving Features

Vasudev Singh, Chaitanya Langde, Sourav Lakotia, Vignesh Kannan, Shuaib Ahmed
Proceedings of The 2nd Gaze Meets ML workshop, PMLR 226:219-235, 2024.

Abstract

Accurate gaze estimation is integral to a myriad of applications, from augmented reality to non-verbal communication analysis. However, the performance of gaze estimation models is often compromised by adverse conditions such as poor lighting, artifacts, and low-resolution imagery. To counter these challenges, we introduce the eye gaze estimation with self-improving features (EG-SIF) method, a novel approach that enhances model robustness and performance in suboptimal conditions. The EG-SIF method innovatively segregates eye images by quality, synthesizing pairs of high-quality and corresponding degraded images. It leverages a multitask training paradigm that emphasizes image enhancement through reconstruction from impaired versions. This strategy is not only pioneering in the realm of data segregation based on image quality but also introduces a transformative multitask framework that integrates image enhancement as an auxiliary task. We implement adaptive binning and mixed regression with intermediate supervision to further refine the capability of our model. Empirical evidence demonstrates that our EG-SIF method significantly reduces the angular error in gaze estimation on challenging datasets such as MPIIGaze, improving from 4.64° to 4.53°, and on RTGene, from 7.44° to 7.41°, thereby setting a new benchmark in the field. Our contributions lay the foundation for future eye appearance-based gaze estimation models that can operate reliably despite the presence of image quality adversities.
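The abstract describes two core ideas: synthesizing degraded counterparts of high-quality eye images, and training with a multitask objective that adds image reconstruction as an auxiliary task alongside gaze regression. The paper page includes no code, so the sketch below is purely illustrative: the function names (`degrade`, `multitask_loss`), the specific degradations (downsampling plus Gaussian noise), and the loss weighting are assumptions, not the authors' implementation.

```python
import numpy as np


def degrade(image, scale=0.5, noise_std=0.05, rng=None):
    """Synthesize a low-quality counterpart of a clean eye image
    (illustrative stand-in for EG-SIF's degraded-pair synthesis):
    nearest-neighbour down/up-sampling to lose resolution, then
    additive Gaussian noise. `image` is a 2-D float array in [0, 1]."""
    if rng is None:
        rng = np.random.default_rng(0)
    h, w = image.shape
    sh, sw = max(1, int(h * scale)), max(1, int(w * scale))
    # Downsample by index selection (avoids an image-library dependency).
    ys = (np.arange(sh) / scale).astype(int).clip(0, h - 1)
    xs = (np.arange(sw) / scale).astype(int).clip(0, w - 1)
    small = image[np.ix_(ys, xs)]
    # Upsample back to the original size, replicating pixels.
    ys2 = (np.arange(h) * scale).astype(int).clip(0, sh - 1)
    xs2 = (np.arange(w) * scale).astype(int).clip(0, sw - 1)
    up = small[np.ix_(ys2, xs2)]
    return np.clip(up + rng.normal(0.0, noise_std, up.shape), 0.0, 1.0)


def multitask_loss(gaze_pred, gaze_true, recon, clean, weight=0.5):
    """Combined objective: gaze regression error plus a weighted
    auxiliary reconstruction error, as in a multitask setup where
    enhancement (reconstructing `clean` from the degraded input)
    supervises the shared features. `weight` is a hypothetical knob."""
    gaze_loss = np.mean((gaze_pred - gaze_true) ** 2)
    recon_loss = np.mean((recon - clean) ** 2)
    return gaze_loss + weight * recon_loss
```

In a real pipeline the reconstruction would come from a decoder head on the shared backbone; here the degraded image itself stands in for the reconstruction purely to show the loss arithmetic.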

Cite this Paper


BibTeX
@InProceedings{pmlr-v226-singh24a,
  title     = {EG-SIF: Improving Appearance Based Gaze Estimation using Self Improving Features},
  author    = {Singh, Vasudev and Langde, Chaitanya and Lakotia, Sourav and Kannan, Vignesh and Ahmed, Shuaib},
  booktitle = {Proceedings of The 2nd Gaze Meets ML workshop},
  pages     = {219--235},
  year      = {2024},
  editor    = {Madu Blessing, Amarachi and Wu, Joy and Zario, Danca and Krupinski, Elizabeth and Kashyap, Satyananda and Karargyris, Alexandros},
  volume    = {226},
  series    = {Proceedings of Machine Learning Research},
  month     = {16 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v226/singh24a/singh24a.pdf},
  url       = {https://proceedings.mlr.press/v226/singh24a.html},
  abstract  = {Accurate gaze estimation is integral to a myriad of applications, from augmented reality to non-verbal communication analysis. However, the performance of gaze estimation models is often compromised by adverse conditions such as poor lighting, artifacts, and low-resolution imagery. To counter these challenges, we introduce the eye gaze estimation with self-improving features (EG-SIF) method, a novel approach that enhances model robustness and performance in suboptimal conditions. The EG-SIF method innovatively segregates eye images by quality, synthesizing pairs of high-quality and corresponding degraded images. It leverages a multitask training paradigm that emphasizes image enhancement through reconstruction from impaired versions. This strategy is not only pioneering in the realm of data segregation based on image quality but also introduces a transformative multitask framework that integrates image enhancement as an auxiliary task. We implement adaptive binning and mixed regression with intermediate supervision to further refine the capability of our model. Empirical evidence demonstrates that our EG-SIF method significantly reduces the angular error in gaze estimation on challenging datasets such as MPIIGaze, improving from 4.64° to 4.53°, and on RTGene, from 7.44° to 7.41°, thereby setting a new benchmark in the field. Our contributions lay the foundation for future eye appearance-based gaze estimation models that can operate reliably despite the presence of image quality adversities.}
}
Endnote
%0 Conference Paper
%T EG-SIF: Improving Appearance Based Gaze Estimation using Self Improving Features
%A Vasudev Singh
%A Chaitanya Langde
%A Sourav Lakotia
%A Vignesh Kannan
%A Shuaib Ahmed
%B Proceedings of The 2nd Gaze Meets ML workshop
%C Proceedings of Machine Learning Research
%D 2024
%E Amarachi Madu Blessing
%E Joy Wu
%E Danca Zario
%E Elizabeth Krupinski
%E Satyananda Kashyap
%E Alexandros Karargyris
%F pmlr-v226-singh24a
%I PMLR
%P 219--235
%U https://proceedings.mlr.press/v226/singh24a.html
%V 226
%X Accurate gaze estimation is integral to a myriad of applications, from augmented reality to non-verbal communication analysis. However, the performance of gaze estimation models is often compromised by adverse conditions such as poor lighting, artifacts, and low-resolution imagery. To counter these challenges, we introduce the eye gaze estimation with self-improving features (EG-SIF) method, a novel approach that enhances model robustness and performance in suboptimal conditions. The EG-SIF method innovatively segregates eye images by quality, synthesizing pairs of high-quality and corresponding degraded images. It leverages a multitask training paradigm that emphasizes image enhancement through reconstruction from impaired versions. This strategy is not only pioneering in the realm of data segregation based on image quality but also introduces a transformative multitask framework that integrates image enhancement as an auxiliary task. We implement adaptive binning and mixed regression with intermediate supervision to further refine the capability of our model. Empirical evidence demonstrates that our EG-SIF method significantly reduces the angular error in gaze estimation on challenging datasets such as MPIIGaze, improving from 4.64° to 4.53°, and on RTGene, from 7.44° to 7.41°, thereby setting a new benchmark in the field. Our contributions lay the foundation for future eye appearance-based gaze estimation models that can operate reliably despite the presence of image quality adversities.
APA
Singh, V., Langde, C., Lakotia, S., Kannan, V. & Ahmed, S. (2024). EG-SIF: Improving Appearance Based Gaze Estimation using Self Improving Features. Proceedings of The 2nd Gaze Meets ML workshop, in Proceedings of Machine Learning Research 226:219-235. Available from https://proceedings.mlr.press/v226/singh24a.html.