Modeling Drivers’ Situational Awareness from Eye Gaze for Driving Assistance

Abhijat Biswas, Pranay Gupta, Shreeya Khurana, David Held, Henny Admoni
Proceedings of The 8th Conference on Robot Learning, PMLR 270:3551-3567, 2025.

Abstract

Intelligent driving assistance can alert drivers to objects in their environment; however, such systems require a model of drivers’ situational awareness (SA), that is, which aspects of the scene they are already aware of, to avoid unnecessary alerts. Moreover, collecting the data to train such an SA model is challenging: being an internal human cognitive state, driver SA is difficult to measure, and non-verbal signals such as eye gaze are among its only outward manifestations. Traditional methods to obtain SA labels rely on probes that yield sparse, intermittent SA labels unsuitable for modeling a dense, temporally correlated process via machine learning. We propose a novel interactive labeling protocol that captures dense, continuous SA labels and use it to collect an object-level SA dataset in a VR driving simulator. Our dataset comprises 20 unique drivers’ SA labels, driving data, and gaze (over 320 minutes of driving), and it will be made public. Additionally, we train an SA model from this data, formulating the object-level driver SA prediction problem as a semantic segmentation problem. Our formulation allows all objects in a scene at a timestep to be processed simultaneously, leveraging global scene context and local gaze-object relationships together. Our experiments show that this formulation improves performance over common-sense baselines and prior art on the SA prediction task.
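The paper's exact architecture is not described on this page, so the following is only a rough PyTorch sketch of the general formulation the abstract describes: treat object-level SA prediction like semantic segmentation by feeding the scene image together with a gaze heatmap through an encoder-decoder, then pool the resulting per-pixel awareness map over each object's mask. All class names, shapes, and layer choices here are assumptions for illustration, not the authors' model.

```python
import torch
import torch.nn as nn

class SASegmentationNet(nn.Module):
    """Hypothetical encoder-decoder mapping a scene image plus a gaze
    heatmap to a per-pixel driver-awareness map (not the paper's model)."""

    def __init__(self, in_channels: int = 4):  # assumed input: 3 RGB + 1 gaze channel
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),  # awareness logits
        )

    def forward(self, rgb, gaze_heatmap):
        # Concatenating the gaze heatmap as an extra channel lets the network
        # combine global scene context with local gaze-object relationships.
        x = torch.cat([rgb, gaze_heatmap], dim=1)
        return self.decoder(self.encoder(x))


def object_awareness(logits, object_masks):
    """Pool per-pixel awareness logits into one SA score per object mask,
    so all objects in the scene are scored from a single forward pass."""
    probs = torch.sigmoid(logits)          # (B, 1, H, W)
    masks = object_masks.float()           # (B, K, H, W), one binary mask per object
    num = (probs * masks).sum(dim=(-2, -1))
    den = masks.sum(dim=(-2, -1)).clamp(min=1.0)
    return num / den                       # (B, K) awareness scores in [0, 1]
```

In the paper's actual setting, the model presumably also consumes gaze history over time rather than a single heatmap; this sketch only conveys the shared-scene, per-object-pooling idea behind the segmentation-style formulation.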

Cite this Paper


BibTeX
@InProceedings{pmlr-v270-biswas25a,
  title     = {Modeling Drivers’ Situational Awareness from Eye Gaze for Driving Assistance},
  author    = {Biswas, Abhijat and Gupta, Pranay and Khurana, Shreeya and Held, David and Admoni, Henny},
  booktitle = {Proceedings of The 8th Conference on Robot Learning},
  pages     = {3551--3567},
  year      = {2025},
  editor    = {Agrawal, Pulkit and Kroemer, Oliver and Burgard, Wolfram},
  volume    = {270},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--09 Nov},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v270/main/assets/biswas25a/biswas25a.pdf},
  url       = {https://proceedings.mlr.press/v270/biswas25a.html},
  abstract  = {Intelligent driving assistance can alert drivers to objects in their environment; however, such systems require a model of drivers’ situational awareness (SA) (what aspects of the scene they are already aware of) to avoid unnecessary alerts. Moreover, collecting the data to train such an SA model is challenging: being an internal human cognitive state, driver SA is difficult to measure, and non-verbal signals such as eye gaze are some of the only outward manifestations of it. Traditional methods to obtain SA labels rely on probes that result in sparse, intermittent SA labels unsuitable for modeling a dense, temporally correlated process via machine learning. We propose a novel interactive labeling protocol that captures dense, continuous SA labels and use it to collect an object-level SA dataset in a VR driving simulator. Our dataset comprises 20 unique drivers’ SA labels, driving data, and gaze (over 320 minutes of driving) which will be made public. Additionally, we train an SA model from this data, formulating the object-level driver SA prediction problem as a semantic segmentation problem. Our formulation allows all objects in a scene at a timestep to be processed simultaneously, leveraging global scene context and local gaze-object relationships together. Our experiments show that this formulation leads to improved performance over common sense baselines and prior art on the SA prediction task.}
}
Endnote
%0 Conference Paper
%T Modeling Drivers’ Situational Awareness from Eye Gaze for Driving Assistance
%A Abhijat Biswas
%A Pranay Gupta
%A Shreeya Khurana
%A David Held
%A Henny Admoni
%B Proceedings of The 8th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Pulkit Agrawal
%E Oliver Kroemer
%E Wolfram Burgard
%F pmlr-v270-biswas25a
%I PMLR
%P 3551--3567
%U https://proceedings.mlr.press/v270/biswas25a.html
%V 270
%X Intelligent driving assistance can alert drivers to objects in their environment; however, such systems require a model of drivers’ situational awareness (SA) (what aspects of the scene they are already aware of) to avoid unnecessary alerts. Moreover, collecting the data to train such an SA model is challenging: being an internal human cognitive state, driver SA is difficult to measure, and non-verbal signals such as eye gaze are some of the only outward manifestations of it. Traditional methods to obtain SA labels rely on probes that result in sparse, intermittent SA labels unsuitable for modeling a dense, temporally correlated process via machine learning. We propose a novel interactive labeling protocol that captures dense, continuous SA labels and use it to collect an object-level SA dataset in a VR driving simulator. Our dataset comprises 20 unique drivers’ SA labels, driving data, and gaze (over 320 minutes of driving) which will be made public. Additionally, we train an SA model from this data, formulating the object-level driver SA prediction problem as a semantic segmentation problem. Our formulation allows all objects in a scene at a timestep to be processed simultaneously, leveraging global scene context and local gaze-object relationships together. Our experiments show that this formulation leads to improved performance over common sense baselines and prior art on the SA prediction task.
APA
Biswas, A., Gupta, P., Khurana, S., Held, D., & Admoni, H. (2025). Modeling Drivers’ Situational Awareness from Eye Gaze for Driving Assistance. Proceedings of The 8th Conference on Robot Learning, in Proceedings of Machine Learning Research 270:3551-3567. Available from https://proceedings.mlr.press/v270/biswas25a.html.
