3D Point Cloud-Based Visual Prediction of ICU Mobility Care Activities

Bingbin Liu, Michelle Guo, Edward Chou, Rishab Mehra, Serena Yeung, N. Lance Downing, Francesca Salipur, Jeffrey Jopling, Brandi Campbell, Kayla Deru, William Beninati, Arnold Milstein, Li Fei-Fei
Proceedings of the 3rd Machine Learning for Healthcare Conference, PMLR 85:17-29, 2018.

Abstract

Intensive Care Units (ICUs) are some of the highest intensity areas of patient care activities in hospitals, yet documentation and understanding of the occurrence of these activities remain sub-optimal, due in part to the already-demanding patient care workloads of nursing staff. Recently, computer vision-based methods operating over color and depth data collected from passively mounted sensors have been developed for automated activity recognition, but have been limited to coarse or simple activities due to the complex environments in ICUs, where fast-changing activities and severe occlusion occur. In this work, we introduce an approach for tackling more challenging activities in ICUs by combining depth data from multiple sensors to form a single 3D point cloud representation, and using a neural network-based model to reason over this 3D representation. We demonstrate the effectiveness of this approach using a dataset of mobility-related patient care activities collected in a clinician-guided simulation setting.
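The fusion step the abstract describes, combining depth data from multiple sensors into a single 3D point cloud, can be sketched with standard pinhole back-projection and per-sensor extrinsic transforms. This is a minimal illustrative implementation, not the authors' code; the function names, intrinsic parameters `(fx, fy, cx, cy)`, and the camera-to-world matrices are assumptions for the sake of the example.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) to 3D points in the camera frame.

    Uses the pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop invalid (zero-depth) pixels

def merge_point_clouds(depth_maps, intrinsics, extrinsics):
    """Fuse per-sensor depth maps into one world-frame point cloud.

    intrinsics[i] is (fx, fy, cx, cy) for sensor i;
    extrinsics[i] is a 4x4 camera-to-world transform for sensor i.
    """
    clouds = []
    for depth, (fx, fy, cx, cy), T in zip(depth_maps, intrinsics, extrinsics):
        pts = depth_to_points(depth, fx, fy, cx, cy)
        pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coords
        clouds.append((pts_h @ T.T)[:, :3])  # transform into the shared frame
    return np.vstack(clouds)
```

In practice each mounted sensor would have calibrated intrinsics and extrinsics, so the per-sensor clouds land in one shared coordinate system before being fed to the downstream model.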

Cite this Paper


BibTeX
@InProceedings{pmlr-v85-liu18a,
  title     = {3D Point Cloud-Based Visual Prediction of ICU Mobility Care Activities},
  author    = {Liu, Bingbin and Guo, Michelle and Chou, Edward and Mehra, Rishab and Yeung, Serena and Downing, N. Lance and Salipur, Francesca and Jopling, Jeffrey and Campbell, Brandi and Deru, Kayla and Beninati, William and Milstein, Arnold and Fei-Fei, Li},
  booktitle = {Proceedings of the 3rd Machine Learning for Healthcare Conference},
  pages     = {17--29},
  year      = {2018},
  editor    = {Doshi-Velez, Finale and Fackler, Jim and Jung, Ken and Kale, David and Ranganath, Rajesh and Wallace, Byron and Wiens, Jenna},
  volume    = {85},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--18 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v85/liu18a/liu18a.pdf},
  url       = {https://proceedings.mlr.press/v85/liu18a.html},
  abstract  = {Intensive Care Units (ICUs) are some of the highest intensity areas of patient care activities in hospitals, yet documentation and understanding of the occurrence of these activities remains sub-optimal due in part to already-demanding patient care workloads of nursing staff. Recently, computer vision based methods operating over color and depth data collected from passive mounted sensors have been developed for automated activity recognition, but have been limited to coarse or simple activities due to the complex environments in ICUs, where fast-changing activities and severe occlusion occurs. In this work, we introduce an approach for tackling more challenging activities in ICUs by combining depth data from multiple sensors to form a single 3D point cloud representation, and using a neural network-based model to reason over this 3D representation. We demonstrate the effectiveness of this approach using a dataset of mobility-related patient care activities collected in a clinician-guided simulation setting.}
}
Endnote
%0 Conference Paper
%T 3D Point Cloud-Based Visual Prediction of ICU Mobility Care Activities
%A Bingbin Liu
%A Michelle Guo
%A Edward Chou
%A Rishab Mehra
%A Serena Yeung
%A N. Lance Downing
%A Francesca Salipur
%A Jeffrey Jopling
%A Brandi Campbell
%A Kayla Deru
%A William Beninati
%A Arnold Milstein
%A Li Fei-Fei
%B Proceedings of the 3rd Machine Learning for Healthcare Conference
%C Proceedings of Machine Learning Research
%D 2018
%E Finale Doshi-Velez
%E Jim Fackler
%E Ken Jung
%E David Kale
%E Rajesh Ranganath
%E Byron Wallace
%E Jenna Wiens
%F pmlr-v85-liu18a
%I PMLR
%P 17--29
%U https://proceedings.mlr.press/v85/liu18a.html
%V 85
%X Intensive Care Units (ICUs) are some of the highest intensity areas of patient care activities in hospitals, yet documentation and understanding of the occurrence of these activities remains sub-optimal due in part to already-demanding patient care workloads of nursing staff. Recently, computer vision based methods operating over color and depth data collected from passive mounted sensors have been developed for automated activity recognition, but have been limited to coarse or simple activities due to the complex environments in ICUs, where fast-changing activities and severe occlusion occurs. In this work, we introduce an approach for tackling more challenging activities in ICUs by combining depth data from multiple sensors to form a single 3D point cloud representation, and using a neural network-based model to reason over this 3D representation. We demonstrate the effectiveness of this approach using a dataset of mobility-related patient care activities collected in a clinician-guided simulation setting.
APA
Liu, B., Guo, M., Chou, E., Mehra, R., Yeung, S., Downing, N. L., Salipur, F., Jopling, J., Campbell, B., Deru, K., Beninati, W., Milstein, A., & Fei-Fei, L. (2018). 3D Point Cloud-Based Visual Prediction of ICU Mobility Care Activities. Proceedings of the 3rd Machine Learning for Healthcare Conference, in Proceedings of Machine Learning Research, 85:17-29. Available from https://proceedings.mlr.press/v85/liu18a.html.