A Machine Learning Approach for Predicting Upper Limb Motion Intentions with Multimodal Data

Pavan Uttej Ravva, Pinar Kullu, Mohammad Fahim Abrar, Roghayeh Leila Barmaki
Proceedings of the fifth Conference on Health, Inference, and Learning, PMLR 248:169-181, 2024.

Abstract

Over the last decade, there has been significant progress in the field of interactive virtual rehabilitation. Physical therapy (PT) is a highly effective approach for treating physical impairments, but patient motivation and progress tracking in rehabilitation outcomes remain challenging. This work addresses that gap with a machine learning-based approach to objectively measure the outcomes of an upper limb virtual therapy system in a user study with non-clinical participants. In the study, participants performed several tracing tasks in virtual reality while motion and movement data were collected with a KinArm robot and a custom-made wearable sleeve sensor. We introduce a two-step machine learning architecture to predict participants' motion intentions. The first step uses gaze data to predict the reaching task segments to which participant-marked points belong, while the second step employs a Long Short-Term Memory (LSTM) model to predict directional movements from resistance change values recorded by the wearable sensor and the KinArm. We specifically propose transposing the raw resistance data into the time domain, which improves model accuracy by 34.6%. To evaluate the effectiveness of our model, we compared different classification techniques under various data configurations. The results show that our proposed method predicts participants' actions with accuracies of 96.72% on the diamond reaching task and 97.44% on the circle reaching task, demonstrating the promise of multimodal data, including eye tracking and resistance change, for objectively measuring performance and intention in virtual rehabilitation settings.
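The paper's two-step pipeline is only described at a high level in the abstract, and no code is provided on this page. As an illustration of the second step, the sketch below (Python with TensorFlow/Keras, not the authors' code) shows one plausible way to window raw resistance-change readings into time-domain sequences and train an LSTM to classify directional movements. All names, shapes, window sizes, and hyperparameters here (make_windows, window_size, n_directions, the 6-channel stream) are assumptions for illustration.

import numpy as np
import tensorflow as tf

def make_windows(signal, labels, window_size=50, step=10):
    """Slice a (timesteps, channels) resistance-change stream into
    overlapping windows, labelling each window by its last sample."""
    X, y = [], []
    for start in range(0, len(signal) - window_size + 1, step):
        X.append(signal[start:start + window_size])
        y.append(labels[start + window_size - 1])
    return np.asarray(X), np.asarray(y)

# Illustrative placeholders: `stream` stands in for per-sample resistance
# changes from the sleeve sensor plus KinArm channels; `direction` holds
# integer movement labels (e.g., 0-3 for up/down/left/right).
stream = np.random.randn(5000, 6).astype("float32")
direction = np.random.randint(0, 4, size=5000)

X, y = make_windows(stream, direction)

n_directions = 4
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=X.shape[1:]),      # (window_size, channels)
    tf.keras.layers.LSTM(64),                      # temporal encoder
    tf.keras.layers.Dense(n_directions, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2)

A sliding window like this is one common way to "transpose" a flat stream of sensor readings into the (samples, timesteps, channels) layout an LSTM expects, which may be what underlies the reported accuracy gain from the time-domain representation.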

Cite this Paper

BibTeX
@InProceedings{pmlr-v248-ravva24a,
  title     = {A Machine Learning Approach for Predicting Upper Limb Motion Intentions with Multimodal Data},
  author    = {Ravva, Pavan Uttej and Kullu, Pinar and Abrar, Mohammad Fahim and Barmaki, Roghayeh Leila},
  booktitle = {Proceedings of the fifth Conference on Health, Inference, and Learning},
  pages     = {169--181},
  year      = {2024},
  editor    = {Pollard, Tom and Choi, Edward and Singhal, Pankhuri and Hughes, Michael and Sizikova, Elena and Mortazavi, Bobak and Chen, Irene and Wang, Fei and Sarker, Tasmie and McDermott, Matthew and Ghassemi, Marzyeh},
  volume    = {248},
  series    = {Proceedings of Machine Learning Research},
  month     = {27--28 Jun},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v248/main/assets/ravva24a/ravva24a.pdf},
  url       = {https://proceedings.mlr.press/v248/ravva24a.html},
  abstract  = {Over the last decade, there has been significant progress in the field of interactive virtual rehabilitation. Physical therapy (PT) stands as a highly effective approach for enhancing physical impairments. However, patient motivation and progress tracking in rehabilitation outcomes remain a challenge. This work addresses the gap through a machine learning-based approach to objectively measure outcomes of the upper limb virtual therapy system in a user study with non-clinical participants. In this study, we use virtual reality to perform several tracing tasks while collecting motion and movement data using a KinArm robot and a custom-made wearable sleeve sensor. We introduce a two-step machine learning architecture to predict the motion intention of participants. The first step predicts \textbf{reaching task segments} to which the participant-marked points belonged using gaze, while the second step employs a Long Short-Term Memory (LSTM) model to predict \textbf{directional movements} based on resistance change values from the wearable sensor and the KinArm. We specifically propose to transpose our raw resistance data to the time-domain which significantly improves the accuracy of the models by 34.6%. To evaluate the effectiveness of our model, we compared different classification techniques with various data configurations. The results show that our proposed computational method is exceptional at predicting participant’s actions with accuracy values of 96.72% for diamond reaching task, and 97.44% for circle reaching task, which demonstrates the great promise of using multimodal data, including eye-tracking and resistance change, to objectively measure the performance and intention in virtual rehabilitation settings.}
}
Endnote
%0 Conference Paper
%T A Machine Learning Approach for Predicting Upper Limb Motion Intentions with Multimodal Data
%A Pavan Uttej Ravva
%A Pinar Kullu
%A Mohammad Fahim Abrar
%A Roghayeh Leila Barmaki
%B Proceedings of the fifth Conference on Health, Inference, and Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Tom Pollard
%E Edward Choi
%E Pankhuri Singhal
%E Michael Hughes
%E Elena Sizikova
%E Bobak Mortazavi
%E Irene Chen
%E Fei Wang
%E Tasmie Sarker
%E Matthew McDermott
%E Marzyeh Ghassemi
%F pmlr-v248-ravva24a
%I PMLR
%P 169--181
%U https://proceedings.mlr.press/v248/ravva24a.html
%V 248
%X Over the last decade, there has been significant progress in the field of interactive virtual rehabilitation. Physical therapy (PT) stands as a highly effective approach for enhancing physical impairments. However, patient motivation and progress tracking in rehabilitation outcomes remain a challenge. This work addresses the gap through a machine learning-based approach to objectively measure outcomes of the upper limb virtual therapy system in a user study with non-clinical participants. In this study, we use virtual reality to perform several tracing tasks while collecting motion and movement data using a KinArm robot and a custom-made wearable sleeve sensor. We introduce a two-step machine learning architecture to predict the motion intention of participants. The first step predicts \textbf{reaching task segments} to which the participant-marked points belonged using gaze, while the second step employs a Long Short-Term Memory (LSTM) model to predict \textbf{directional movements} based on resistance change values from the wearable sensor and the KinArm. We specifically propose to transpose our raw resistance data to the time-domain which significantly improves the accuracy of the models by 34.6%. To evaluate the effectiveness of our model, we compared different classification techniques with various data configurations. The results show that our proposed computational method is exceptional at predicting participant’s actions with accuracy values of 96.72% for diamond reaching task, and 97.44% for circle reaching task, which demonstrates the great promise of using multimodal data, including eye-tracking and resistance change, to objectively measure the performance and intention in virtual rehabilitation settings.
APA
Ravva, P.U., Kullu, P., Abrar, M.F. & Barmaki, R.L. (2024). A Machine Learning Approach for Predicting Upper Limb Motion Intentions with Multimodal Data. Proceedings of the fifth Conference on Health, Inference, and Learning, in Proceedings of Machine Learning Research 248:169-181. Available from https://proceedings.mlr.press/v248/ravva24a.html.
