Title: Machine learning-based arm movement identification: A generalization study from structured to semi-structured environments
Abstract:
This presentation will explore how machine learning and wearable sensor technologies can support home-based Upper Extremity (UE) rehabilitation for stroke survivors by enabling accurate movement identification across diverse environments and conditions.
Increasing the intensity of rehabilitation is essential for optimal motor recovery, as higher intensity is associated with better functional outcomes. In this context, accurately identifying the intensity and type of UE movements during activities of daily living (ADLs) provides valuable clinical insight into recovery progression in home settings. While wearable devices equipped with machine learning (ML) algorithms have shown promise for arm movement identification, most ML models are evaluated in highly controlled, structured laboratory environments. This raises concerns about their ability to generalize to the more variable and realistic conditions encountered in daily life, an essential requirement for practical, real-world deployment.
We present a study investigating the generalization capabilities of two machine learning models, a Random Forest (RF) and a hybrid deep learning model combining convolutional and recurrent neural networks, for identifying specific arm movements across different environmental contexts, using separate datasets for training and testing. In this pilot study, twelve healthy participants performed three types of arm movements (reaching, lifting, and pronation/supination) in both a structured lab setting and a semi-structured kitchen environment. We evaluated two sensor configurations: one using four Inertial Measurement Units (IMUs) placed along the arm from hand to shoulder, and another using a single IMU worn on the wrist to explore the feasibility of movement identification in a watch-like setup.
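A minimal sketch of an RF pipeline of this kind is shown below, assuming windowed IMU streams summarized by simple per-channel statistics. The sensor count, window length, sampling rate, and feature set are illustrative assumptions, not the study's exact implementation, and the data are synthetic placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

RNG = np.random.default_rng(0)
N_CHANNELS = 4 * 6          # four IMUs x (3-axis accel + 3-axis gyro); assumed layout
WINDOW = 200                # samples per window, e.g. 2 s at an assumed 100 Hz

def extract_features(window):
    """Per-channel summary statistics commonly used for IMU activity data."""
    return np.concatenate([window.mean(axis=0),
                           window.std(axis=0),
                           window.min(axis=0),
                           window.max(axis=0)])

# Synthetic stand-in for structured-lab training windows and labels
# (0 = reaching, 1 = lifting, 2 = pronation/supination).
X = np.stack([extract_features(RNG.normal(loc=c, size=(WINDOW, N_CHANNELS)))
              for c in (0, 1, 2) for _ in range(20)])
y = np.repeat([0, 1, 2], 20)

# Fit the classifier on the structured-trial features.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
```

The single-IMU configuration corresponds to reducing `N_CHANNELS` to one sensor's axes; the rest of the pipeline is unchanged, which is part of what makes the watch-like setup attractive.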
Our results showed that both models, when trained on structured trials, were able to accurately predict arm movements during more realistic, semi-structured kitchen trials. Specifically, with the four-sensor configuration, the RF classifier achieved balanced accuracies of 86.54% using the subject-specific approach (training and testing on data from the same subject) and 77.37% using the group approach (training on data from all subjects and testing on a randomly chosen target subject). The hybrid deep learning model performed slightly better, achieving 87.96% and 82.96% for the subject-specific and group approaches, respectively. Even with a reduced sensor setup (a single wrist-worn IMU), both models maintained reasonable accuracy, with the RF classifier demonstrating greater robustness to sensor reduction. Conversely, the hybrid model proved more effective in scenarios with limited subject-specific training data, still showing strong performance in identifying kitchen activities when trained on movement data collected from other individuals.
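The two evaluation schemes described above can be sketched as follows. This is an illustrative sketch only: the data are synthetic, the feature dimensionality and per-subject window counts are assumptions, and the study additionally split structured (training) from semi-structured (testing) trials within each scheme.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score

N_SUBJECTS, N_WINDOWS, N_FEAT = 12, 30, 24   # assumed sizes for illustration

def make_subject_data(seed):
    """Synthetic per-subject feature windows for 3 movement classes."""
    rng = np.random.default_rng(seed)
    y = rng.integers(0, 3, size=N_WINDOWS)
    X = rng.normal(size=(N_WINDOWS, N_FEAT)) + y[:, None]  # class-shifted means
    return X, y

data = [make_subject_data(s) for s in range(N_SUBJECTS)]

# Subject-specific approach: train and test on the same subject's data.
X0, y0 = data[0]
subj_clf = RandomForestClassifier(random_state=0).fit(X0, y0)
subj_acc = balanced_accuracy_score(y0, subj_clf.predict(X0))

# Group approach: train on all other subjects, test on the held-out target.
X_train = np.vstack([X for X, _ in data[1:]])
y_train = np.concatenate([y for _, y in data[1:]])
group_clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
group_acc = balanced_accuracy_score(y0, group_clf.predict(X0))
```

Balanced accuracy, the metric reported above, averages per-class recall and so is not inflated when one movement type dominates the test windows.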
These findings demonstrate that robust movement identification is possible across different environments using minimal wearable hardware, supporting the feasibility of deploying machine learning-based rehabilitation tools in home settings. This work lays the foundation for accessible stroke rehabilitation solutions that can adapt to the complexities of real-life scenarios outside the lab.