Nicholas Zhao, J. Zariffa. Topics in Spinal Cord Injury Rehabilitation, September 2023. DOI: 10.46292/sci23-1972228s
Student Competition (Technology Innovation) ID 1972228
Upper limb rehabilitation after cervical spinal cord injury is vital for regaining independence. Hand function assessments are critical for upper limb rehabilitation, but they cannot accurately reflect performance in the individual's home environment. Video from wearable cameras (egocentric video), paired with deep learning, can assess hand function during activities of daily living (ADLs) at home. Previous studies have focused on analyzing quantitative aspects of hand usage, but no study to date has used deep neural networks to assess the quality of hand usage from egocentric video. The objective of this study was to train a deep neural network to predict hand function assessment scores from egocentric video. The dataset contained egocentric videos of ADLs performed by 17 participants with AIS grades A–D in a home simulation laboratory. Tasks were annotated with scores adapted from the Graded Redefined Assessment of Strength, Sensibility, and Prehension (GRASSP). The annotated video was then used to train and validate a SlowFast neural network to predict GRASSP scores, using leave-one-subject-out cross-validation. Model performance was evaluated by mean absolute error, accuracy, and F1 score, and the model was optimized with a hyperparameter sweep. The top-performing model demonstrated a mean absolute error of 0.52±0.19, an accuracy of 0.55±0.14, and an F1 score of 0.55±0.16, on an ordinal scale from 1 to 5. These results demonstrate that automated assessment of hand function is possible by applying deep learning to egocentric video. Future work should expand the model to larger datasets with more variability.
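The evaluation protocol described above (leave-one-subject-out cross-validation scored by mean absolute error, accuracy, and F1) can be sketched with scikit-learn's `LeaveOneGroupOut` splitter. This is an illustrative sketch only: the synthetic features and the majority-class baseline stand in for the actual video features and SlowFast model, and all variable names here are hypothetical, not taken from the authors' pipeline.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.metrics import mean_absolute_error, accuracy_score, f1_score

rng = np.random.default_rng(0)

# Synthetic stand-in: 17 "participants", each contributing 10 annotated clips.
# In the real pipeline the features would come from egocentric video and the
# labels from GRASSP-adapted scores on an ordinal 1-5 scale.
subjects = np.repeat(np.arange(17), 10)
X = rng.normal(size=(len(subjects), 8))
y = rng.integers(1, 6, size=len(subjects))  # ordinal scores in 1..5

logo = LeaveOneGroupOut()
maes, accs, f1s = [], [], []
for train_idx, test_idx in logo.split(X, y, groups=subjects):
    # A real model (e.g. SlowFast) would be trained on train_idx here;
    # a majority-class predictor serves as a trivial placeholder.
    values, counts = np.unique(y[train_idx], return_counts=True)
    pred = np.full(len(test_idx), values[np.argmax(counts)])
    maes.append(mean_absolute_error(y[test_idx], pred))
    accs.append(accuracy_score(y[test_idx], pred))
    f1s.append(f1_score(y[test_idx], pred, average="macro",
                        labels=list(range(1, 6)), zero_division=0))

# One fold per held-out participant, so metrics are mean±std over 17 folds.
print(f"MAE {np.mean(maes):.2f}±{np.std(maes):.2f}, "
      f"accuracy {np.mean(accs):.2f}±{np.std(accs):.2f}, "
      f"F1 {np.mean(f1s):.2f}±{np.std(f1s):.2f}")
```

Leave-one-subject-out splitting matters here because clips from the same participant are correlated; holding out whole subjects estimates how the model generalizes to people it has never seen.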
About the journal:
Now in its 22nd year, Topics in Spinal Cord Injury Rehabilitation (TSCIR) is the leading interdisciplinary journal of SCI rehabilitation techniques and care. The journal is peer-reviewed, practical, and features one key topic per issue. Published topics include: mobility, sexuality, genitourinary, functional assessment, skin care, psychosocial, high tetraplegia, physical activity, pediatric, FES, SCI/TBI, electronic medicine, orthotics, secondary conditions, research, aging, legal issues, women & SCI, pain, environmental effects, and life care planning.