{"title":"Multivideo Models for Classifying Hand Impairment After Stroke Using Egocentric Video","authors":"Anne Mei;Meng-Fen Tsai;José Zariffa","doi":"10.1109/TNSRE.2025.3596488","DOIUrl":null,"url":null,"abstract":"Objectives: After stroke, hand function assessments are used as outcome measures to evaluate new rehabilitation therapies, but do not reflect true performance in natural environments. Wearable (egocentric) cameras provide a way to capture hand function information during activities of daily living (ADLs). However, while clinical assessments involve observing multiple functional tasks, existing deep learning methods developed to analyze hands in egocentric video are only capable of considering single ADLs. This study presents a novel multi-video architecture that processes multiple task videos to make improved estimations about hand impairment. Methods: An egocentric video dataset of ADLs performed by stroke survivors in a home simulation lab was used to develop single and multi-input video models for binary impairment classification. Using SlowFast as a base feature extractor, late fusion (majority voting, fully-connected network) and intermediate fusion (concatenation, Markov chain) were investigated for building multi-video architectures. Results: Through evaluation with Leave-One-Participant-Out-Cross-Validation, using intermediate concatenation fusion to build multi-video models was found to achieve the best performance out of the fusion techniques. The resulting multi-video model for cropped inputs achieved an F1-score of <inline-formula> <tex-math>$0.778\\pm 0.129$ </tex-math></inline-formula> and significantly outperformed its single-video counterpart (F1-score of <inline-formula> <tex-math>$0.696\\pm 0.102$ </tex-math></inline-formula>). Similarly, the multi-video model for full-frame inputs (F1-score of <inline-formula> <tex-math>$0.796\\pm 0.102$ </tex-math></inline-formula>) significantly outperformed its single-video counterpart (F1-score of <inline-formula> <tex-math>$0.708\\pm 0.099$ </tex-math></inline-formula>). Conclusion: Multi-video architectures are beneficial for estimating hand impairment from egocentric video after stroke. Significance: The proposed deep learning solution is the first of its kind in multi-video analysis, and opens the door to further applications in automating other multi-observation assessments for clinical use.","PeriodicalId":13419,"journal":{"name":"IEEE Transactions on Neural Systems and Rehabilitation Engineering","volume":"33 ","pages":"3303-3313"},"PeriodicalIF":5.2000,"publicationDate":"2025-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11115139","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Neural Systems and Rehabilitation Engineering","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/11115139/","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
Citations: 0
Abstract
Objectives: After stroke, hand function assessments are used as outcome measures to evaluate new rehabilitation therapies, but they do not reflect true performance in natural environments. Wearable (egocentric) cameras provide a way to capture hand function information during activities of daily living (ADLs). However, while clinical assessments involve observing multiple functional tasks, existing deep learning methods for analyzing hands in egocentric video can only consider a single ADL at a time. This study presents a novel multi-video architecture that processes multiple task videos to make improved estimates of hand impairment. Methods: An egocentric video dataset of ADLs performed by stroke survivors in a home simulation lab was used to develop single- and multi-input video models for binary impairment classification. Using SlowFast as a base feature extractor, late fusion (majority voting, fully connected network) and intermediate fusion (concatenation, Markov chain) were investigated for building multi-video architectures. Results: Under leave-one-participant-out cross-validation, intermediate concatenation fusion achieved the best performance of the fusion techniques for building multi-video models. The resulting multi-video model for cropped inputs achieved an F1-score of $0.778 \pm 0.129$ and significantly outperformed its single-video counterpart (F1-score of $0.696 \pm 0.102$). Similarly, the multi-video model for full-frame inputs (F1-score of $0.796 \pm 0.102$) significantly outperformed its single-video counterpart (F1-score of $0.708 \pm 0.099$). Conclusion: Multi-video architectures are beneficial for estimating hand impairment from egocentric video after stroke. Significance: The proposed deep learning solution is the first of its kind in multi-video analysis and opens the door to further applications in automating other multi-observation assessments for clinical use.
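The intermediate concatenation fusion described in the abstract can be illustrated with a short PyTorch sketch: each task video passes through a shared backbone (SlowFast in the paper; a lightweight stand-in below so the example runs offline), the per-task feature vectors are concatenated, and a small head produces the binary impairment logit. The class names, feature dimensions, and number of tasks here are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of intermediate concatenation fusion for multi-video
# binary impairment classification. DummyBackbone stands in for the
# SlowFast feature extractor used in the paper; all names and sizes
# are illustrative, not the authors' code.
import torch
import torch.nn as nn

class DummyBackbone(nn.Module):
    """Stand-in feature extractor: video clip -> feature vector."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)   # collapse (T, H, W)
        self.proj = nn.Linear(3, feat_dim)    # 3 input channels -> feat_dim
    def forward(self, clip):                  # clip: (B, 3, T, H, W)
        x = self.pool(clip).flatten(1)        # (B, 3)
        return self.proj(x)                   # (B, feat_dim)

class ConcatFusionClassifier(nn.Module):
    """Intermediate fusion: per-task features are concatenated, then classified."""
    def __init__(self, backbone, feat_dim=256, num_tasks=3):
        super().__init__()
        self.backbone = backbone
        self.head = nn.Sequential(
            nn.Linear(feat_dim * num_tasks, 128),
            nn.ReLU(),
            nn.Linear(128, 1),                # single binary impairment logit
        )
    def forward(self, clips):                 # clips: list of (B, 3, T, H, W)
        feats = [self.backbone(c) for c in clips]
        return self.head(torch.cat(feats, dim=1)).squeeze(1)

# Toy forward pass: 3 ADL task videos per participant, batch of 2.
model = ConcatFusionClassifier(DummyBackbone())
clips = [torch.randn(2, 3, 8, 64, 64) for _ in range(3)]
logits = model(clips)                          # shape: (2,)
print(torch.sigmoid(logits))                   # impairment probabilities
```

On evaluation: the paper uses leave-one-participant-out cross-validation, which in scikit-learn terms corresponds to splitting with LeaveOneGroupOut using participant IDs as the groups, so no participant's videos appear in both the training and test folds.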
Journal Description:
Rehabilitative and neural aspects of biomedical engineering, including functional electrical stimulation, acoustic dynamics, human performance measurement and analysis, nerve stimulation, electromyography, motor control and stimulation; and hardware and software applications for rehabilitation engineering and assistive devices.