Optimizing Cross-Modal Matching for Multimodal Motor Rehabilitation
Anway S Pimpalkar, A Michael West, Jing Xu, Jeremy D Brown
IEEE International Conference on Rehabilitation Robotics (ICORR), 2025, pp. 559-566
DOI: 10.1109/ICORR66766.2025.11063112
Abstract
Stroke often causes sensorimotor deficits, impairing hand dexterity and disrupting independence for millions worldwide. While rehabilitation devices leveraging visual and haptic feedback show promise, their effectiveness is limited by a lack of perceptual equity, which is necessary to ensure fair comparisons between sensory modalities. This study refines cross-modal matching protocols to address this gap, enabling unbiased evaluation of multimodal feedback. Using the Hand Articulation and Neurotraining Device (HAND), 12 healthy participants matched visual and haptic stimuli in a structured task. A streamlined protocol, requiring just 2-3 blocks and 3 reference intensities, reduced experimental time fivefold while preserving data integrity. Data were analyzed using linear and exponential models applied to both full and reduced datasets. Participant performance was consistent across trials, with higher matching errors at greater stimulus intensities, likely attributable to sensory saturation effects. The study also offers practical methodological insights, most notably that reduced data sampling paradigms can substantially improve experimental efficiency without compromising data quality. This work advances perceptual equity in multisensory feedback systems, addressing sensory encoding variability to support scalable, personalized therapeutic strategies for stroke recovery.
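The abstract states that matching data were fit with linear and exponential models but gives no implementation details. The sketch below shows one plausible way such fits could be performed in Python with scipy; the data values, model forms, and starting parameters are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical cross-modal matching data: normalized reference haptic
# intensities and the visual intensities participants selected as
# perceptual matches. Values are made up for illustration only.
reference = np.array([0.2, 0.2, 0.5, 0.5, 0.8, 0.8])
matched   = np.array([0.18, 0.22, 0.47, 0.55, 0.70, 0.74])

def linear(x, a, b):
    """Linear matching function: matched = a * reference + b."""
    return a * x + b

def exponential(x, a, b, c):
    """Exponential matching function: matched = a * exp(b * reference) + c."""
    return a * np.exp(b * x) + c

# Fit both candidate models to the (reference, matched) pairs.
lin_params, _ = curve_fit(linear, reference, matched)
exp_params, _ = curve_fit(exponential, reference, matched,
                          p0=(0.1, 1.0, 0.0), maxfev=10000)

# Compare fits by residual sum of squares; lower indicates a closer fit.
for name, model, params in [("linear", linear, lin_params),
                            ("exponential", exponential, exp_params)]:
    rss = np.sum((matched - model(reference, *params)) ** 2)
    print(f"{name}: params={np.round(params, 3)}, RSS={rss:.4f}")
```

Fitting both model families to the full dataset and to a reduced subset (e.g., only the 3 reference intensities the streamlined protocol retains) would allow the kind of full-versus-reduced comparison the abstract describes.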