Authors: Shang Shi, Jianjun Meng, Zongtian Yin, Weichao Guo, Xiangyang Zhu
Journal: Journal of Neural Engineering
DOI: 10.1088/1741-2552/ade504
Published: 2025-07-01 (Journal Article)
A robust neural prosthetic control strategy against arm position variability and fatigue based on multi-sensor fusion.
Objective. Multi-modal sensor fusion combining surface electromyography (sEMG) and A-mode ultrasound (US) has achieved satisfactory gesture-recognition performance, helping amputees restore upper-limb function. However, prior studies were conducted in laboratory settings with a fixed arm position and therefore translate poorly to amputees using prostheses in daily life. Moreover, the motion tests used in current studies require prolonged gesture execution, and sustained muscle contraction induces fatigue and raises the risk of misclassification in practical applications. A robust control strategy is therefore needed to overcome the limitations of fixed arm positions and sustained muscle contractions.

Approach. This paper introduces a novel online decoding strategy based on the fusion of A-mode US, sEMG, and inertial measurement unit (IMU) sensors. Decoding proceeds in four stages: arm position selection, sEMG thresholding, pattern recognition, and a post-processing strategy that preserves the most recent short-duration hand gesture during rest, aiming to improve prosthetic hand control in practical applications.

Main results. Offline classification accuracy reached 96.02% with fusion-sensor decoding and dropped to 90.72% when healthy participants wore an arm fixture simulating the load of a real prosthesis. With the post-processing strategy enabled, the online classification accuracy (ONCA) for recognized gestures across three arm positions was 92.51%, significantly higher than the 78.97% ONCA obtained with the strategy disabled.

Significance. The post-processing strategy removes the need for sustained muscle contraction and makes prosthetic hand control highly robust. The proposed online decoding strategy achieved strong performance on customized prostheses for two amputees across various arm positions, indicating a promising prospect for multi-modal sensor-fusion-based prosthetic applications.
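The four-stage decoding pipeline from the Approach section can be sketched in code. This is a minimal illustrative sketch, not the authors' implementation: the pitch-angle bins, rest threshold, class labels, and `predict` interface are all hypothetical assumptions chosen only to show how the stages chain together, including the key post-processing step that holds the previous gesture during rest so the user need not sustain a contraction.

```python
REST = "rest"

class FusionDecoder:
    """Illustrative four-stage decoder: arm position selection (IMU),
    sEMG thresholding, pattern recognition, and gesture-holding
    post-processing. All parameters are assumed, not from the paper."""

    def __init__(self, classifiers, emg_rest_threshold=0.05):
        # classifiers: dict mapping an arm-position label to a trained
        # model exposing predict(features) -> gesture (assumed interface)
        self.classifiers = classifiers
        self.emg_rest_threshold = emg_rest_threshold
        self.last_gesture = REST

    def decode(self, imu_sample, emg_envelope, fused_features):
        # Stage 1: select the arm position from IMU orientation
        # (hypothetically, simple pitch-angle binning into three positions).
        position = self.select_arm_position(imu_sample)

        # Stage 2: an sEMG amplitude threshold separates rest from activity.
        if emg_envelope < self.emg_rest_threshold:
            # Stage 4: post-processing -- during rest, hold the previous
            # short-duration gesture instead of emitting "rest", so a
            # brief contraction suffices and fatigue is mitigated.
            return self.last_gesture

        # Stage 3: pattern recognition with the position-specific model
        # trained on fused sEMG + A-mode ultrasound features.
        gesture = self.classifiers[position].predict(fused_features)
        self.last_gesture = gesture
        return gesture

    @staticmethod
    def select_arm_position(imu_sample):
        # Hypothetical three-position binning by pitch angle in degrees.
        pitch = imu_sample["pitch_deg"]
        if pitch < -30:
            return "down"
        if pitch > 30:
            return "up"
        return "horizontal"
```

Under this sketch, a brief "fist" contraction followed by relaxation keeps outputting "fist" until the next above-threshold contraction, which is the behavior the reported 92.51% vs. 78.97% ONCA comparison attributes to the post-processing strategy.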