Toward Hand Gesture Recognition Using a Channel-Wise Cumulative Spike Train Image-Driven Model
Yang Yu, Zeyu Zhou, Yang Xu, Chen Chen, Weichao Guo, Xinjun Sheng
Cyborg and Bionic Systems (Washington, D.C.), vol. 6, article 0219, published 2025-03-21 (eCollection 2025)
DOI: 10.34133/cbsystems.0219
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11927004/pdf/
Citations: 0
Abstract
Recognizing hand gestures from neural control signals is essential for natural human-machine interaction and is widely applied in prosthesis control and rehabilitation. However, establishing associations between the neural control signals of motor units and gestures remains an open question. Here, we propose a channel-wise cumulative spike train (cw-CST) image-driven model (cwCST-CNN) for hand gesture recognition that leverages the spatial activation patterns of motor-unit firings to distinguish motor intentions. Specifically, the cw-CSTs of motor units were decomposed from high-density surface electromyography (HD-sEMG) using a spatial spike detection algorithm and then reconstructed into images according to their spatial recording positions. The resulting cw-CST images were fed into a customized convolutional neural network (CNN) to recognize gestures. In an experiment involving 10 gestures and 10 subjects, we compared the proposed method with two root-mean-square (RMS)-based approaches and one cw-CST-based approach: an RMS-image-driven CNN classification model, an RMS feature with a linear discriminant analysis (LDA) classifier, and a cw-CST discharge-rate feature with an LDA classifier. The results demonstrated that cwCST-CNN outperformed the other three methods, achieving a classification accuracy of 96.92% ± 1.77%. Moreover, analysis of the cw-CST and RMS features showed that the former had better separability across gestures and better consistency between training and testing datasets. This study provides a new solution that enhances the accuracy of gesture recognition using neural drive signals in human-machine interaction.
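The image-construction step described above can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes each electrode channel yields a binary motor-unit spike train over an analysis window, and that the cw-CST image pixel at an electrode's grid position is the cumulative spike count for that channel; the hypothetical `cwcst_image` and `rms_image` helpers and the 8×8 grid are illustrative choices, and the upstream HD-sEMG decomposition is omitted.

```python
import numpy as np

def cwcst_image(spike_trains, grid_shape):
    """Sketch of a channel-wise cumulative spike train (cw-CST) image:
    each pixel holds the cumulative spike count of the spike train at
    that electrode's spatial recording position (assumed layout)."""
    counts = spike_trains.sum(axis=1)           # spikes per channel over the window
    return counts.reshape(grid_shape).astype(float)

def rms_image(emg, grid_shape):
    """Baseline RMS feature image from raw HD-sEMG, same electrode layout."""
    rms = np.sqrt(np.mean(emg ** 2, axis=1))    # per-channel root mean square
    return rms.reshape(grid_shape)

# Toy example: 64 channels on an 8x8 grid, 200-sample window.
rng = np.random.default_rng(0)
n_ch, n_samp = 64, 200
spikes = (rng.random((n_ch, n_samp)) < 0.05).astype(int)  # sparse binary spike trains
emg = rng.standard_normal((n_ch, n_samp))                 # surrogate raw HD-sEMG

img = cwcst_image(spikes, (8, 8))   # 8x8 cw-CST image, one per analysis window
base = rms_image(emg, (8, 8))       # 8x8 RMS image for the baseline pipeline
```

In the paper's pipeline, a stack of such per-window images would then be fed to the customized CNN classifier; any small 2D-convolutional architecture over 8×8 single-channel inputs fits that role.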