Continuous and discrete decoding of overt speech with scalp electroencephalography (EEG).
Alexander Craik, Heather Dial, Jose L Contreras-Vidal. Journal of Neural Engineering, published 2025-03-14. DOI: 10.1088/1741-2552/ad8d0a

Objective. Neurological disorders affecting speech production adversely impact quality of life for over 7 million individuals in the US. Traditional speech interfaces such as eye-tracking devices and P300 spellers are slow and unnatural for these patients. An alternative solution, speech brain-computer interfaces (BCIs), directly decodes speech characteristics, offering a more natural communication mechanism. This research explores the feasibility of decoding speech features using non-invasive EEG. Approach. Nine neurologically intact participants were equipped with a 63-channel EEG system with additional sensors to eliminate eye artifacts. Participants read aloud sentences selected for phonetic similarity to the English language. Deep learning models, including convolutional neural networks and recurrent neural networks with and without attention modules, were optimized with a focus on minimizing trainable parameters and using small input window sizes for real-time application. These models were employed for discrete and continuous speech decoding tasks. Main results. Statistically significant participant-independent decoding performance was achieved for discrete classes and continuous characteristics of the produced audio signal. A frequency sub-band analysis highlighted the significance of certain frequency bands (delta, theta, gamma) for decoding performance, and a perturbation analysis was used to identify crucial channels. The channel selection methods assessed did not significantly improve performance, suggesting a distributed representation of speech information in the EEG signals. Leave-One-Out training demonstrated the feasibility of exploiting common speech neural correlates, reducing data collection requirements for individual participants. Significance. These findings contribute significantly to the development of EEG-enabled speech synthesis by demonstrating the feasibility of decoding both discrete and continuous speech features from EEG signals, even in the presence of EMG artifacts. By addressing the challenges of EMG interference and optimizing deep learning models for speech decoding, this study lays a strong foundation for EEG-based speech BCIs.
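The participant-independent Leave-One-Out training scheme described above can be sketched as a leave-one-participant-out split; the function and identifiers below are ours, not the paper's.

```python
# Hypothetical sketch of leave-one-participant-out training: train on
# all participants except one, then test on the held-out participant.
def leave_one_participant_out(participant_ids):
    """Yield (train_ids, held_out_id) splits, one per participant."""
    for held_out in participant_ids:
        train = [p for p in participant_ids if p != held_out]
        yield train, held_out

splits = list(leave_one_participant_out(["P1", "P2", "P3"]))
```

Each split trains a model on the pooled data of the remaining participants, which is what makes the resulting decoder participant-independent.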
Guidance for sharing computational models of neural stimulation: from project planning to publication.
Nicole A Pelot, Boshuo Wang, Daniel P Marshall, Minhaj A Hussain, Eric D Musselman, Gene J Yu, Jahrane Dale, Ian W Baumgart, Daniel Dardani, Princess Tara Zamani, David Chang Villacreses, Joost B Wagenaar, Warren M Grill. Journal of Neural Engineering, published 2025-03-13. DOI: 10.1088/1741-2552/adb997

Objective. Sharing computational models offers many benefits, including increased scientific rigor during project execution, readership of the associated paper, resource usage efficiency, replicability, and reusability. In recognition of the growing practice and requirement of sharing models, code, and data, we provide guidance to facilitate sharing of computational models, offering an accessible resource for regular reference throughout a project's stages. Approach. We synthesized the literature on good practices in scientific computing and on code and data sharing with our experience in developing, sharing, and using models of neural stimulation, although the guidance also applies well to most other types of computational models. Main results. We first describe the '6 R' characteristics of shared models, leaning on prior scientific computing literature, which enforce accountability and enable advancement: re-runnability, repeatability, replicability, reproducibility, reusability, and readability. We then summarize action items associated with good practices in scientific computing, including selection of computational tools during project planning, code and documentation design during development, and user instructions for deployment. We provide a detailed checklist of the contents of shared models and associated materials, including the model itself, code for reproducing published figures, documentation, and supporting datasets. We describe code, model, and data repositories, including a list of characteristics to consider when selecting a platform for sharing. We describe intellectual property (IP) considerations that balance permissive, open-source licenses against software patents and bespoke licenses that govern and incentivize commercialization. Finally, we exemplify these practices with our ASCENT pipeline for modeling peripheral nerve stimulation. Significance. We hope that this paper will serve as an important and actionable reference for scientists who develop models, from project planning through publication, as well as for model users, institutions, IP experts, journals, funding sources, and repository platform developers.
Manipulating cybersickness in virtual reality-based neurofeedback and its effects on training performance.
Lisa M Berger, Guilherme Wood, Silvia E Kober. Journal of Neural Engineering, published 2025-03-13. DOI: 10.1088/1741-2552/adbd76

Objective. Virtual reality (VR) serves as a modern and powerful tool to enrich neurofeedback (NF) and brain-computer interface (BCI) applications and to achieve higher user motivation and adherence to training. However, between 20% and 80% of users develop symptoms of cybersickness (CS), namely nausea, oculomotor problems, or disorientation during VR interaction, which influence user performance and behavior in VR. Hence, we investigated whether CS-inducing VR paradigms influence the success of an NF training task. Approach. We tested 39 healthy participants (20 female) in a single-session VR-based NF study. Half of the participants were presented with a high-CS-inducing VR environment, in which movement speed, field of view, and camera angle were varied in a CS-inducing fashion throughout the session; the other half underwent NF training in a less CS-inducing VR environment, in which those parameters were held constant. The NF training consisted of six runs of 3 min each, in which participants were to increase their sensorimotor rhythm (SMR, 12-15 Hz) while keeping artifact control frequencies constant (theta, 4-7 Hz; beta, 16-30 Hz). Heart rate and subjectively experienced CS were also assessed. Main results. The high-CS-inducing condition tended to produce more subjectively experienced nausea symptoms than the low-CS-inducing condition. Further, women experienced more CS, had a higher heart rate, and showed worse NF performance than men. However, SMR activity during NF training was comparable between the high- and low-CS-inducing groups. Both groups were able to increase their SMR across feedback runs, although there was a tendency toward higher SMR power for male participants in the low-CS group. Significance. Sickness symptoms in VR therefore do not necessarily impair NF/BCI training success. This takes us one step further in evaluating the practicability of VR in BCI and NF applications. Nevertheless, inter-individual differences in CS susceptibility should be taken into account for VR-based NF applications.
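The feedback quantity in the study above is band power in the SMR band relative to the artifact control bands. A minimal sketch of how such band powers can be estimated (our illustration, not the study's pipeline; sampling rate and signal are assumed):

```python
import numpy as np
from scipy.signal import welch

# Estimate power in an EEG frequency band from one channel via Welch's PSD.
def band_power(x, fs, low, high):
    freqs, psd = welch(x, fs=fs, nperseg=fs * 2)
    mask = (freqs >= low) & (freqs <= high)
    return psd[mask].sum()  # proportional to power in the band

fs = 250                                   # assumed sampling rate (Hz)
t = np.arange(0, 3, 1 / fs)
rng = np.random.default_rng(0)
# synthetic signal with a dominant 13 Hz (SMR-range) component plus noise
x = np.sin(2 * np.pi * 13 * t) + 0.1 * rng.standard_normal(t.size)

smr = band_power(x, fs, 12, 15)    # feedback band (SMR, 12-15 Hz)
theta = band_power(x, fs, 4, 7)    # artifact control band (theta)
beta = band_power(x, fs, 16, 30)   # artifact control band (beta)
```

A feedback loop would then reward increases in `smr` while penalizing changes in `theta` and `beta`.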
The state-of-the-art of invasive brain-computer interfaces in humans: a systematic review and individual patient meta-analysis.
Mervyn Jun Rui Lim, Jack Yu Tung Lo, Yong Yi Tan, Hong-Yi Lin, Yuhang Wang, Dewei Tan, Eugene Wang, Yin Yin Naing Ma, Joel Jia Wei Ng, Ryan Ashraf Jefree, Yeo Tseng Tsai. Journal of Neural Engineering, published 2025-03-12. DOI: 10.1088/1741-2552/adb88e

Objective. Invasive brain-computer interfaces (iBCIs) have evolved significantly since the first neurotrophic electrode was implanted in a human subject three decades ago. Since then, advances in both hardware and software have increased iBCI performance, enabling tasks such as decoding conversations in real time and manipulating external limb prostheses with haptic feedback. In this systematic review, we evaluate the advances in iBCI hardware, software, and functionality and describe challenges and opportunities in the iBCI field. Approach. Medline, EMBASE, PubMed, and Cochrane databases were searched from inception until 13 April 2024. Primary studies reporting the use of iBCIs in human subjects to restore function were included. Endpoints extracted include iBCI electrode type, iBCI implantation, decoder algorithm, iBCI effector, testing and training methodology, and functional outcomes. Narrative synthesis of outcomes was performed with a focus on hardware and software development trends over time. Individual patient data (IPD) were also collected, and an IPD meta-analysis was conducted to identify factors significant to iBCI performance. Main results. 93 studies involving 214 patients were included in this systematic review. The median task performance accuracy was 76.00% for cursor control tasks (interquartile range [IQR] = 21.2), 80.00% for motor tasks (IQR = 23.3), and 93.27% for communication tasks (IQR = 15.3). Current advances in iBCI software include the use of recurrent neural network architectures as decoders, while hardware advances such as intravascular stentrodes provide a less invasive alternative for neural recording. Challenges include the lack of standardized testing paradigms for specific functional outcomes and issues with portability and chronicity that limit iBCI usage to laboratory settings. Significance. Our systematic review demonstrates the exponential rate at which iBCIs have evolved over the past two decades. Yet more work is needed for widespread clinical adoption and translation to long-term home use.
GraphSleepFormer: a multi-modal graph neural network for sleep staging in OSA patients.
Chen Wang, Xiuquan Jiang, Chengyan Lv, Qi Meng, Pengcheng Zhao, Di Yan, Chao Feng, Fangzhou Xu, Shanshan Lu, Tzyy-Ping Jung, Jiancai Leng. Journal of Neural Engineering, published 2025-03-11. DOI: 10.1088/1741-2552/adb996

Objective. Obstructive sleep apnea (OSA) is a prevalent sleep disorder. Accurate sleep staging is a prerequisite for the study of sleep-related disorders and the evaluation of sleep quality. We introduce a novel GraphSleepFormer (GSF) network designed to effectively capture global dependencies and node characteristics in graph-structured data. Approach. The network incorporates centrality coding and spatial coding into its architecture. It employs adaptive learning of adjacency matrices for spatial encoding between channels located on the head, thereby encoding graph structure information to enhance the model's representation and understanding of spatial relationships. Centrality encoding integrates the degree matrix into node features, assigning varying degrees of attention to different channels. Ablation experiments demonstrate the effectiveness of these encoding methods. The Shapley Additive Explanations (SHAP) method was employed to evaluate the contribution of each channel to sleep staging, highlighting the necessity of using multimodal data. Main results. We trained our model on overnight polysomnography data collected from 28 OSA patients in a clinical setting and achieved an overall accuracy of 80.10%. GSF achieved performance comparable to state-of-the-art methods on two subsets of the ISRUC database. Significance. GSF accurately identifies sleep periods, providing a critical basis for diagnosing and treating OSA and thereby contributing to advancements in sleep medicine.
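The centrality-encoding idea above (integrating the degree matrix into node features) can be sketched as adding a per-degree embedding to each channel's features. This is our minimal illustration, not the published GSF code; all names and values are assumptions.

```python
import numpy as np

# Add a degree embedding to each node's features, so channels with
# different connectivity degrees receive distinct representations.
def centrality_encode(node_feats, adjacency, degree_table):
    """node_feats: (N, F); adjacency: (N, N) binary; degree_table: (max_degree + 1, F)."""
    degrees = adjacency.sum(axis=1).astype(int)  # node degree from the adjacency matrix
    return node_feats + degree_table[degrees]    # node feature + degree embedding

adj = np.array([[0, 1, 1, 0],
                [1, 0, 0, 0],
                [1, 0, 0, 1],
                [0, 0, 1, 0]])
feats = np.zeros((4, 3))                         # 4 channels, 3 features each
table = np.arange(9, dtype=float).reshape(3, 3)  # embeddings for degrees 0, 1, 2
encoded = centrality_encode(feats, adj, table)
```

In a trainable model the `degree_table` would be a learned embedding; here it is a fixed array so the lookup is easy to inspect.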
xDev: a mixed-signal, software-defined neurotechnology interface platform for accelerated system development.
Samuel R Parker, Xavier J Lee, Jonathan S Calvert, David A Borton. Journal of Neural Engineering 22(2), published 2025-03-11. DOI: 10.1088/1741-2552/adb7bf. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11894552/pdf/

Objective. Advances in electronics and materials science have led to the development of sophisticated components for clinical and research neurotechnology systems. However, instrumentation to easily evaluate how these components function in a complete system does not yet exist. In this work, we set out to design and validate a software-defined, mixed-signal routing fabric, 'xDev', that enables neurotechnology system designers to rapidly iterate, evaluate, and deploy advanced multi-component systems. Approach. We developed a set of system requirements for xDev and implemented a design based on a 16 × 16 analog crosspoint multiplexer. We then tested the impedance and switching characteristics of the design, assessed signal gain and crosstalk attenuation across biological and high-speed digital signaling frequencies, and evaluated the ability of xDev to flexibly reroute microvolt-scale and high-speed signals. Finally, we conducted an intraoperative in vivo deployment of xDev to rapidly conduct neuromodulation experiments using diverse neurotechnology submodules. Main results. The xDev system's impedance matching, crosstalk attenuation, and frequency response characteristics accurately transmitted signals over a broad range of frequencies, encapsulating features typical of biosignals and extending into high-speed digital ranges. Microvolt-scale biosignals and 600 Mbps Ethernet connections were accurately routed through the fabric. These performance characteristics culminated in an in vivo demonstration of the flexibility of the system via implanted spinal electrode arrays in an ovine model. Significance. xDev represents a first-of-its-kind, low-cost, software-defined neurotechnology development accelerator platform. Through the public, open-source distribution of our designs, we lower the obstacles facing the development of future neurotechnology systems.
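A software-defined crosspoint fabric of the kind described above can be modeled as a switch matrix in which software assigns each output channel to one input channel. This is a hedged sketch of the concept only; the class and method names are hypothetical and are not xDev's API.

```python
# Toy model of a software-defined 16 x 16 crosspoint routing fabric.
class Crosspoint:
    def __init__(self, size=16):
        self.size = size
        self.routes = {}                 # output channel -> input channel

    def connect(self, inp, out):
        if not (0 <= inp < self.size and 0 <= out < self.size):
            raise ValueError("channel index out of range")
        self.routes[out] = inp           # each output is driven by one input

    def matrix(self):
        # switch matrix: m[i][o] == 1 when input i drives output o
        m = [[0] * self.size for _ in range(self.size)]
        for out, inp in self.routes.items():
            m[inp][out] = 1
        return m

xbar = Crosspoint()
xbar.connect(3, 7)   # route input 3 to output 7
xbar.connect(3, 8)   # an input may fan out to several outputs
```

Reconfiguring the fabric is then a matter of rewriting this routing table, which is what lets experimenters swap submodules without rewiring hardware.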
Explainable multiscale temporal convolutional neural network model for sleep stage detection based on electroencephalogram activities.
Chun-Ren Phang, Akimasa Hirata. Journal of Neural Engineering, published 2025-03-07. DOI: 10.1088/1741-2552/adb90c

Objective. Humans spend a significant portion of their lives in sleep, an essential driver of body metabolism. Moreover, as sleep deprivation can cause various health complications, it is crucial to develop an automatic sleep stage detection model to facilitate the tedious manual labeling process. Notably, recently proposed sleep staging algorithms lack model explainability and still require performance improvement. Approach. We implemented multiscale neurophysiology-mimicking kernels to capture sleep-related electroencephalogram (EEG) activities at varying frequencies and temporal lengths; the implemented model is named the 'multiscale temporal convolutional neural network (MTCNN)'. We evaluated its performance using an open-source dataset (the Sleep-EDF Database Expanded, comprising 153 days of polysomnogram data). Main results. By investigating the learned kernel weights, we observed that MTCNN detected the EEG activities specific to each sleep stage, such as stage-specific frequencies, K-complexes, and sawtooth waves. In characterizing these neurophysiologically significant features, MTCNN achieved an overall accuracy (OAcc) of 91.12% and a Cohen kappa coefficient of 0.86 in the cross-subject paradigm, and an OAcc of 88.24% with a Cohen kappa coefficient of 0.80 in the leave-few-days-out analysis. MTCNN also outperformed existing deep learning models in sleep stage classification even when trained with only 16% of the total EEG data, achieving an OAcc of 85.62% and a Cohen kappa coefficient of 0.75 on the remaining 84% of the data used for testing. Significance. The proposed MTCNN enables model explainability and can be trained with less data, which benefits real-world application because large amounts of training data are not always readily available.
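The multiscale-kernel idea above can be illustrated with temporal kernels of different lengths, each responding to a different frequency scale. This is our assumed sketch, not the authors' MTCNN code; the sampling rate, kernel lengths, and test signal are illustrative.

```python
import numpy as np

# Temporal kernels of different lengths act as lowpass filters with
# different cutoffs: longer kernels emphasize slower EEG waves.
def smooth(x, kernel_len):
    kernel = np.hanning(kernel_len)
    kernel /= kernel.sum()               # unit-gain lowpass kernel
    return np.convolve(x, kernel, mode="same")

fs = 100                                 # assumed sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 1 * t) + np.sin(2 * np.pi * 30 * t)  # slow + fast waves

short_out = smooth(x, 5)    # short kernel: retains the fast 30 Hz activity
long_out = smooth(x, 51)    # long kernel: suppresses it, keeping slow waves
```

A multiscale network runs several such branches in parallel, with the kernels learned rather than fixed, so each branch specializes in a different band of sleep-related activity.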
Extension of the visibility concept for EEG signal processing.
Valentin Debenay, Grégory Turbelin, Jean-Pierre Issartel, Philippe Courmontagne, Amine Chellali, Marie-Hélène Ferrer. Journal of Neural Engineering, published 2025-03-07. DOI: 10.1088/1741-2552/adb994

Objective. Visibility is an intrinsic property of any network of sensors that describes the regions in which its measurement sensitivity is concentrated. Initially introduced to describe the global spatial sensitivity of air pollution monitoring networks, the concept of visibility is extended here to characterize the detection capabilities of electroencephalography (EEG) systems used to measure brain electrical activity. Approach. We represent visibility within the brain as a field of symmetric 3 × 3 matrices satisfying the so-called 'renormalization conditions' and interpreted as second-order tensors. A compact and computationally efficient iterative algorithm is proposed for computing this tensor field. In addition, we explain how to visualize and present the visibility information in an intuitive and easily understandable way. Main results. The visibility concept is exploited to evaluate and compare the ability of three consumer-grade EEG headsets to detect and localize an arbitrary current distribution in the brain. Additionally, visibility is applied to derive an inverse solution to the neuroelectromagnetic inverse problem (NIP) by reconstructing focal brain sources from EEG data. Significance. Although the lead field function approach can be employed to describe the sensitivity of individual electrodes in an EEG headset, this paper extends the sensor network visibility concept to characterize the sensing capabilities of a complete EEG system. The comparison of three consumer-grade EEG headsets shows that the size of the low-visibility brain area decreases as the number of electrodes increases. In addition, we show that source parameters are best estimated by the inverse solution when the sources are oriented toward the direction of maximum visibility.
First-in-human experience performing high-resolution cortical mapping using a novel microelectrode array containing 1024 electrodes.
Peter Konrad, Kate R Gelman, Jesse Lawrence, Sanjay Bhatia, Dister Jacqueline, Radhey Sharma, Elton Ho, Yoon Woo Byun, Craig H Mermel, Benjamin I Rapoport. Journal of Neural Engineering, published 2025-03-07. DOI: 10.1088/1741-2552/adaeed

Objective. Localization of function within the brain and central nervous system is an essential aspect of clinical neuroscience. Classical descriptions of functional neuroanatomy provide a foundation for understanding the functional significance of identifiable anatomic structures. However, individuals exhibit substantial variation, particularly in the presence of disorders that alter tissue structure or impact function. Furthermore, functional regions do not always correspond to identifiable structural features. Understanding function at the level of individual patients, and diagnosing and treating such patients, often requires techniques capable of correlating neural activity with cognition, behavior, and experience in anatomically precise ways. Approach. Recent advances in brain-computer interface technology have given rise to a new generation of electrophysiologic tools for scalable, nondestructive functional mapping with spatial precision in the range of tens to hundreds of micrometers and temporal resolution in the range of tens to hundreds of microseconds. Here we describe our initial intraoperative experience with novel thin-film arrays containing 1024 surface microelectrodes for electrocorticographic mapping in a first-in-human study. Main results. Eight patients undergoing standard electrophysiologic cortical mapping during resection of eloquent-region brain tumors consented to brief sessions of concurrent mapping (micro-electrocorticography) using the novel arrays. Four patients underwent motor mapping using somatosensory evoked potentials (SSEPs) while under general anesthesia, and four underwent awake language mapping, using both standard paradigms and the novel microelectrode array. SSEP phase reversal was identified in the region predicted by conventional mapping, but at higher resolution (0.4 mm) and as a contour rather than a point. In Broca's area (confirmed by direct cortical stimulation), speech planning was apparent in the micro-electrocorticogram as high-amplitude beta-band activity immediately preceding the articulatory event. Significance. These findings support the feasibility and potential clinical utility of incorporating micro-electrocorticography into the intraoperative workflow for systematic cortical mapping of functional brain regions.
EEG-based recognition of hand movement and its parameter.
Yuxuan Yan, Jianguang Li, Mingyue Yin. Journal of Neural Engineering, published 2025-03-06. DOI: 10.1088/1741-2552/adba8a

Objective. Brain-computer interfaces are a cutting-edge technology that enables interaction with external devices by decoding human intentions, and they are highly valuable in medical rehabilitation and human-robot collaboration. Decoding motor intent for motor execution (ME) from electroencephalographic (EEG) signals is still at the feasibility-study stage, and the accuracy of between-subjects ME classification has not yet reached the level required for realistic applications. This paper investigates EEG-based hand movement recognition by analyzing low-frequency time-domain information. Approach. Experiments with four types of hand movements, two force-parameter tasks (picking up and pushing), and a four-target directional displacement task were designed and executed, and EEG data were collected from thirteen healthy volunteers. A sliding-window approach was used to expand the dataset and address overfitting on the EEG signals. Further, a CNN-BiLSTM model, an end-to-end serial combination of a convolutional neural network (CNN) and a bidirectional long short-term memory network (BiLSTM), was constructed to classify and recognize hand movements from the raw EEG data. Main results. According to the experimental results, the model is able to classify the four types of hand movements, picking-up movements, pushing movements, and four-target-direction displacement movements with accuracies of 99.14% ± 0.49%, 99.29% ± 0.11%, 99.23% ± 0.60%, and 98.11% ± 0.23%, respectively. Significance. Comparative tests with alternative deep learning models (LSTM, CNN, EEGNet, CNN-LSTM) demonstrate that the CNN-BiLSTM model achieves practical accuracy for EEG-based hand movement recognition and parameter decoding.
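The sliding-window dataset expansion described above can be sketched as segmenting each multichannel trial into overlapping windows. The window and step sizes here are assumptions for illustration, not values taken from the paper.

```python
import numpy as np

# Split one (channels, samples) trial into overlapping windows, turning a
# single trial into many training examples.
def sliding_windows(eeg, win_len, step):
    """eeg: (channels, samples) -> (n_windows, channels, win_len)."""
    n = (eeg.shape[1] - win_len) // step + 1
    return np.stack([eeg[:, i * step:i * step + win_len] for i in range(n)])

rng = np.random.default_rng(0)
eeg = rng.standard_normal((32, 1000))                 # one 32-channel trial
windows = sliding_windows(eeg, win_len=200, step=100)  # 50% overlap
```

Each window inherits its trial's label, so a modest recording session yields enough examples to train a deep model without severe overfitting.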