{"title":"利用CSP-TP特征融合增强脑机接口通信的基于脑电图的想象语音识别","authors":"Haresh M.V., Kannadasan K., Shameedha Begum B.","doi":"10.1016/j.bbr.2025.115652","DOIUrl":null,"url":null,"abstract":"<div><h3>Background:</h3><div>Imagined speech has emerged as a promising paradigm for intuitive control of brain-computer interface (BCI)-based communication systems, providing a means of communication for individuals with severe brain disabilities. In this work, a non-invasive electroencephalogram (EEG)-based automated imagined speech recognition model was proposed to assist communication to convey the individual’s intentions or commands. The proposed approach uses Common Spatial Patterns (CSP) and Temporal Patterns (TP) for feature extraction, followed by feature fusion to capture both spatial and temporal dynamics in EEG signals. This fusion of the CSP and TP domains enhances the discriminative power of the extracted features, leading to improved classification accuracy.</div></div><div><h3>New method:</h3><div>An EEG data set was collected from 15 subjects while performing an imagined speech task with a set of ten words that are more suitable for paralyzed patients. The EEG signals were preprocessed and a set of statistical characteristics was extracted from the fused CSP and TP domains. Spectral analysis of the signals was performed with respect to ten imagined words to identify the underlying patterns in EEG. Machine learning models, including Linear Discriminant Analysis (LDA), Random Forest (RF), Support Vector Machine (SVM), and Logistic Regression (LR), were employed for pairwise and multiclass classification.</div></div><div><h3>Results:</h3><div>The proposed model achieved average classification accuracies of 83.83% <span><math><mo>±</mo></math></span> 5.94 and 64.58% <span><math><mo>±</mo></math></span> 10.43 and maximum accuracies of 97.78% and 79.22% for pairwise and multiclass classification, respectively. 
These results demonstrate the effectiveness of the CSP-TP feature fusion approach, outperforming existing state-of-the-art methods in imagined speech recognition.</div></div><div><h3>Conclusion:</h3><div>The findings suggest that EEG-based automatic imagined speech recognition (AISR) using CSP and TP techniques has significant potential for use in BCI-based assistive technologies, offering a more natural and intuitive means of communication for individuals with severe communication limitations.</div></div>","PeriodicalId":8823,"journal":{"name":"Behavioural Brain Research","volume":"493 ","pages":"Article 115652"},"PeriodicalIF":2.6000,"publicationDate":"2025-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"An EEG-based imagined speech recognition using CSP-TP feature fusion for enhanced BCI communication\",\"authors\":\"Haresh M.V., Kannadasan K., Shameedha Begum B.\",\"doi\":\"10.1016/j.bbr.2025.115652\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><h3>Background:</h3><div>Imagined speech has emerged as a promising paradigm for intuitive control of brain-computer interface (BCI)-based communication systems, providing a means of communication for individuals with severe brain disabilities. In this work, a non-invasive electroencephalogram (EEG)-based automated imagined speech recognition model was proposed to assist communication to convey the individual’s intentions or commands. The proposed approach uses Common Spatial Patterns (CSP) and Temporal Patterns (TP) for feature extraction, followed by feature fusion to capture both spatial and temporal dynamics in EEG signals. 
This fusion of the CSP and TP domains enhances the discriminative power of the extracted features, leading to improved classification accuracy.</div></div><div><h3>New method:</h3><div>An EEG data set was collected from 15 subjects while performing an imagined speech task with a set of ten words that are more suitable for paralyzed patients. The EEG signals were preprocessed and a set of statistical characteristics was extracted from the fused CSP and TP domains. Spectral analysis of the signals was performed with respect to ten imagined words to identify the underlying patterns in EEG. Machine learning models, including Linear Discriminant Analysis (LDA), Random Forest (RF), Support Vector Machine (SVM), and Logistic Regression (LR), were employed for pairwise and multiclass classification.</div></div><div><h3>Results:</h3><div>The proposed model achieved average classification accuracies of 83.83% <span><math><mo>±</mo></math></span> 5.94 and 64.58% <span><math><mo>±</mo></math></span> 10.43 and maximum accuracies of 97.78% and 79.22% for pairwise and multiclass classification, respectively. 
These results demonstrate the effectiveness of the CSP-TP feature fusion approach, outperforming existing state-of-the-art methods in imagined speech recognition.</div></div><div><h3>Conclusion:</h3><div>The findings suggest that EEG-based automatic imagined speech recognition (AISR) using CSP and TP techniques has significant potential for use in BCI-based assistive technologies, offering a more natural and intuitive means of communication for individuals with severe communication limitations.</div></div>\",\"PeriodicalId\":8823,\"journal\":{\"name\":\"Behavioural Brain Research\",\"volume\":\"493 \",\"pages\":\"Article 115652\"},\"PeriodicalIF\":2.6000,\"publicationDate\":\"2025-06-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Behavioural Brain Research\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0166432825002384\",\"RegionNum\":3,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"BEHAVIORAL SCIENCES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Behavioural Brain Research","FirstCategoryId":"102","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0166432825002384","RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"BEHAVIORAL SCIENCES","Score":null,"Total":0}
An EEG-based imagined speech recognition using CSP-TP feature fusion for enhanced BCI communication
Background:
Imagined speech has emerged as a promising paradigm for intuitive control of brain-computer interface (BCI)-based communication systems, providing a means of communication for individuals with severe brain disabilities. In this work, a non-invasive electroencephalogram (EEG)-based automated imagined speech recognition model was proposed to assist communication by conveying the individual's intentions or commands. The proposed approach uses Common Spatial Patterns (CSP) and Temporal Patterns (TP) for feature extraction, followed by feature fusion to capture both the spatial and temporal dynamics of EEG signals. Fusing the CSP and TP domains enhances the discriminative power of the extracted features, leading to improved classification accuracy.
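As a concrete illustration of the fusion idea, CSP spatial filtering and a temporal-statistics feature can be sketched in Python with NumPy/SciPy. This is a minimal sketch, not the authors' implementation: the function names, the log-variance CSP features, and the per-channel mean/standard-deviation statistics standing in for the TP features are assumptions for illustration only.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(class_a, class_b, n_filters=4):
    """Compute CSP spatial filters for two classes of EEG trials.
    class_a, class_b: arrays of shape (trials, channels, samples)."""
    # Trace-normalized average covariance per class
    cov_a = np.mean([t @ t.T / np.trace(t @ t.T) for t in class_a], axis=0)
    cov_b = np.mean([t @ t.T / np.trace(t @ t.T) for t in class_b], axis=0)
    # Generalized eigendecomposition: maximize variance for one class,
    # minimize it for the other
    vals, vecs = eigh(cov_a, cov_a + cov_b)
    # Filters from both ends of the spectrum are the most discriminative
    order = np.argsort(vals)
    picks = np.concatenate([order[:n_filters // 2], order[-(n_filters // 2):]])
    return vecs[:, picks].T  # shape: (n_filters, channels)

def csp_features(trial, W):
    """Normalized log-variance of the spatially filtered trial."""
    z = W @ trial
    var = z.var(axis=1)
    return np.log(var / var.sum())

def temporal_features(trial):
    """Per-channel statistics as a stand-in for the TP features."""
    return np.concatenate([trial.mean(axis=1), trial.std(axis=1)])
```

A fused feature vector for one trial would then be `np.concatenate([csp_features(trial, W), temporal_features(trial)])`, combining spatial and temporal descriptors in a single input to the classifier.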
New method:
An EEG data set was collected from 15 subjects while they performed an imagined speech task with a set of ten words chosen for their suitability to paralyzed patients. The EEG signals were preprocessed, and a set of statistical features was extracted from the fused CSP and TP domains. Spectral analysis of the signals was performed for the ten imagined words to identify the underlying patterns in the EEG. Machine learning models, including Linear Discriminant Analysis (LDA), Random Forest (RF), Support Vector Machine (SVM), and Logistic Regression (LR), were employed for pairwise and multiclass classification.
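The classification step with the four listed models could look like the following scikit-learn sketch. The feature matrix here is synthetic and the hyperparameters (kernel, tree count, iteration cap) are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Hypothetical fused CSP-TP feature matrix: one row per trial,
# columns are the concatenated CSP and temporal statistics.
rng = np.random.default_rng(42)
X = rng.standard_normal((60, 20))
y = rng.integers(0, 2, size=60)  # two imagined words (pairwise case)

classifiers = {
    "LDA": LinearDiscriminantAnalysis(),
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
    "SVM": SVC(kernel="rbf"),
    "LR": LogisticRegression(max_iter=1000),
}

# 5-fold cross-validated accuracy for each model
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: {scores.mean():.2%} +/- {scores.std():.2%}")
```

The multiclass case (all ten words) uses the same interface: each of these estimators handles multiclass labels natively or via a built-in one-vs-rest scheme, so only `y` changes.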
Results:
The proposed model achieved average classification accuracies of 83.83% ± 5.94 and 64.58% ± 10.43, and maximum accuracies of 97.78% and 79.22%, for pairwise and multiclass classification, respectively. These results demonstrate the effectiveness of the CSP-TP feature fusion approach, which outperforms existing state-of-the-art methods in imagined speech recognition.
Conclusion:
The findings suggest that EEG-based automatic imagined speech recognition (AISR) using CSP and TP techniques has significant potential for use in BCI-based assistive technologies, offering a more natural and intuitive means of communication for individuals with severe communication limitations.
Journal overview:
Behavioural Brain Research is an international, interdisciplinary journal dedicated to the publication of articles in the field of behavioural neuroscience, broadly defined. Contributions from the entire range of disciplines that comprise the neurosciences, behavioural sciences or cognitive sciences are appropriate, as long as the goal is to delineate the neural mechanisms underlying behaviour. Thus, studies may range from neurophysiological, neuroanatomical, neurochemical or neuropharmacological analysis of brain-behaviour relations, including the use of molecular genetic or behavioural genetic approaches, to studies that involve the use of brain imaging techniques, to neuroethological studies. Reports of original research, of major methodological advances, or of novel conceptual approaches are all encouraged. The journal will also consider critical reviews on selected topics.