EEG-based speech imagery decoding by dynamic hypergraph learning within projected and selected feature subspaces.

IF 3.8
Yibing Li, Zhenye Zhao, Jiangchuan Liu, Yong Peng, Kenneth Camilleri, Wanzeng Kong, Andrzej Cichocki
{"title":"EEG-based speech imagery decoding by dynamic hypergraph learning within projected and selected feature subspaces.","authors":"Yibing Li, Zhenye Zhao, Jiangchuan Liu, Yong Peng, Kenneth Camilleri, Wanzeng Kong, Andrzej Cichocki","doi":"10.1088/1741-2552/adeec8","DOIUrl":null,"url":null,"abstract":"<p><p><i>Objective.</i>Speech imagery is a nascent paradigm that is receiving widespread attention in current brain-computer interface (BCI) research. By collecting the electroencephalogram (EEG) data generated when imagining the pronunciation of a sentence or word in human mind, machine learning methods are used to decode the intention that the subject wants to express. Among existing decoding methods, graph is often used as an effective tool to model the data structure; however, in the field of BCI research, the correlations between EEG samples may not be fully characterized by simple pairwise relationships. Therefore, this paper attempts to employ a more effective data structure to model EEG data.<i>Approach.</i>In this paper, we introduce hypergraph to describe the high-order correlations between samples by viewing feature vectors extracted from each sample as vertices and then connecting them through hyperedges. We also dynamically update the weights of hyperedges, the weights of vertices and the structure of the hypergraph in two transformed subspaces, i.e. projected and feature-weighted subspaces. Accordingly, two dynamic hypergraph learning models, i.e. dynamic hypergraph semi-supervised learning within projected subspace (DHSLP) and dynamic hypergraph semi-supervised learning within selected feature subspace (DHSLF), are proposed for speech imagery decoding.<i>Main results.</i>To validate the proposed models, we performed a series of experiments on two EEG datasets. The obtained results demonstrated that both DHSLP and DHSLF have statistically significant improvements in decoding imagined speech intentions to existing studies. 
Specifically, DHSLP achieved accuracies of 78.40% and 66.64% on the two datasets, while DHSLF achieved accuracies of 71.07% and 63.94%.<i>Significance.</i>Our study indicates the effectiveness of the learned hypergraphs in characterizing the underlying semantic information of imagined contents; besides, interpretable results on quantitatively exploring the discriminative EEG channels in speech imagery decoding are obtained, which lay the foundation for further exploration of the physiological mechanisms during speech imagery.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":3.8000,"publicationDate":"2025-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of neural engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1088/1741-2552/adeec8","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

*Objective.* Speech imagery is a nascent paradigm that is receiving widespread attention in current brain-computer interface (BCI) research. Electroencephalogram (EEG) data are collected while a subject imagines pronouncing a sentence or word, and machine learning methods are then used to decode the intention the subject wants to express. Among existing decoding methods, graphs are often used as an effective tool to model the data structure; however, in BCI research the correlations between EEG samples may not be fully characterized by simple pairwise relationships. This paper therefore employs a more expressive data structure to model EEG data.

*Approach.* We introduce a hypergraph to describe the high-order correlations between samples, viewing the feature vector extracted from each sample as a vertex and connecting vertices through hyperedges. We also dynamically update the hyperedge weights, the vertex weights, and the hypergraph structure in two transformed subspaces, i.e. the projected and feature-weighted subspaces. Accordingly, two dynamic hypergraph learning models, dynamic hypergraph semi-supervised learning within a projected subspace (DHSLP) and dynamic hypergraph semi-supervised learning within a selected feature subspace (DHSLF), are proposed for speech imagery decoding.

*Main results.* To validate the proposed models, we performed a series of experiments on two EEG datasets. The results demonstrate that both DHSLP and DHSLF yield statistically significant improvements over existing studies in decoding imagined speech intentions. Specifically, DHSLP achieved accuracies of 78.40% and 66.64% on the two datasets, while DHSLF achieved accuracies of 71.07% and 63.94%.

*Significance.* Our study indicates the effectiveness of the learned hypergraphs in characterizing the underlying semantic information of imagined content; in addition, interpretable results on quantitatively identifying the discriminative EEG channels in speech imagery decoding are obtained, laying the foundation for further exploration of the physiological mechanisms of speech imagery.
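The hypergraph modeling described in the *Approach* section can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the paper's method: it uses a standard k-nearest-neighbour hyperedge construction (one hyperedge per sample, connecting it to its k nearest neighbours) and the usual normalized hypergraph Laplacian; the paper's dynamic updates of hyperedge weights, vertex weights, and hypergraph structure within projected/selected subspaces are not reproduced here.

```python
import numpy as np

def knn_hypergraph(X, k=3):
    """Build a k-NN hypergraph incidence matrix from sample features.

    Each of the n samples spawns one hyperedge connecting the sample
    to its k nearest neighbours, so H has shape (n vertices, n edges).
    """
    n = X.shape[0]
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    H = np.zeros((n, n))
    for e in range(n):
        members = np.argsort(dists[e])[:k + 1]  # the sample itself + k neighbours
        H[members, e] = 1.0
    return H

def hypergraph_laplacian(H, w=None):
    """Normalized hypergraph Laplacian:
    L = I - Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2},
    where Dv are vertex degrees, De hyperedge degrees, W hyperedge weights.
    """
    n_v, n_e = H.shape
    w = np.ones(n_e) if w is None else w
    Dv = H @ w                      # weighted vertex degrees
    De = H.sum(axis=0)              # hyperedge degrees (vertices per edge)
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(Dv))
    Theta = Dv_inv_sqrt @ H @ np.diag(w / De) @ H.T @ Dv_inv_sqrt
    return np.eye(n_v) - Theta

# Toy usage: 10 "EEG feature vectors" of dimension 4.
X = np.random.RandomState(0).randn(10, 4)
H = knn_hypergraph(X, k=3)
L = hypergraph_laplacian(H)
```

A semi-supervised hypergraph model would then penalize label variation with the quadratic form f^T L f; in the paper this regularization is applied jointly with subspace learning and dynamic weight updates, which this sketch omits.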
