Multi-view graph fusion of self-weighted EEG feature representations for speech imagery decoding.

IF 2.7 · JCR Q2 (Biochemical Research Methods) · CAS Region 4 (Medicine)
Zhenye Zhao, Yibing Li, Yong Peng, Kenneth Camilleri, Wanzeng Kong
DOI: 10.1016/j.jneumeth.2025.110413
Journal of Neuroscience Methods, p. 110413. Published 2025-03-07.
Citations: 0

Abstract

Background: Electroencephalogram (EEG)-based speech imagery is an emerging brain-computer interface paradigm that enables people with speech disabilities to communicate naturally and intuitively with external devices or other people. Currently, the decoding performance in speech imagery research is limited. One reason is that there is still no consensus on which domain features are more discriminative.

New method: To adaptively capture the complementary information in different domain features, we treat each domain as a view and propose a multi-view graph fusion of self-weighted EEG feature representations (MVGSF) model, which learns a consensus graph from multi-view EEG features and decodes imagery intentions based on it. Because the EEG features within each view differ in discriminative ability, MVGSF incorporates a view-dependent feature importance exploration strategy.
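The self-weighted consensus-graph idea can be sketched roughly as follows. This is a minimal illustration, not the paper's actual objective: it assumes each view's sample graph is an RBF similarity matrix and uses a common self-weighting rule (each view's weight inversely proportional to its distance from the consensus), with alternating updates. The function names, kernel choice, and data shapes are hypothetical.

```python
import numpy as np

def rbf_graph(X, gamma=0.1):
    """Pairwise RBF similarity graph for one feature view (samples x features)."""
    sq = np.sum(X ** 2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    return np.exp(-gamma * d2)

def fuse_consensus_graph(views, n_iter=20, eps=1e-8):
    """Self-weighted fusion of per-view graphs into one consensus graph S.

    Alternates between:
      w_v  <-  1 / (2 * ||S - A_v||_F)   (views closer to consensus gain weight)
      S    <-  sum_v w_v * A_v           (weighted average, w normalized)
    """
    graphs = [rbf_graph(X) for X in views]
    S = np.mean(graphs, axis=0)  # initialize consensus with a plain average
    for _ in range(n_iter):
        w = np.array([1.0 / (2.0 * np.linalg.norm(S - A) + eps) for A in graphs])
        w /= w.sum()
        S = sum(wv * A for wv, A in zip(w, graphs))
    return S, w
```

The learned weights make the fusion adaptive: a view whose graph disagrees strongly with the emerging consensus is automatically down-weighted, which is the spirit of the "self-weighted" formulation, although MVGSF additionally learns per-feature importances within each view.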

Results: (1) MVGSF exhibits outstanding performance on two public speech imagery datasets. (2) The consensus graph learned from multi-view features effectively characterizes the relationships among EEG samples in a progressive manner. (3) Some task-related insights are explored, including the feature importance-based identification of critical EEG channels and frequency bands in speech imagery decoding.

Comparison with existing methods: We compared MVGSF with single-view counterparts, other multi-view models, and state-of-the-art models. MVGSF achieved the highest accuracy, with average accuracies of 78.93% on the 2020IBCIC3 dataset and 53.85% on the KaraOne dataset.

Conclusions: MVGSF effectively integrates features from multiple domains to enhance decoding capability. Furthermore, through the learned feature importances, MVGSF contributes to identifying the EEG spatial-frequency patterns involved in speech imagery decoding.

Source journal: Journal of Neuroscience Methods (Medicine - Neuroscience)
CiteScore: 7.10
Self-citation rate: 3.30%
Articles per year: 226
Review time: 52 days
Journal description: The Journal of Neuroscience Methods publishes papers that describe new methods specifically for neuroscience research conducted in invertebrates, vertebrates, or humans. Major methodological improvements or important refinements of established neuroscience methods are also considered for publication. The Journal's scope includes all aspects of contemporary neuroscience research, including anatomical, behavioural, biochemical, cellular, computational, molecular, invasive and non-invasive imaging, optogenetic, and physiological research investigations.