EFDFNet: A multimodal deep fusion network based on feature disentanglement for attention state classification

IF 4.9 · CAS Zone 2 (Medicine) · Q1 ENGINEERING, BIOMEDICAL
Haiqi Xu, Qingshan She, Ming Meng, Yunyuan Gao, Yingchun Zhang
{"title":"EFDFNet: A multimodal deep fusion network based on feature disentanglement for attention state classification","authors":"Haiqi Xu ,&nbsp;Qingshan She ,&nbsp;Ming Meng ,&nbsp;Yunyuan Gao ,&nbsp;Yingchun Zhang","doi":"10.1016/j.bspc.2025.108042","DOIUrl":null,"url":null,"abstract":"<div><div>The classification of attention states utilizing both electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) is pivotal in understanding human cognitive functions. While multimodal algorithms have been explored within brain-computer interface (BCI) research, the integration of modal features often falls short of efficacy. Moreover, comprehensive multimodal classification studies employing deep learning techniques for attention state classification are limited. This paper proposes a novel EEG-fNIRS multimodal deep fusion framework (EFDFNet), which employs fNIRS features to enhance EEG feature disentanglement and uses a deep fusion strategy for effective multimodal feature integration. Additionally, we have developed EMCNet, an attention state classification network for the EEG modality, which combines Mamba and Transformer to optimize the extraction of EEG features. We evaluated our method on two attention state classification datasets and one motor imagery dataset, i.e., mental arithmetic (MA), word generation (WG) and motor imagery (MI). The results show that EMCNet achieved classification accuracies of 86.11%, 79.47% and 75.77% on the MA, WG and MI datasets using only the EEG modality. With multimodal fusion, EFDFNet improved these results to 87.31%, 80.90% and 85.61%, respectively, highlighting the benefits of multimodal fusion. Both EMCNet and EFDFNet deliver state-of-the-art performance and are expected to set new baselines for EEG-fNIRS multimodal fusion.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"109 ","pages":"Article 108042"},"PeriodicalIF":4.9000,"publicationDate":"2025-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Biomedical Signal Processing and Control","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1746809425005531","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
引用次数: 0

Abstract

The classification of attention states using both electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) is pivotal to understanding human cognitive function. Although multimodal algorithms have been explored in brain-computer interface (BCI) research, the integration of features across modalities is often ineffective. Moreover, comprehensive multimodal studies that apply deep learning to attention state classification remain limited. This paper proposes a novel EEG-fNIRS multimodal deep fusion framework (EFDFNet), which uses fNIRS features to enhance EEG feature disentanglement and applies a deep fusion strategy for effective multimodal feature integration. In addition, we develop EMCNet, an attention state classification network for the EEG modality that combines Mamba and Transformer to optimize EEG feature extraction. We evaluated the method on two attention state classification datasets and one motor imagery dataset: mental arithmetic (MA), word generation (WG) and motor imagery (MI). The results show that EMCNet achieved classification accuracies of 86.11%, 79.47% and 75.77% on the MA, WG and MI datasets using only the EEG modality. With multimodal fusion, EFDFNet improved these results to 87.31%, 80.90% and 85.61%, respectively, highlighting the benefits of multimodal fusion. Both EMCNet and EFDFNet deliver state-of-the-art performance and are expected to set new baselines for EEG-fNIRS multimodal fusion.
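Purely as an illustration, and not the authors' implementation, the sketch below shows a minimal PyTorch model in the spirit of the architecture the abstract describes: an EEG branch that pairs a gated sequence block (a simple stand-in for a Mamba-style state-space layer) with a Transformer encoder, an fNIRS branch, and fNIRS-conditioned gating of the EEG features before late fusion and classification. All module names, layer sizes, and input shapes (e.g., 30 EEG channels, 72 fNIRS features) are assumptions made for the example.

```python
# Hypothetical sketch of an EEG-fNIRS fusion classifier (not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class GatedSequenceBlock(nn.Module):
    """Lightweight stand-in for a Mamba-style selective state-space block."""
    def __init__(self, dim):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.proj_in = nn.Linear(dim, 2 * dim)
        self.conv = nn.Conv1d(dim, dim, kernel_size=3, padding=1, groups=dim)
        self.proj_out = nn.Linear(dim, dim)

    def forward(self, x):                      # x: (batch, time, dim)
        h, gate = self.proj_in(self.norm(x)).chunk(2, dim=-1)
        h = self.conv(h.transpose(1, 2)).transpose(1, 2)   # local temporal mixing
        return x + self.proj_out(F.silu(h) * torch.sigmoid(gate))


class EEGBranch(nn.Module):
    """EEG encoder: gated sequence blocks followed by a Transformer encoder."""
    def __init__(self, n_channels, dim=64, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(n_channels, dim)
        self.blocks = nn.Sequential(*[GatedSequenceBlock(dim) for _ in range(n_layers)])
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(enc_layer, num_layers=n_layers)

    def forward(self, eeg):                    # eeg: (batch, time, channels)
        h = self.transformer(self.blocks(self.embed(eeg)))
        return h.mean(dim=1)                   # temporal average pooling -> (batch, dim)


class EEGfNIRSFusionNet(nn.Module):
    """Fuse EEG features with fNIRS features via a learned gate, then classify."""
    def __init__(self, eeg_channels, fnirs_features, dim=64, n_classes=2):
        super().__init__()
        self.eeg_branch = EEGBranch(eeg_channels, dim)
        self.fnirs_branch = nn.Sequential(nn.Linear(fnirs_features, dim), nn.ReLU())
        self.gate = nn.Linear(2 * dim, dim)    # fNIRS-conditioned gating of EEG features
        self.classifier = nn.Linear(2 * dim, n_classes)

    def forward(self, eeg, fnirs):
        e = self.eeg_branch(eeg)               # (batch, dim)
        f = self.fnirs_branch(fnirs)           # (batch, dim)
        e = e * torch.sigmoid(self.gate(torch.cat([e, f], dim=-1)))
        return self.classifier(torch.cat([e, f], dim=-1))


if __name__ == "__main__":
    model = EEGfNIRSFusionNet(eeg_channels=30, fnirs_features=72, n_classes=2)
    eeg = torch.randn(8, 200, 30)              # 8 trials, 200 time steps, 30 EEG channels
    fnirs = torch.randn(8, 72)                 # 8 trials, 72 fNIRS features (assumed)
    print(model(eeg, fnirs).shape)             # -> torch.Size([8, 2])
```

In the actual EFDFNet, the gated block would correspond to a true Mamba layer and the simple sigmoid gating would be replaced by the paper's feature-disentanglement and deep fusion modules; the sketch only illustrates the overall two-branch structure.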
Source journal
Biomedical Signal Processing and Control
Category: Engineering Technology / Engineering: Biomedical
CiteScore: 9.80
Self-citation rate: 13.70%
Articles published: 822
Review time: 4 months
Journal description: Biomedical Signal Processing and Control aims to provide a cross-disciplinary international forum for the interchange of information on research in the measurement and analysis of signals and images in clinical medicine and the biological sciences. Emphasis is placed on contributions dealing with practical, applications-led research on the use of methods and devices in clinical diagnosis, patient monitoring and management. Biomedical Signal Processing and Control reflects the main areas in which these methods are being used and developed at the interface of both engineering and clinical science. The scope of the journal includes relevant review papers, technical notes, short communications and letters. Tutorial papers and special issues will also be published.