EFDFNet: A multimodal deep fusion network based on feature disentanglement for attention state classification
Haiqi Xu, Qingshan She, Ming Meng, Yunyuan Gao, Yingchun Zhang
DOI: 10.1016/j.bspc.2025.108042
Biomedical Signal Processing and Control, Volume 109, Article 108042, published 2025-05-13
Impact factor: 4.9; JCR: Q1 (Engineering, Biomedical)
Citations: 0
Abstract
The classification of attention states using both electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) is pivotal to understanding human cognitive functions. Although multimodal algorithms have been explored in brain-computer interface (BCI) research, the integration of features across modalities is often ineffective. Moreover, comprehensive multimodal classification studies that apply deep learning to attention state classification remain limited. This paper proposes a novel EEG-fNIRS multimodal deep fusion framework (EFDFNet), which employs fNIRS features to enhance EEG feature disentanglement and uses a deep fusion strategy for effective multimodal feature integration. Additionally, we developed EMCNet, an attention state classification network for the EEG modality that combines Mamba and Transformer to optimize the extraction of EEG features. We evaluated our method on two attention state classification datasets and one motor imagery dataset, i.e., mental arithmetic (MA), word generation (WG) and motor imagery (MI). The results show that EMCNet achieved classification accuracies of 86.11%, 79.47% and 75.77% on the MA, WG and MI datasets using only the EEG modality. With multimodal fusion, EFDFNet improved these results to 87.31%, 80.90% and 85.61%, respectively, highlighting the benefits of multimodal fusion. Both EMCNet and EFDFNet deliver state-of-the-art performance and are expected to set new baselines for EEG-fNIRS multimodal fusion.
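The abstract's core idea, using fNIRS features to guide the disentanglement of EEG features into shared and private components before fusion, can be illustrated with a minimal toy sketch. This is not the paper's actual architecture (EMCNet's Mamba/Transformer blocks are omitted); all dimensions, projection weights, and the alignment loss are hypothetical stand-ins chosen only to show the shared/private split and concatenation-based fusion.

```python
import numpy as np

rng = np.random.default_rng(0)

def project(x, w):
    # linear projection followed by a tanh non-linearity
    return np.tanh(x @ w)

# toy dimensions (hypothetical; not specified in the abstract)
n_trials, eeg_dim, fnirs_dim, feat_dim = 8, 64, 16, 10

eeg = rng.standard_normal((n_trials, eeg_dim))      # stand-in for EEG features
fnirs = rng.standard_normal((n_trials, fnirs_dim))  # stand-in for fNIRS features

# disentangle EEG into a modality-shared and a modality-private part
w_shared = rng.standard_normal((eeg_dim, feat_dim)) * 0.1
w_private = rng.standard_normal((eeg_dim, feat_dim)) * 0.1
w_fnirs = rng.standard_normal((fnirs_dim, feat_dim)) * 0.1

eeg_shared = project(eeg, w_shared)
eeg_private = project(eeg, w_private)
fnirs_feat = project(fnirs, w_fnirs)

# fNIRS-guided alignment: one plausible disentanglement objective is to
# pull the shared EEG subspace toward the fNIRS features during training
align_loss = np.mean((eeg_shared - fnirs_feat) ** 2)

# deep fusion by concatenating all components ahead of a classifier head
fused = np.concatenate([eeg_shared, eeg_private, fnirs_feat], axis=1)
print(fused.shape)  # (8, 30)
```

In a trainable version, the alignment term would be minimized jointly with the classification loss, encouraging the shared EEG component to carry information that fNIRS corroborates while the private component retains EEG-specific dynamics.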
About the journal:
Biomedical Signal Processing and Control aims to provide a cross-disciplinary international forum for the interchange of information on research in the measurement and analysis of signals and images in clinical medicine and the biological sciences. Emphasis is placed on contributions dealing with the practical, applications-led research on the use of methods and devices in clinical diagnosis, patient monitoring and management.
Biomedical Signal Processing and Control reflects the main areas in which these methods are being used and developed at the interface of both engineering and clinical science. The scope of the journal is defined to include relevant review papers, technical notes, short communications and letters. Tutorial papers and special issues will also be published.