Mohammed E. Seno, Niladri Maiti, Maulik Patel, Mihirkumar M. Patel, Kalpesh B. Chaudhary, Ashish Pasaya, Babacar Toure
{"title":"基于深度学习和证据理论的EEG-fNIRS信号集成运动图像分类","authors":"Mohammed E. Seno , Niladri Maiti , Maulik Patel , Mihirkumar M. Patel , Kalpesh B. Chaudhary , Ashish Pasaya , Babacar Toure","doi":"10.1016/j.neuri.2025.100214","DOIUrl":null,"url":null,"abstract":"<div><div>To address the limitations of traditional unimodal brain-computer interface BCI) technologies based on electroencephalography (EEG) such as low spatial resolution and high susceptibility to noise an increasing number of neuroscience-driven studies have begun to focus on BCI systems that fuse EEG signals with functional near-infrared spectroscopy (fNIRS) signals. However, integrating these two heterogeneous neurophysiological signals presents significant challenges. In this work, we propose an innovative end-to-end signal fusion method based on deep learning and evidence theory for motor imagery (MI) classification within the neuroscience domain. For EEG signals, spatiotemporal features are extracted using dual-scale temporal convolution and depthwise separable convolution, and a hybrid attention module is introduced to enhance the network's sensitivity to salient neural patterns. For fNIRS signals, spatial convolution across all channels is employed to explore activation differences among brain regions, and parallel temporal convolution combined with a gated recurrent unit (GRU) captures richer temporal dynamics of the hemodynamic response. At the decision fusion stage, decision outputs from both modalities are first quantified using Dirichlet distribution parameter estimation to model uncertainty, followed by a two-layer reasoning process using Dempster-Shafer Theory (DST) to fuse evidence from basic belief assignment (BBA) methods and both modalities. Experimental evaluation on the publicly available TU-Berlin-A dataset demonstrates the effectiveness of the proposed model, achieving an average accuracy of 83.26%, representing a 3.78% improvement over state-of-the-art methods. These results provide new insights and methodologies for neuroscience-inspired multimodal BCI systems integrating EEG and fNIRS signals.</div></div>","PeriodicalId":74295,"journal":{"name":"Neuroscience informatics","volume":"5 3","pages":"Article 100214"},"PeriodicalIF":0.0000,"publicationDate":"2025-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"EEG–fNIRS signal integration for motor imagery classification using deep learning and evidence theory\",\"authors\":\"Mohammed E. Seno , Niladri Maiti , Maulik Patel , Mihirkumar M. Patel , Kalpesh B. Chaudhary , Ashish Pasaya , Babacar Toure\",\"doi\":\"10.1016/j.neuri.2025.100214\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>To address the limitations of traditional unimodal brain-computer interface BCI) technologies based on electroencephalography (EEG) such as low spatial resolution and high susceptibility to noise an increasing number of neuroscience-driven studies have begun to focus on BCI systems that fuse EEG signals with functional near-infrared spectroscopy (fNIRS) signals. However, integrating these two heterogeneous neurophysiological signals presents significant challenges. In this work, we propose an innovative end-to-end signal fusion method based on deep learning and evidence theory for motor imagery (MI) classification within the neuroscience domain. 
For EEG signals, spatiotemporal features are extracted using dual-scale temporal convolution and depthwise separable convolution, and a hybrid attention module is introduced to enhance the network's sensitivity to salient neural patterns. For fNIRS signals, spatial convolution across all channels is employed to explore activation differences among brain regions, and parallel temporal convolution combined with a gated recurrent unit (GRU) captures richer temporal dynamics of the hemodynamic response. At the decision fusion stage, decision outputs from both modalities are first quantified using Dirichlet distribution parameter estimation to model uncertainty, followed by a two-layer reasoning process using Dempster-Shafer Theory (DST) to fuse evidence from basic belief assignment (BBA) methods and both modalities. Experimental evaluation on the publicly available TU-Berlin-A dataset demonstrates the effectiveness of the proposed model, achieving an average accuracy of 83.26%, representing a 3.78% improvement over state-of-the-art methods. These results provide new insights and methodologies for neuroscience-inspired multimodal BCI systems integrating EEG and fNIRS signals.</div></div>\",\"PeriodicalId\":74295,\"journal\":{\"name\":\"Neuroscience informatics\",\"volume\":\"5 3\",\"pages\":\"Article 100214\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-06-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Neuroscience informatics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2772528625000299\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neuroscience informatics","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2772528625000299","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
EEG–fNIRS signal integration for motor imagery classification using deep learning and evidence theory
To address the limitations of traditional unimodal brain-computer interface (BCI) technologies based on electroencephalography (EEG), such as low spatial resolution and high susceptibility to noise, an increasing number of neuroscience-driven studies have begun to focus on BCI systems that fuse EEG signals with functional near-infrared spectroscopy (fNIRS) signals. However, integrating these two heterogeneous neurophysiological signals presents significant challenges. In this work, we propose an innovative end-to-end signal fusion method based on deep learning and evidence theory for motor imagery (MI) classification within the neuroscience domain. For EEG signals, spatiotemporal features are extracted using dual-scale temporal convolution and depthwise separable convolution, and a hybrid attention module is introduced to enhance the network's sensitivity to salient neural patterns. For fNIRS signals, spatial convolution across all channels is employed to explore activation differences among brain regions, while parallel temporal convolution combined with a gated recurrent unit (GRU) captures richer temporal dynamics of the hemodynamic response. At the decision fusion stage, the decision outputs of both modalities are first quantified as Dirichlet distribution parameters to model uncertainty, followed by a two-layer reasoning process based on Dempster-Shafer Theory (DST) that fuses evidence across basic belief assignment (BBA) methods and across the two modalities. Experimental evaluation on the publicly available TU-Berlin-A dataset demonstrates the effectiveness of the proposed model, which achieves an average accuracy of 83.26%, a 3.78% improvement over state-of-the-art methods. These results provide new insights and methodologies for neuroscience-inspired multimodal BCI systems integrating EEG and fNIRS signals.
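The decision-fusion stage described above (Dirichlet-based uncertainty quantification followed by Dempster-Shafer combination of the two modalities) can be illustrated with a small numerical sketch. The code below is not the authors' implementation; it is a minimal Python example assuming the common subjective-logic convention in which per-class evidence e_k defines Dirichlet parameters alpha_k = e_k + 1, belief masses b_k = e_k / S, and an uncertainty mass u = K / S (with S the Dirichlet strength), and in which the EEG and fNIRS outputs are then combined by Dempster's rule over the singleton classes plus the full frame. The evidence vectors are hypothetical.

```python
import numpy as np

def evidence_to_belief(evidence):
    """Map non-negative per-class evidence to Dirichlet-based belief masses
    and an uncertainty mass (subjective-logic convention, assumed here)."""
    alpha = evidence + 1.0            # Dirichlet parameters alpha_k = e_k + 1
    S = alpha.sum()                   # Dirichlet strength
    belief = evidence / S             # belief mass per class
    uncertainty = len(evidence) / S   # uncertainty mass u = K / S
    return belief, uncertainty

def ds_combine(b1, u1, b2, u2):
    """Dempster's rule of combination for two bodies of evidence whose focal
    elements are the K singleton classes plus the whole frame (uncertainty)."""
    K = len(b1)
    # Conflict: mass assigned to pairs of incompatible singletons
    conflict = sum(b1[i] * b2[j] for i in range(K) for j in range(K) if i != j)
    norm = 1.0 - conflict
    b = (b1 * b2 + b1 * u2 + b2 * u1) / norm
    u = (u1 * u2) / norm
    return b, u

# Hypothetical evidence vectors for a 2-class MI task
eeg_evidence = np.array([4.0, 1.0])    # EEG branch output
fnirs_evidence = np.array([2.5, 0.5])  # fNIRS branch output

b_eeg, u_eeg = evidence_to_belief(eeg_evidence)
b_fnirs, u_fnirs = evidence_to_belief(fnirs_evidence)
b_fused, u_fused = ds_combine(b_eeg, u_eeg, b_fnirs, u_fnirs)
print("fused belief:", b_fused, "fused uncertainty:", u_fused)
```

In this sketch the fused uncertainty shrinks when both modalities agree and concentrates belief on the shared class, which is the intuition behind using DST to arbitrate between EEG and fNIRS decisions of unequal reliability.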
Neuroscience Informatics, Volume 5, Issue 3, Article 100214. Published 2025-06-18. DOI: 10.1016/j.neuri.2025.100214