TFDISNet: Temporal-frequency domain-invariant and domain-specific feature learning network for enhanced auditory attention decoding from EEG signals.

Impact Factor 1.6 · Q3 (Radiology, Nuclear Medicine & Medical Imaging)
Zhongcai He, Yongxiong Wang
Journal: Biomedical Physics & Engineering Express
DOI: 10.1088/2057-1976/ae09b2
Published: 2025-09-30 (Journal Article)
Citations: 0

Abstract

Auditory Attention Decoding (AAD) from Electroencephalogram (EEG) signals presents a significant challenge in brain-computer interface (BCI) research due to the intricate nature of neural patterns. Existing approaches often fail to effectively integrate temporal and frequency domain information, resulting in constrained classification accuracy and robustness. To address these shortcomings, a novel framework, termed the Temporal-Frequency Domain-Invariant and Domain-Specific Feature Learning Network (TFDISNet), is proposed to enhance AAD performance. A dual-branch architecture is utilized to independently extract features from the temporal and frequency domains, which are subsequently fused through an advanced integration strategy. Within the fusion module, shared features, common across both domains, are aligned by minimizing a similarity loss, while domain-specific features, essential for the task, are preserved through the application of a dissimilarity loss. Additionally, a reconstruction loss is employed to ensure that the fused features accurately represent the original signal. These fused features are then subjected to classification, effectively capturing both shared and unique characteristics to improve the robustness and accuracy of AAD. Experimental results show TFDISNet outperforms state-of-the-art models, achieving 97.1% accuracy on the KUL dataset and 88.2% on the DTU dataset with a 2 s window, validated across group, subject-specific, and cross-subject analyses. Component studies confirm that integrating temporal and frequency features boosts performance, with the full TFDISNet surpassing its variants. Its dual-branch design and advanced loss functions establish a robust EEG-based AAD framework, setting a new field standard.
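The fusion module described above combines three training objectives: a similarity loss that aligns the shared features of the temporal and frequency branches, a dissimilarity loss that keeps domain-specific features distinct from the shared ones, and a reconstruction loss that ties the fused representation back to the original signal. The paper's exact formulations are not reproduced here; the sketch below uses common choices for each term (MSE alignment, an orthogonality penalty for dissimilarity, MSE reconstruction) and hypothetical weighting coefficients, purely to illustrate how such a composite objective is assembled.

```python
import numpy as np

rng = np.random.default_rng(0)

def similarity_loss(shared_t, shared_f):
    """Pull the shared temporal and frequency representations together (MSE)."""
    return float(np.mean((shared_t - shared_f) ** 2))

def dissimilarity_loss(shared, specific):
    """Push shared and domain-specific subspaces apart: squared Frobenius
    norm of their cross-correlation (a standard orthogonality penalty)."""
    return float(np.sum((shared.T @ specific) ** 2)) / shared.shape[0]

def reconstruction_loss(x, x_hat):
    """The fused features should still explain the original signal (MSE)."""
    return float(np.mean((x - x_hat) ** 2))

# Toy shapes: a batch of 8 EEG windows with 16-dim branch features.
s_t = rng.standard_normal((8, 16))   # shared features, temporal branch
s_f = rng.standard_normal((8, 16))   # shared features, frequency branch
p_t = rng.standard_normal((8, 16))   # temporal-specific features
x = rng.standard_normal((8, 64))     # original (flattened) input
x_hat = rng.standard_normal((8, 64)) # decoder output

# The weights 0.1 are hypothetical placeholders; in practice they would be
# tuned, and a classification loss on the fused features is added as well.
loss = (similarity_loss(s_t, s_f)
        + 0.1 * dissimilarity_loss(s_t, p_t)
        + 0.1 * reconstruction_loss(x, x_hat))
```

In a full training loop this composite loss would be summed with the cross-entropy classification loss and backpropagated through both branches jointly.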

Source journal: Biomedical Physics & Engineering Express
CiteScore: 2.80
Self-citation rate: 0.00%
Articles published per year: 153
Journal description: BPEX is an inclusive, international, multidisciplinary journal devoted to publishing new research on any application of physics and/or engineering in medicine and/or biology. Characterized by broad geographical coverage and a fast-track peer-review process, relevant topics include all aspects of biophysics, medical physics and biomedical engineering. Papers that are almost entirely clinical or biological in their focus are not suitable. The journal has an emphasis on publishing interdisciplinary work and bringing research fields together, encompassing experimental, theoretical and computational work.