Short-window EEG-based auditory attention decoding for neuroadaptive hearing support for smart healthcare

Ihtiram Raza Khan, Sheng-Lung Peng, Rupali Mahajan, Rajesh Dey
Neuroscience Informatics, vol. 5, no. 3, Article 100222. DOI: 10.1016/j.neuri.2025.100222. Published 2025-07-22.
https://www.sciencedirect.com/science/article/pii/S2772528625000378


Background

Selective auditory attention, the brain's ability to focus on a specific speaker in multi-talker environments, is often compromised in individuals with auditory or neurological disorders. While Auditory Attention Decoding (AAD) using EEG has shown promise in detecting attentional focus, existing models primarily utilize temporal or spectral features, often neglecting the synergistic relationships across time, space, and frequency. This limitation significantly reduces decoding accuracy, particularly in short decision windows, which are crucial for real-time applications like neuro-steered hearing aids. This study aims to enhance short-window AAD performance by fully leveraging multi-dimensional EEG characteristics.
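The "short decision window" setting referenced above amounts to slicing a continuous EEG recording into brief segments and making an attention decision per segment. The abstract does not give preprocessing details, so the following is only an illustrative sketch with assumed parameters (64 channels, 128 Hz sampling, non-overlapping 0.1-second windows):

```python
import numpy as np

def sliding_windows(eeg, fs, win_s=0.1, step_s=0.1):
    """Slice continuous EEG (channels x samples) into short decision
    windows, the unit on which an AAD model makes each decision."""
    win = int(round(win_s * fs))
    step = int(round(step_s * fs))
    n = (eeg.shape[1] - win) // step + 1
    return np.stack([eeg[:, i * step : i * step + win] for i in range(n)])

# hypothetical example: 64-channel recording, 10 s at 128 Hz
x = np.random.randn(64, 1280)
w = sliding_windows(x, fs=128)
print(w.shape)  # (98, 64, 13): 0.1 s at 128 Hz rounds to 13 samples
```

The window/step lengths here are assumptions for illustration; the datasets named in the Results section (KULeuven, DTU) each define their own sampling rates and evaluation protocol.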

Methods

To address this, we propose TSF-AADNet, a novel neural framework that integrates temporal–spatial and frequency–spatial features using dual-branch architectures and advanced attention-based fusion.
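The abstract names the ingredients (two feature branches, attention-based fusion) but not the architecture. The sketch below is NOT TSF-AADNet; it is a minimal NumPy illustration of the general pattern: a temporal-spatial branch and a frequency-spatial branch each produce a feature vector per window, softmax attention weights the two branches, and a small head makes the binary attended-speaker decision. All projections, dimensions, and the random weights are hypothetical stand-ins for learned layers.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def branch_features(x, w):
    """Stand-in for a learned branch: flatten, project, nonlinearity."""
    return np.tanh(x.reshape(x.shape[0], -1) @ w)

# toy batch: 8 decision windows, 64 channels, 13 samples each
x = rng.standard_normal((8, 64, 13))
d = 32  # hypothetical feature dimension

# temporal-spatial branch operates on the raw window
w_ts = rng.standard_normal((64 * 13, d)) * 0.05
f_ts = branch_features(x, w_ts)                       # (8, d)

# frequency-spatial branch operates on a magnitude spectrum per channel
spec = np.abs(np.fft.rfft(x, axis=-1))                # (8, 64, 7)
w_fs = rng.standard_normal((64 * 7, d)) * 0.05
f_fs = branch_features(spec, w_fs)                    # (8, d)

# attention-based fusion: score each branch, softmax, weighted sum
w_att = rng.standard_normal((d, 1)) * 0.1
scores = np.concatenate([f_ts @ w_att, f_fs @ w_att], axis=1)  # (8, 2)
alpha = softmax(scores, axis=1)                                # branch weights
fused = alpha[:, :1] * f_ts + alpha[:, 1:] * f_fs              # (8, d)

# binary head: which of two competing speakers is attended
w_out = rng.standard_normal((d, 2)) * 0.1
pred = softmax(fused @ w_out).argmax(axis=1)                   # (8,)
```

The design point this illustrates is that fusion is learned per window: `alpha` lets the model lean on whichever branch is more informative for a given segment, rather than concatenating features with fixed weights.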

Results

Tested on the KULeuven and DTU datasets, TSF-AADNet achieves 91.8% and 81.1% accuracy, respectively, at 0.1-second decision windows, outperforming the state of the art by up to 7.99%.

Conclusions

These results demonstrate the model's potential in enabling precise, real-time attention tracking for hearing impairment diagnostics and next-generation neuroadaptive auditory prosthetics.