{"title":"基于短窗口脑电图的听觉注意解码用于智能医疗的神经适应性听力支持","authors":"Ihtiram Raza Khan , Sheng-Lung Peng , Rupali Mahajan , Rajesh Dey","doi":"10.1016/j.neuri.2025.100222","DOIUrl":null,"url":null,"abstract":"<div><h3>Background</h3><div>Selective auditory attention the brain's ability to focus on a specific speaker in multi-talker environments is often compromised in individuals with auditory or neurological disorders. While Auditory Attention Decoding (AAD) using EEG has shown promise in detecting attentional focus, existing models primarily utilize temporal or spectral features, often neglecting the synergistic relationships across time, space, and frequency. This limitation significantly reduces decoding accuracy, particularly in short decision windows, which are crucial for real-time applications like neuro-steered hearing aids. This study is to enhance short-window AAD performance by fully leveraging multi-dimensional EEG characteristics.</div></div><div><h3>Methods</h3><div>To address this, we propose TSF-AADNet, a novel neural framework that integrates temporal–spatial and frequency–spatial features using dual-branch architectures and advanced attention-based fusion.</div></div><div><h3>Results</h3><div>Tested on KULeuven and DTU datasets, TSF-AADNet achieves 91.8% and 81.1% accuracy at 0.1-second windows—outperforming the state-of-the-art by up to 7.99%.</div></div><div><h3>Conclusions</h3><div>These results demonstrate the model's potential in enabling precise, real-time attention tracking for hearing impairment diagnostics and next-generation neuroadaptive auditory prosthetics.</div></div>","PeriodicalId":74295,"journal":{"name":"Neuroscience informatics","volume":"5 3","pages":"Article 100222"},"PeriodicalIF":0.0000,"publicationDate":"2025-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Short-window EEG-based auditory attention decoding for neuroadaptive hearing support for smart healthcare\",\"authors\":\"Ihtiram Raza Khan , Sheng-Lung Peng , Rupali Mahajan , Rajesh Dey\",\"doi\":\"10.1016/j.neuri.2025.100222\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><h3>Background</h3><div>Selective auditory attention the brain's ability to focus on a specific speaker in multi-talker environments is often compromised in individuals with auditory or neurological disorders. While Auditory Attention Decoding (AAD) using EEG has shown promise in detecting attentional focus, existing models primarily utilize temporal or spectral features, often neglecting the synergistic relationships across time, space, and frequency. This limitation significantly reduces decoding accuracy, particularly in short decision windows, which are crucial for real-time applications like neuro-steered hearing aids. 
This study is to enhance short-window AAD performance by fully leveraging multi-dimensional EEG characteristics.</div></div><div><h3>Methods</h3><div>To address this, we propose TSF-AADNet, a novel neural framework that integrates temporal–spatial and frequency–spatial features using dual-branch architectures and advanced attention-based fusion.</div></div><div><h3>Results</h3><div>Tested on KULeuven and DTU datasets, TSF-AADNet achieves 91.8% and 81.1% accuracy at 0.1-second windows—outperforming the state-of-the-art by up to 7.99%.</div></div><div><h3>Conclusions</h3><div>These results demonstrate the model's potential in enabling precise, real-time attention tracking for hearing impairment diagnostics and next-generation neuroadaptive auditory prosthetics.</div></div>\",\"PeriodicalId\":74295,\"journal\":{\"name\":\"Neuroscience informatics\",\"volume\":\"5 3\",\"pages\":\"Article 100222\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-07-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Neuroscience informatics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2772528625000378\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neuroscience informatics","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2772528625000378","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Short-window EEG-based auditory attention decoding for neuroadaptive hearing support for smart healthcare
Background
Selective auditory attention, the brain's ability to focus on a specific speaker in multi-talker environments, is often compromised in individuals with auditory or neurological disorders. While Auditory Attention Decoding (AAD) using EEG has shown promise in detecting attentional focus, existing models primarily utilize temporal or spectral features, often neglecting the synergistic relationships across time, space, and frequency. This limitation significantly reduces decoding accuracy, particularly in short decision windows, which are crucial for real-time applications such as neuro-steered hearing aids. This study aims to enhance short-window AAD performance by fully leveraging multi-dimensional EEG characteristics.
Methods
To address this, we propose TSF-AADNet, a novel neural framework that integrates temporal–spatial and frequency–spatial features through a dual-branch architecture and attention-based fusion.
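The abstract does not specify TSF-AADNet's layers, so the sketch below only illustrates the general idea it names: two parallel branches, one operating on a temporal–spatial view of an EEG window and one on a frequency–spatial view, whose embeddings are combined by a learned attention weighting before classification. All layer sizes, kernel shapes, and the particular fusion rule are illustrative assumptions, not the published architecture.

```python
# Minimal PyTorch sketch of a dual-branch EEG classifier with attention-based
# fusion. Every hyperparameter here is an assumption made for illustration.
import torch
import torch.nn as nn


class DualBranchAAD(nn.Module):
    def __init__(self, n_channels=64, d_model=64):
        super().__init__()
        # Temporal-spatial branch: filter along time, then mix across EEG channels.
        self.temporal_branch = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(1, 5), padding=(0, 2)),   # temporal convolution
            nn.Conv2d(16, d_model, kernel_size=(n_channels, 1)),    # spatial convolution
            nn.BatchNorm2d(d_model),
            nn.ELU(),
            nn.AdaptiveAvgPool2d((1, 1)),
        )
        # Frequency-spatial branch: the same pattern applied to a spectral view
        # of the window (e.g. per-channel power in a set of frequency bins).
        self.frequency_branch = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(1, 5), padding=(0, 2)),
            nn.Conv2d(16, d_model, kernel_size=(n_channels, 1)),
            nn.BatchNorm2d(d_model),
            nn.ELU(),
            nn.AdaptiveAvgPool2d((1, 1)),
        )
        # Attention-based fusion: learn a scalar weight per branch embedding.
        self.fusion_attn = nn.Linear(d_model, 1)
        self.classifier = nn.Linear(d_model, 2)  # attended speaker, e.g. left vs. right

    def forward(self, eeg_time, eeg_freq):
        # eeg_time: (batch, channels, samples); eeg_freq: (batch, channels, freq_bins)
        t = self.temporal_branch(eeg_time.unsqueeze(1)).flatten(1)   # (batch, d_model)
        f = self.frequency_branch(eeg_freq.unsqueeze(1)).flatten(1)  # (batch, d_model)
        feats = torch.stack([t, f], dim=1)                           # (batch, 2, d_model)
        weights = torch.softmax(self.fusion_attn(feats), dim=1)      # (batch, 2, 1)
        fused = (weights * feats).sum(dim=1)                         # (batch, d_model)
        return self.classifier(fused)


# Toy usage: a 0.1-s window at an assumed 128 Hz rate is ~13 samples per channel.
model = DualBranchAAD()
logits = model(torch.randn(8, 64, 13), torch.randn(8, 64, 32))
print(logits.shape)  # torch.Size([8, 2])
```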
Results
Tested on the KULeuven and DTU datasets, TSF-AADNet achieves 91.8% and 81.1% accuracy, respectively, with 0.1-second decision windows, outperforming the state-of-the-art by up to 7.99%.
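For context on what a 0.1-second decision window means in practice, the hedged sketch below slices a continuous EEG trial into non-overlapping 0.1-s segments and scores each segment independently. The sampling rate, windowing scheme, and accuracy definition are assumptions for illustration; the abstract only states the window length and the resulting scores.

```python
# Illustrative short-window evaluation: per-window accuracy over non-overlapping
# decision windows. The 128 Hz rate and the procedure itself are assumptions.
import numpy as np


def windowed_accuracy(eeg, labels_per_trial, predict_fn, fs=128, win_s=0.1):
    """eeg: (n_trials, n_channels, n_samples); labels_per_trial: (n_trials,)."""
    win = int(round(win_s * fs))  # 0.1 s at 128 Hz -> ~13 samples per window
    correct, total = 0, 0
    for trial, label in zip(eeg, labels_per_trial):
        n_windows = trial.shape[-1] // win
        for k in range(n_windows):
            segment = trial[:, k * win:(k + 1) * win]
            correct += int(predict_fn(segment) == label)
            total += 1
    return correct / total


# Toy usage with a random "decoder" standing in for a trained model.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((4, 64, 128 * 60))   # four one-minute trials
labels = rng.integers(0, 2, size=4)            # attended speaker per trial
acc = windowed_accuracy(eeg, labels, lambda seg: rng.integers(0, 2))
print(f"chance-level decision-window accuracy: {acc:.2f}")
```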
Conclusions
These results demonstrate the model's potential to enable precise, real-time attention tracking for hearing-impairment diagnostics and next-generation neuroadaptive auditory prosthetics.