Neuroanatomy-Informed Brain-Machine Hybrid Intelligence for Robust Acoustic Target Detection

Jianting Shi, Jiaqi Wang, Weijie Fei, Aberham Genetu Feleke, Luzheng Bi

Cyborg and Bionic Systems (Washington, D.C.), vol. 6, article 0438. Published 2025-10-17.
DOI: https://doi.org/10.34133/cbsystems.0438
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12531490/pdf/
Abstract
Sound target detection (STD) plays a critical role in modern acoustic sensing systems. However, existing automated STD methods show poor robustness and limited generalization, especially under low signal-to-noise ratio (SNR) conditions or when processing previously unencountered sound categories. To overcome these limitations, we first propose a brain-computer interface (BCI)-based STD method that utilizes neural responses to auditory stimuli. Our approach features the Triple-Region Spatiotemporal Dynamics Attention Network (Tri-SDANet), an electroencephalogram (EEG) decoding model incorporating neuroanatomical priors derived from EEG source analysis to enhance decoding accuracy and provide interpretability in complex auditory scenes. Recognizing the inherent limitations of stand-alone BCI systems (notably their high false alarm rates), we further develop an adaptive confidence-based brain-machine fusion strategy that intelligently combines decisions from both the BCI and conventional acoustic detection models. This hybrid approach effectively merges the complementary strengths of neural perception and acoustic feature learning. We validate the proposed method through experiments with 16 participants. Experimental results demonstrate that the Tri-SDANet achieves state-of-the-art performance in neural decoding under complex acoustic conditions. Moreover, the hybrid system maintains reliable detection performance at low SNR levels while exhibiting remarkable generalization to unseen target classes. In addition, source-level EEG analysis reveals distinct brain activation patterns associated with target perception, offering neuroscientific validation for our model design. This work pioneers a neuro-acoustic fusion paradigm for robust STD, offering a generalizable solution for real-world applications through the integration of noninvasive neural signals with artificial intelligence.
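The abstract describes an adaptive confidence-based fusion of BCI and acoustic detector decisions without giving the rule itself. The minimal sketch below illustrates one plausible form of such confidence-gated fusion; the function name, thresholds, and weighting scheme are all hypothetical assumptions for illustration, not the authors' actual method.

```python
def fuse_decisions(p_bci, p_acoustic, conf_threshold=0.8):
    """Hypothetical confidence-gated fusion of two detectors.

    p_bci, p_acoustic: each model's estimated P(target), in [0, 1].
    If one model is highly confident (probability far from 0.5),
    its decision is used directly; otherwise the two probabilities
    are blended with confidence-proportional weights.
    """
    # Map distance from the 0.5 decision boundary to a confidence in [0, 1].
    c_bci = abs(p_bci - 0.5) * 2
    c_acoustic = abs(p_acoustic - 0.5) * 2

    if max(c_bci, c_acoustic) >= conf_threshold:
        # One model is decisive: defer to the more confident one.
        fused = p_bci if c_bci >= c_acoustic else p_acoustic
    else:
        # Both are uncertain: confidence-weighted average.
        w = c_bci / (c_bci + c_acoustic + 1e-9)
        fused = w * p_bci + (1 - w) * p_acoustic

    return fused, fused >= 0.5

# A confident BCI detection overrides an uncertain acoustic score:
prob, is_target = fuse_decisions(0.95, 0.40)
```

Gating on per-decision confidence is one common way to exploit the complementary strengths the abstract mentions: the acoustic model dominates at high SNR, while the neural channel can rescue detections the acoustic model is unsure about.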