Dual-Brain EEG Decoding for Target Detection via Joint Learning in Shared and Private Spaces
Bingfeng He; Li Zhu; Junhua Li; Andrzej Cichocki; Wanzeng Kong
IEEE Signal Processing Letters, vol. 32, pp. 3500-3504, published 2025-08-22
DOI: 10.1109/LSP.2025.3601978. URL: https://ieeexplore.ieee.org/document/11134569/
Citations: 0
Abstract
Hyperscanning enables simultaneous electroencephalography (EEG) recording from multiple individuals, allowing collaborative brain activity to reduce individual biases and enhance the reliability of decision-making. Decoding such collaborative paradigm tasks has traditionally relied solely on simple fusion methods applied to each individual's brain activity, without incorporating cross-brain coupling information. Inspired by hyperscanning studies of social interaction that report enhanced inter-brain synchrony during collaborative tasks, we propose a joint learning framework for dual-brain target detection that integrates a shared space construction module and a shared feature-guided module. The shared space construction module applies brain-to-brain coupling analysis to identify cross-brain synchrony, and the shared feature-guided module then integrates shared and private features through a multi-head fusion mechanism for joint representation learning. Experimental results show an average 10% improvement in balanced accuracy across 12 participant groups compared to traditional single-brain approaches, with some groups achieving up to a 5% gain over state-of-the-art (SOTA) methods. Notably, higher-performing groups exhibit stronger inter-brain coupling and more synchronized target-related responses. These findings advance the development of collaborative brain-computer interface (BCI) systems for more robust and effective target detection.
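The abstract does not specify which brain-to-brain coupling measure the shared space construction module uses; a common choice in hyperscanning work for quantifying inter-brain synchrony is the phase-locking value (PLV) between paired channels. The following is a minimal sketch, assuming narrow-band-filtered single-channel signals; the function name and random data are illustrative, not taken from the paper:

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """Phase-locking value between two equal-length signals.

    PLV = |mean(exp(i * (phi_x - phi_y)))|, ranging from 0 (no phase
    coupling) to 1 (perfect phase locking).
    """
    phi_x = np.angle(hilbert(x))  # instantaneous phase of signal x
    phi_y = np.angle(hilbert(y))  # instantaneous phase of signal y
    return np.abs(np.mean(np.exp(1j * (phi_x - phi_y))))

# Toy example: PLV between one EEG channel from each participant.
rng = np.random.default_rng(0)
eeg_a = rng.standard_normal(1000)  # participant A, single channel
eeg_b = rng.standard_normal(1000)  # participant B, single channel
print(phase_locking_value(eeg_a, eeg_b))
```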
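Likewise, the exact architecture of the joint learning framework is not given in the abstract. Below is a hedged PyTorch sketch of how private per-subject encoders, a shared cross-brain encoder, and multi-head fusion of shared and private features could be wired together. All layer choices, dimensions, and names (DualBrainFusion, private_a, and so on) are assumptions for illustration, not the authors' implementation:

```python
import torch
import torch.nn as nn

class DualBrainFusion(nn.Module):
    """Hypothetical dual-brain decoder: per-subject private encoders,
    a shared encoder over both subjects' channels, and multi-head
    attention fusing shared and private features."""

    def __init__(self, n_channels=64, d_model=128, n_heads=4, n_classes=2):
        super().__init__()

        def encoder(in_ch):
            return nn.Sequential(
                nn.Conv1d(in_ch, d_model, kernel_size=7, padding=3),
                nn.BatchNorm1d(d_model),
                nn.ELU(),
                nn.AdaptiveAvgPool1d(1),
                nn.Flatten(),  # -> (batch, d_model)
            )

        self.private_a = encoder(n_channels)      # private space, subject A
        self.private_b = encoder(n_channels)      # private space, subject B
        self.shared = encoder(2 * n_channels)     # shared space over both brains
        self.fusion = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.classifier = nn.Linear(d_model, n_classes)

    def forward(self, xa, xb):
        # xa, xb: (batch, n_channels, n_samples) EEG epochs, one per subject
        tokens = torch.stack(
            [self.shared(torch.cat([xa, xb], dim=1)),  # shared feature
             self.private_a(xa),                       # private feature, A
             self.private_b(xb)], dim=1)               # (batch, 3, d_model)
        fused, _ = self.fusion(tokens, tokens, tokens)  # multi-head fusion
        return self.classifier(fused.mean(dim=1))       # target/non-target logits

model = DualBrainFusion()
xa, xb = torch.randn(8, 64, 256), torch.randn(8, 64, 256)
print(model(xa, xb).shape)  # torch.Size([8, 2])
```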
Journal description:
The IEEE Signal Processing Letters is a monthly, archival publication designed to provide rapid dissemination of original, cutting-edge ideas and timely, significant contributions in signal, image, speech, language, and audio processing. Papers published in the Letters can be presented within one year of their appearance at signal processing conferences such as ICASSP, GlobalSIP, and ICIP, and also at several workshops organized by the Signal Processing Society.