Dynamic Hierarchical Convolutional Attention Network for Recognizing Motor Imagery Intention
Bin Lu, Fuwang Wang, Junxiang Chen, Guilin Wen, Changchun Hua, Rongrong Fu
IEEE Transactions on Cybernetics, published 2025-03-25. DOI: 10.1109/TCYB.2025.3549583 (https://doi.org/10.1109/TCYB.2025.3549583)
Citations: 0
Abstract
The neural activity patterns of localized brain regions are crucial for recognizing brain intentions. However, existing electroencephalogram (EEG) decoding models, especially those based on deep learning, focus predominantly on global spatial features and neglect valuable local information, which can lead to suboptimal performance. This study therefore proposed a dynamic hierarchical convolutional attention network (DH-CAN) that comprehensively learns discriminative information from both the global and local spatial domains, as well as from the time-frequency domain of EEG signals. Specifically, a multiscale convolutional block was designed to dynamically capture time-frequency information. EEG channels were mapped to different brain regions according to the neural activity patterns of motor imagery. Global and local spatial features were then extracted hierarchically to fully exploit the discriminative information. Furthermore, regional connectivity was modeled with a graph attention network and incorporated into the local spatial features. In particular, network parameters were shared between symmetrical brain regions to better capture asymmetrical motor imagery patterns. Finally, the learned multilevel features were integrated through a high-level fusion layer. Extensive experiments on two datasets demonstrated that the proposed model performed well across multiple evaluation metrics and exceeded existing benchmark methods. These findings suggest that the proposed model offers a novel perspective for EEG decoding research.
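As a rough illustration only, the sketch below shows in PyTorch how two of the components named in the abstract might look: a multiscale temporal convolution block and a single-head graph-attention layer over region-level feature vectors. The layer sizes, kernel lengths, region count, and channel counts are assumptions, and the hierarchical fusion layer and symmetric-region parameter sharing are omitted; this is not the authors' implementation.

```python
# Hypothetical sketch, not the paper's released code. All hyperparameters
# (kernel sizes, channel/region counts) are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiScaleConvBlock(nn.Module):
    """Run several temporal convolutions with different kernel lengths in
    parallel and concatenate their feature maps along the channel axis."""

    def __init__(self, in_channels: int, out_channels: int,
                 kernel_sizes=(15, 31, 63)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv1d(in_channels, out_channels, k, padding=k // 2)
            for k in kernel_sizes
        ])

    def forward(self, x):          # x: (batch, channels, time)
        return torch.cat([branch(x) for branch in self.branches], dim=1)


class GraphAttentionLayer(nn.Module):
    """Single-head graph attention over region-level feature vectors,
    loosely following the standard GAT formulation."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.a = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, h):          # h: (batch, regions, in_dim)
        z = self.W(h)                                   # (B, R, out_dim)
        B, R, D = z.shape
        zi = z.unsqueeze(2).expand(B, R, R, D)          # source features
        zj = z.unsqueeze(1).expand(B, R, R, D)          # neighbour features
        e = F.leaky_relu(self.a(torch.cat([zi, zj], dim=-1)).squeeze(-1))
        alpha = torch.softmax(e, dim=-1)                # attention weights
        return F.elu(torch.bmm(alpha, z))               # aggregated features


if __name__ == "__main__":
    eeg = torch.randn(8, 22, 1000)          # batch of 22-channel EEG trials
    feats = MultiScaleConvBlock(22, 16)(eeg)
    print(feats.shape)                      # (8, 48, 1000)

    regions = torch.randn(8, 6, 64)         # 6 region-level feature vectors
    out = GraphAttentionLayer(64, 32)(regions)
    print(out.shape)                        # (8, 6, 32)
```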
Journal description:
The scope of the IEEE Transactions on Cybernetics includes computational approaches to the field of cybernetics. Specifically, the Transactions welcomes papers on communication and control across machines, or between machines, humans, and organizations. The scope includes such areas as computational intelligence, computer vision, neural networks, genetic algorithms, machine learning, fuzzy systems, cognitive systems, decision making, and robotics, to the extent that they contribute to the theme of cybernetics or demonstrate an application of cybernetics principles.