GSAformer: Group sparse attention transformer for functional brain network analysis

Lina Zhou, Xiao Jiang, Mengxue Pang, Jinshan Zhang, Shuai Zhang, Chris Nugent, Lishan Qiao

Neural Networks, vol. 192, Article 107891. Published 2025-07-22. DOI: 10.1016/j.neunet.2025.107891
https://www.sciencedirect.com/science/article/pii/S0893608025007725
Citations: 0
Abstract
Functional brain network (FBN) analysis based on fMRI has proven effective for neurological/mental disorder classification. Traditional methods usually separate FBN construction from the subsequent classification task, resulting in a suboptimal solution. Recently, transformers, known for their attention mechanisms, have shown strong performance in various tasks, including brain disorder classification. However, existing methods treat subjects independently, limiting the capture of their shared patterns. To address these issues, we propose GSAformer, a group sparse attention-based model for brain disorder diagnosis. Specifically, we first construct brain connectivity matrices for subjects using Pearson’s correlation, and then incorporate a group sparse prior into the transformer to explicitly model inter-subject relationships. Group sparsity is applied across attention matrices to reduce parameters, improve generalization, and enhance interpretability. A maximum mean discrepancy (MMD) constraint is also introduced to ensure consistency between the learned attention matrices and the group sparse brain networks. Our framework integrates population-level prior knowledge and supports end-to-end adaptive learning, while maintaining computational complexity on par with the standard Transformer and demonstrating an enhanced capability to capture group-sparse topological structure across the population. We evaluate GSAformer on three public datasets for brain disorder classification. Compared with the standard Transformer, the proposed method improves classification performance by 3.8%, 4.1%, and 14.7% on the three datasets, respectively.
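The abstract names three concrete ingredients: Pearson-correlation FBN construction, a group sparsity penalty shared across subjects' attention matrices, and an MMD consistency term. The sketch below shows one plausible form for each, assuming an L2,1-style group norm and a Gaussian-kernel MMD; all function names and weights are illustrative assumptions, not the authors' implementation.

```python
import torch

def pearson_fbn(ts: torch.Tensor) -> torch.Tensor:
    """Build a functional brain network via Pearson's correlation.
    ts: (n_rois, n_timepoints) ROI time series for one subject."""
    ts = ts - ts.mean(dim=1, keepdim=True)          # center each ROI series
    ts = ts / (ts.norm(dim=1, keepdim=True) + 1e-8) # unit-normalize rows
    return ts @ ts.T                                 # (n_rois, n_rois) correlations

def group_sparse_penalty(attn: torch.Tensor) -> torch.Tensor:
    """L2,1-style group sparsity over subjects' attention matrices.
    attn: (n_subjects, n_rois, n_rois). Each edge (i, j) forms one group
    across subjects, so the same edges tend to be zeroed for everyone."""
    return attn.norm(dim=0).sum()  # per-edge L2 norm across subjects, summed over edges

def mmd_gaussian(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Simple biased MMD estimate with a Gaussian kernel.
    x, y: (n, d) flattened matrices (e.g. attention vs. group sparse FBN)."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()
```

Under these assumptions, the two regularizers would simply be added to the classification loss with hypothetical weights, e.g. `loss = ce_loss + lam1 * group_sparse_penalty(attn) + lam2 * mmd_gaussian(attn.flatten(1), fbn.flatten(1))`; neither term adds asymptotic cost beyond the standard attention computation.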
Journal introduction:
Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically inspired artificial intelligence.