GoFormer: A GoLPP inspired transformer for functional brain graph learning and classification

Mengxue Pang, Lina Zhou, Xueying Yao, Jun Yang, Jinshan Zhang, Yining Zhang, Limei Zhang, Lishan Qiao

Neural Networks, Volume 193, Article 108081. Published 2025-09-05. DOI: 10.1016/j.neunet.2025.108081
Abstract
Graphs have great potential for modelling complex relationships among data, and learning a high-quality graph usually plays a critical role in many downstream tasks. In 2010, we proposed graph-optimized locality preserving projections (GoLPP), the first work to learn a graph adaptively within the dimensionality reduction task, which exhibited better performance than methods based on predefined graphs. Recently, graph learning has been re-highlighted, partly due to the popularity of the Transformer, which leverages the self-attention mechanism to model the relationships between tokens through an updatable graph. Despite its great success, the Transformer has a weak inductive bias and needs to be trained on large-scale datasets. In some practical scenarios such as intelligent medicine, however, it is difficult to collect sufficient data to support Transformer training. By revisiting GoLPP, we find that its iterative process alternating between the graph and the projection matrix corresponds precisely to the working mechanism of the self-attention modules in the Transformer, which inspires us to design a novel method, GoFormer, that aims to get the best of both worlds. Specifically, GoFormer not only inherits the power of the Transformer for handling sequence data in an end-to-end manner, but also respects the principle of parsimony by integrating the parameter updating and sharing mechanism implicitly involved in GoLPP. Compared with the Transformer, GoFormer can mitigate the risk of overfitting and offers better interpretability for medical applications. To evaluate its effectiveness, we use GoFormer to learn and classify brain graphs based on functional magnetic resonance imaging (fMRI) data for the early diagnosis of neurological disorders. Experimental results demonstrate that GoFormer outperforms baseline and state-of-the-art methods.
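As a rough illustration of the correspondence the abstract draws between GoLPP's graph update and self-attention (a minimal sketch of our own reading, not the paper's implementation), the NumPy snippet below contrasts a standard scaled dot-product attention graph with a GoLPP-style graph built from pairwise distances in a projected space. The matrices W_q, W_k, the projection P, and the bandwidth eta are illustrative placeholders, not quantities taken from the paper.

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable row-wise softmax."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention_graph(X, W_q, W_k):
    """Self-attention viewed as an updatable, row-stochastic graph over tokens:
    A_ij = softmax_j( <x_i W_q, x_j W_k> / sqrt(d) ).
    """
    Q, K = X @ W_q, X @ W_k
    d = Q.shape[-1]
    return softmax(Q @ K.T / np.sqrt(d), axis=-1)

def golpp_style_graph(X, P, eta=1.0):
    """GoLPP-style graph update (hypothetical sketch of the analogy):
    given the current projection P, connect samples by a softmax over
    negative pairwise squared distances in the projected space,
    S_ij proportional to exp( -||P^T x_i - P^T x_j||^2 / eta ).
    """
    Z = X @ P                                              # projected samples
    D = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)     # pairwise squared distances
    return softmax(-D / eta, axis=-1)

# Toy example: 5 tokens/samples with 8 features, projected to 4 dimensions.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 8))
A = attention_graph(X, rng.standard_normal((8, 4)), rng.standard_normal((8, 4)))
S = golpp_style_graph(X, rng.standard_normal((8, 4)))
print(A.shape, S.shape)  # both (5, 5): row-stochastic graphs over the same items
```

In GoLPP the two steps alternate, re-estimating the graph from the current projection and the projection from the current graph; the abstract's point is that this alternation mirrors how self-attention recomputes a token graph from learned projections at every layer.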
Journal overview:
Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. The journal invites submissions covering all aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussion between biology and technology, it aims to encourage the development of biologically inspired artificial intelligence.