Neural Networks · Pub Date: 2025-09-16 · DOI: 10.1016/j.neunet.2025.108119
Yu Xie, Yu Chang, Ming Li, A.K. Qin, Xialei Zhang
"AutoSGRL: Automated framework construction for self-supervised graph representation learning"
Automated machine learning (AutoML) is a promising approach to building machine learning frameworks without human assistance and has attracted significant attention across the computational intelligence community. Although interest in graph neural architecture search is growing, current research focuses on the design of semi-supervised or supervised graph neural networks. Motivated by this gap, we propose what is, to the best of our knowledge, the first method for automatically constructing flexible self-supervised graph representation learning frameworks, referred to as AutoSGRL. Building on existing self-supervised graph contrastive learning methods, AutoSGRL establishes a framework search space that encompasses data augmentation strategies and proxy tasks for constructing graph contrastive learning frameworks, as well as the hyperparameters required for model training. We then implement an automatic search engine based on genetic algorithms, which constructs multiple self-supervised graph representation learning frameworks as the initial population. By simulating biological evolution through selection, crossover, and mutation, the search engine iteratively evolves the population to identify high-performing frameworks and optimal hyperparameters. Empirical studies demonstrate that AutoSGRL achieves comparable or even better performance than state-of-the-art manually designed self-supervised graph representation learning methods and semi-supervised graph neural architecture search methods. (Neural Networks 194, Article 108119)
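The genetic search loop described above (population of candidate frameworks, then selection, crossover, and mutation) can be sketched as follows. The search-space entries, fitness values, and operator settings here are illustrative stand-ins, not the paper's actual configuration, and the fitness function is a toy substitute for training and evaluating each framework.

```python
import random

# Hypothetical search space: AutoSGRL's real space covers augmentations,
# proxy tasks, and training hyperparameters; these options are illustrative.
SPACE = {
    "augmentation": ["edge_drop", "feature_mask", "node_drop"],
    "proxy_task": ["node_node", "node_graph", "graph_graph"],
    "lr": [1e-3, 5e-3, 1e-2],
}

def random_framework():
    return {k: random.choice(v) for k, v in SPACE.items()}

def fitness(fw):
    # Toy stand-in for "train the framework and evaluate downstream accuracy".
    return ({"edge_drop": 0.3, "feature_mask": 0.2, "node_drop": 0.1}[fw["augmentation"]]
            + {"node_node": 0.1, "node_graph": 0.3, "graph_graph": 0.2}[fw["proxy_task"]]
            + {1e-3: 0.2, 5e-3: 0.3, 1e-2: 0.1}[fw["lr"]])

def crossover(a, b):
    # Each gene (framework component) comes from either parent.
    return {k: random.choice([a[k], b[k]]) for k in SPACE}

def mutate(fw, rate=0.2):
    return {k: (random.choice(v) if random.random() < rate else fw[k])
            for k, v in SPACE.items()}

def evolve(pop_size=20, generations=15):
    pop = [random_framework() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # selection: keep the top half
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

random.seed(0)
best = evolve()
```

In the paper the fitness evaluation is the expensive step (training each candidate framework), which is why the evolutionary loop, rather than exhaustive search, is used to explore the space.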
Neural Networks · Pub Date: 2025-09-16 · DOI: 10.1016/j.neunet.2025.108112
Giannis Nikolentzos, Dimitrios Kelesis, Michalis Vazirgiannis
"On the theoretical expressive power of graph transformers for solving graph problems"
In recent years, Transformers have become the dominant neural architecture in natural language processing and computer vision. Their generalization to graphs, so-called Graph Transformers, has recently emerged as a promising alternative to the successful message passing Graph Neural Networks (MPNNs). While the expressive power of MPNNs has been studied intensively in recent years, that of Graph Transformers remains underexplored. Existing results mostly rely on the employed structural/positional encodings rather than on the architecture itself. However, an understanding of the strengths and limitations of Graph Transformers would be valuable both to the scientific community and to practitioners. In this paper, we derive a connection between Graph Transformers and the Congested Clique, a popular model in distributed computing. This connection allows us to translate theoretical results for different graph problems from the latter to the former. We show that under certain conditions, Graph Transformers of depth 2 are Turing universal. We also show that there exist Graph Transformers that can solve problems which cannot be solved by MPNNs. We empirically investigate whether Graph Transformers and MPNNs of depth 2 can solve graph problems on several molecular datasets. Our results demonstrate that Graph Transformers can generally address the underlying tasks, while MPNNs are incapable of learning any information about the graph. (Neural Networks 194, Article 108112)
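One intuition behind the expressiveness gap, sketched below, is the receptive field: a depth-k MPNN node can only be influenced by its k-hop neighborhood, whereas a single global-attention layer of a Graph Transformer attends to every node. This toy computation illustrates only that intuition, not the paper's Congested Clique construction.

```python
def k_hop_reach(adj, src, k):
    """Nodes whose information can influence `src` after k message-passing rounds."""
    seen = {src}
    frontier = {src}
    for _ in range(k):
        frontier = {m for n in frontier for m in adj[n]}
        seen |= frontier
    return seen

# Path graph 0-1-2-3-4-5
adj = {i: [j for j in (i - 1, i + 1) if 0 <= j <= 5] for i in range(6)}

mpnn_reach = k_hop_reach(adj, 0, 2)   # depth-2 MPNN: only the 2-hop neighborhood
transformer_reach = set(adj)          # one global-attention layer sees every node
```

On this path graph a depth-2 MPNN reading out at node 0 cannot depend on nodes 3-5 at all, while a single transformer layer can, regardless of graph diameter.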
Neural Networks · Pub Date: 2025-09-15 · DOI: 10.1016/j.neunet.2025.108109
Xinrong Yang, Haitao Li
"State-flipped control design for the stabilization of probabilistic Boolean control networks"
Stabilization is a fundamental issue in modern control theory. Over the past decades, significant effort has been invested in deriving necessary and sufficient conditions for verifying the global stabilization of probabilistic Boolean control networks (PBCNs). However, systematic methods and general criteria for exploring local stabilization and determining the domain of attraction of PBCNs are still lacking. Motivated by this research gap, this paper investigates the local state feedback stabilization of PBCNs, including local finite-time state feedback stabilization with probability one (FTSFS) and local state feedback stabilization in distribution (SFSD). First, a sequence of reachable sets with probability one is constructed, based on which the largest domain of attraction for the FTSFS of PBCNs is derived by designing state feedback controllers. Second, by constructing a sequence of reachable sets with positive probability, the largest domain of attraction for the SFSD of PBCNs is determined. Finally, when the largest domain of attraction is not the whole state space, a state-flipped control is designed to achieve global FTSFS or SFSD of PBCNs via the largest domain of attraction. (Neural Networks 194, Article 108109)
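The backward reachable-set construction for probability-one stabilization can be sketched on a toy PBCN. The transition table below is invented for illustration; the fixed-point iteration (a state joins the set if some control sends all of its possible successors into the set) mirrors the abstract's "sequence of reachable sets with probability one".

```python
# Toy PBCN: 4 states (two Boolean variables), 1 Boolean control input.
# trans[(x, u)] = set of possible successor states (the probabilistic choices).
# This transition table is hypothetical, for illustration only.
trans = {
    (0, 0): {0},    (0, 1): {0},
    (1, 0): {0, 1}, (1, 1): {0},
    (2, 0): {3},    (2, 1): {1},
    (3, 0): {3},    (3, 1): {2, 3},
}
states, controls = range(4), (0, 1)

def domain_of_attraction(target):
    """Largest set of states steerable into `target` with probability one:
    iterate S <- S + {x : some control u sends every successor of (x,u) into S}."""
    S = set(target)
    while True:
        S_next = S | {x for x in states
                      if any(trans[(x, u)] <= S for u in controls)}
        if S_next == S:
            return S
        S = S_next

doa = domain_of_attraction({0})
```

Here state 3 lies outside the domain of attraction (no control forces all of its successors inward), which is exactly the situation where the paper's state-flipped control would be introduced to recover global stabilization.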
Neural Networks · Pub Date: 2025-09-15 · DOI: 10.1016/j.neunet.2025.108092
Lars Keuninckx, Matthias Hartmann, Paul Detterer, Ali Safa, Wout Mommen, Ilja Ocket
"On training networks of monostable multivibrator timer neurons"
An important bottleneck in present-day neuromorphic hardware is its reliance on synaptic addition, which limits the achievable degree of parallelization and thus processing throughput. We present a network of monostable multivibrator timers whose synaptic inputs are simply OR-ed together, thus mitigating the synaptic addition bottleneck. Monostable multivibrators are simple timers that are easily implemented using counters in digital hardware and can be interpreted as non-biologically-inspired spiking neurons. We show how fully binarized, event-driven recurrent networks of monostable multivibrators can be trained to solve classification tasks. Our training algorithm resolves temporally overlapping input events. We demonstrate our approach on the MNIST handwritten digits, Google Soli radar gestures, IBM DVS128 gestures, and Yin-Yang classification tasks. The estimated energy consumption for the MNIST task, excluding the final linear readout layer, is 855 pJ per inference at a test accuracy of 98.61% for a reconfigurable network of 500 units, when mapped to the TSMC HPC+ 28 nm process. (Neural Networks 194, Article 108092)
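A single timer unit of the kind described above can be sketched in a few lines: any incoming event (the OR of all presynaptic events, so no additions anywhere) starts a countdown, and the unit emits an output event when the countdown expires. The retriggerable semantics chosen here are an assumption for illustration; the paper's exact timer behavior may differ.

```python
def run_timer_neuron(input_events, duration, horizon):
    """Monostable multivibrator neuron: any input event (OR of all synapses)
    (re)starts a countdown of `duration` steps; an output event is emitted
    when the countdown reaches zero. No synaptic addition is performed."""
    timer = 0
    out = []
    for t in range(horizon):
        if t in input_events:        # OR of all presynaptic events at step t
            timer = duration         # (re)start the monostable timer
        elif timer > 0:
            timer -= 1
            if timer == 0:
                out.append(t)        # timer expired: emit an output event
    return out

# Input events at t=2 and t=10; each triggers a spike `duration` steps later.
spikes = run_timer_neuron(input_events={2, 10}, duration=3, horizon=20)
```

In digital hardware this is just a counter per unit plus a wide OR over the input event lines, which is what removes the synaptic-addition bottleneck.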
Neural Networks · Pub Date: 2025-09-15 · DOI: 10.1016/j.neunet.2025.108111
Giulia Salzano, Paolo Paradisi, Enrico Cataldo
"A biologically plausible model of astrocyte-neuron networks in random and hub-driven connectivity"
Recent research on brain neural networks highlights the involvement of glial cells, in particular astrocytes, in synaptic modulation, memory formation, and neural synchronization, a role that has often been overlooked. Theoretical models have therefore begun incorporating astrocytes to better understand their functional impact. Additionally, the structural organization of neuron-neuron, astrocyte-neuron, and astrocyte-astrocyte connections plays a crucial role in network dynamics.
Starting from a recently published astrocyte-neuron network model with random neuron-neuron connectivity, we provide an extensive evaluation of this model, focusing on astrocytic dynamics, neuron-astrocyte connectivity, and the spatial distribution of inhibitory neurons. We propose refinements aimed at improving the biological plausibility of these characteristics. To assess the interplay between astrocytes and network topology, we compare four configurations: neural networks with and without astrocytes, each under random and hub-driven connectivity. Simulations are conducted with the Brian2 simulator, providing insight into how astrocytes and structural heterogeneity jointly influence neural dynamics. Our findings contribute to a deeper understanding of neuron-glia interactions and of the impact of network topology on astrocyte-neuron network dynamics. In particular, while we observe the expected decrease in neural firing activity due to astrocyte calcium dynamics, we also find that the hub-driven topology triggers a much higher firing rate than the random topology, even though the latter has a much larger number of neuron-neuron connections. (Neural Networks 194, Article 108111)
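The two connectivity regimes compared above can be sketched structurally. Here random connectivity is modeled as an Erdős-Rényi graph and hub-driven connectivity as preferential attachment; the latter is an assumption standing in for whatever hub-generation rule the paper uses, and neither sketch includes any neuron or astrocyte dynamics.

```python
import random

def random_graph(n, p, rng):
    """Erdos-Renyi style random connectivity: each pair connected with prob. p."""
    return {(i, j) for i in range(n) for j in range(i + 1, n) if rng.random() < p}

def hub_graph(n, m, rng):
    """Preferential attachment: new nodes favor high-degree targets, so a few
    hub nodes accumulate many connections (assumed stand-in for 'hub-driven')."""
    edges, repeated = set(), []
    targets = list(range(m))
    for v in range(m, n):
        for t in set(targets):
            edges.add((min(v, t), max(v, t)))
            repeated += [v, t]        # degree-weighted list for the next draws
        targets = [rng.choice(repeated) for _ in range(m)]
    return edges

def degrees(n, edges):
    d = [0] * n
    for i, j in edges:
        d[i] += 1
        d[j] += 1
    return d

rng = random.Random(1)
er = degrees(200, random_graph(200, 0.05, rng))  # homogeneous degrees
ba = degrees(200, hub_graph(200, 5, rng))        # heavy-tailed: a few hubs
```

The degree distributions differ in shape (homogeneous versus heavy-tailed), which is the structural heterogeneity the paper links to the firing-rate difference.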
Neural Networks · Pub Date: 2025-09-15 · DOI: 10.1016/j.neunet.2025.108110
Qiulei Han, Hongbiao Ye, Miaoshui Bai, Lili Wang, Yan Sun, Ze Song, Jian Zhao, Lijuan Shi, Zhejun Kuang
"MAN-GNN: An interpretable biomarker architecture for neurodevelopmental disorders"
Neurodevelopmental disorders exhibit highly similar behavioral characteristics in clinical assessments, which rely heavily on subjective behavioral reports; as a result, the neurobiological mechanisms behind inter-patient heterogeneity and the symptom overlap between disorders remain insufficiently understood. To address this issue, this study proposes a graph neural network framework that integrates neuroimaging data and targets three key problems. First, it enhances the nonlinear features of brain neural activity by introducing the Rössler system from neurodynamics, transforming raw static neural signals into simulated signals with nonlinear, temporal, and dynamic features that more accurately reflect brain neural activity. Second, it improves feature discrimination by integrating the spatial adjacency of local brain regions with the topological structure of the global brain network to highlight key features. Third, it improves noise resistance and generalization: adaptive controllers and a cross-site adversarial learning mechanism effectively reduce the interference of heterogeneous noise. The study is validated experimentally on data from neurodevelopmental disorders such as ADHD and ASD. The results indicate that the framework not only offers advantages in classification accuracy but also possesses good interpretability, making it a promising tool for imaging biomarker research and auxiliary diagnosis. (Neural Networks 194, Article 108110)
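The Rössler system mentioned in the first step is a standard three-variable chaotic ODE; a minimal Euler integration shows how a static scalar can seed a nonlinear, temporally varying signal. How the paper actually couples raw neural signals to the system's state is not specified in the abstract, so treating the static value as the initial x-coordinate here is an assumption.

```python
def rossler_trajectory(x0, steps=2000, dt=0.01, a=0.2, b=0.2, c=5.7):
    """Forward-Euler integration of the Rossler system:
       dx/dt = -y - z,  dy/dt = x + a*y,  dz/dt = b + z*(x - c)."""
    x, y, z = x0, 0.0, 0.0
    traj = []
    for _ in range(steps):
        dx = -y - z
        dy = x + a * y
        dz = b + z * (x - c)
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        traj.append(x)
    return traj

# A static value (e.g., one brain region's raw signal) seeds the initial
# condition, yielding a simulated signal with nonlinear temporal dynamics.
signal = rossler_trajectory(x0=1.0)
```

With the classic parameters (a = b = 0.2, c = 5.7) the trajectory is bounded but non-periodic, which is the kind of temporal richness the augmentation step exploits.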
Neural Networks · Pub Date: 2025-09-14 · DOI: 10.1016/j.neunet.2025.108099
Yaping Bai, Jinghua Li, Dehui Kong, Suqiao Yang, Baocai Yin
"EKDSC: Long-tailed recognition based on expert knowledge distillation for specific categories"
In long-tailed visual recognition, the imbalance in data distribution leads to a significant performance gap between head and tail classes. Improving tail-class performance and alleviating the decline in head-class performance are two critical challenges. Although many methods address the former, most fall short on the latter. Introducing additional knowledge is a promising way to tackle this problem; the core questions are how to obtain useful knowledge and how to transfer it to the target model. This paper proposes a novel method called Expert Knowledge Distillation for Specific Categories (EKDSC). First, we train a teacher model in which each expert concentrates on its specialized field and is less affected by interference from the others. The teacher model, comprising three categories of experts for head, mid, and tail classes, then distills their specialized knowledge into the student model. Experimental results demonstrate that EKDSC effectively improves the accuracy of tail classes while mitigating the usual drop in head-class performance. Our method exceeds the current state of the art (SOTA) by 1-5% on benchmark datasets including the small-scale CIFAR-10 LT and CIFAR-100 LT, and it also performs strongly on large-scale datasets such as ImageNet-LT, iNaturalist 2018, and Places-LT. (Neural Networks 194, Article 108099)
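The expert-to-student distillation idea can be sketched as a KL loss where only the expert responsible for the label's group supplies the soft targets. The class grouping, logits, and temperature below are hypothetical; the paper's actual loss formulation may combine terms differently.

```python
import math

def softmax(logits, T=1.0):
    exps = [math.exp(l / T) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical 6-class long-tailed problem split into three expert groups.
groups = {"head": [0, 1], "mid": [2, 3], "tail": [4, 5]}

def ekd_loss(student_logits, teacher_logits, label, T=2.0):
    """Distill only from the expert responsible for the label's group, so
    tail-class knowledge is not drowned out by the head-class teacher."""
    group = next(g for g, cls in groups.items() if label in cls)
    p = softmax(teacher_logits[group], T)   # expert's soft targets
    q = softmax(student_logits, T)          # student's predictive distribution
    return kl(p, q)

teachers = {
    "head": [4.0, 3.0, 0.1, 0.1, 0.1, 0.1],
    "mid":  [0.1, 0.1, 4.0, 3.0, 0.1, 0.1],
    "tail": [0.1, 0.1, 0.1, 0.1, 4.0, 3.0],
}
student = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0]    # untrained: uniform logits
loss_tail = ekd_loss(student, teachers, label=4)
```

Routing each sample to one specialized expert is what lets the tail expert's knowledge be transferred without being averaged against head-class statistics.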
Neural Networks · Pub Date: 2025-09-13 · DOI: 10.1016/j.neunet.2025.108105
Jiyao Li, Mingze Ni, Yongshun Gong, Wei Liu
"Deceiving question-answering models: A hybrid word-level adversarial approach"
Deep learning underpins most currently advanced natural language processing (NLP) tasks, such as text classification, neural machine translation (NMT), abstractive summarization, and question answering (QA). However, the robustness of these models, particularly QA models, against adversarial attacks is a critical concern that remains insufficiently explored. This paper introduces QA-Attack (Question Answering Attack), a novel word-level adversarial strategy that fools QA models. Our attention-based attack exploits a customized attention mechanism and a deletion ranking strategy to identify and target specific words within contextual passages. It creates deceptive inputs by carefully choosing and substituting synonyms, preserving grammatical integrity while misleading the model into producing incorrect responses. Our approach is versatile across question types and is particularly effective on long textual inputs. Extensive experiments on multiple benchmark datasets demonstrate that QA-Attack successfully deceives baseline QA models and surpasses existing adversarial techniques in success rate, semantic change, BLEU score, fluency, and grammar error rate. (Neural Networks 194, Article 108105)
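The deletion-ranking half of the attack can be sketched as follows: rank words by how much deleting each one changes the model's confidence, then greedily substitute synonyms for the top-ranked words. The toy scoring function and synonym table are invented stand-ins for a real QA model and lexicon, and the attention-based half of the ranking is omitted.

```python
def importance_by_deletion(words, score_fn):
    """Rank word positions by how much deleting each word changes the
    model's confidence (the deletion-ranking part of the attack)."""
    base = score_fn(words)
    return sorted(range(len(words)),
                  key=lambda i: abs(base - score_fn(words[:i] + words[i + 1:])),
                  reverse=True)

def attack(words, synonyms, score_fn, budget=2):
    """Greedily replace the most important words with the synonym that
    most lowers the model's confidence in its original answer."""
    adv = list(words)
    for i in importance_by_deletion(adv, score_fn)[:budget]:
        candidates = synonyms.get(adv[i], [])
        if candidates:
            adv[i] = min(candidates,
                         key=lambda w: score_fn(adv[:i] + [w] + adv[i + 1:]))
    return adv

# Toy stand-in for a QA model's confidence in its original answer.
def toy_score(words):
    return 0.9 if "capital" in words else 0.4

context = "Paris is the capital of France".split()
synonyms = {"capital": ["center", "hub"]}
adv = attack(context, synonyms, toy_score)
```

Because only synonyms are substituted, the perturbed passage stays grammatical while the model's confidence in the correct answer drops, which is the attack's core mechanism.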
Neural Networks · Pub Date: 2025-09-13 · DOI: 10.1016/j.neunet.2025.108116
Xiang Li, Long Lan, Husam Lahza, Shaowu Yang, Shuihua Wang, Yong Liang, Hudan Pan, Wenjing Yang, Hengzhu Liu, Yudong Zhang
"LCA-Med: A lightweight cross-modal adaptive feature processing module for detecting imbalanced medical image distribution"
Discrepancies in data distribution across datasets are a major obstacle to improving the accuracy of cross-domain adaptive detection in medical images. To address this challenge, we propose a novel lightweight cross-modal adaptive detection module named LCA-Med (LCaM). The module has a lightweight structure and a minimal parameter count, facilitating its integration into the front end of a diverse array of foundational and downstream networks. It serves as a feature preprocessor, extracting pathology-relevant information from images produced by varied medical imaging techniques (the image modality), guided by input prompts (the text modality). We also propose a novel cross-modal medical image adaptive detection method, LCA-Med CNX (LCaM-CNX), and a novel cross-domain adaptive detection training paradigm that incorporates generated dataset groups, an attention module, and a meta-heuristic algorithm. Experiments on six medical image datasets against ten state-of-the-art methods demonstrate that LCaM-CNX, trained with the proposed paradigm, achieves the best performance on five datasets and competitive performance on the remaining one. Notably, the advantage of our method over the state of the art grows as the data distribution becomes more imbalanced. (Neural Networks 194, Article 108116)
Neural Networks · Pub Date: 2025-09-13 · DOI: 10.1016/j.neunet.2025.108118
Bo Liu, Yudong Zhang, Shuihua Wang, Siyue Li, Jin Hong
"DGSSA: Domain generalization with structural and stylistic augmentation for retinal vessel segmentation"
Retinal vascular morphology plays a crucial role in diagnosing diseases such as diabetes, glaucoma, and hypertension, making accurate segmentation of retinal vessels essential for early intervention. Traditional segmentation methods assume that training and testing data share similar distributions, which can lead to poor performance on unseen domains due to domain shifts caused by variations in imaging devices and patient demographics. This paper presents DGSSA, a novel approach to retinal vessel segmentation that enhances model generalization by combining structural and stylistic augmentation strategies. We use a space colonization algorithm to generate diverse vascular-like structures that closely mimic actual retinal vessels, and then render them into pseudo-retinal images with an improved Pix2Pix model, allowing the segmentation model to learn a broader range of structural distributions. Additionally, we apply PixMix for random photometric augmentation and introduce uncertainty perturbations, enriching the stylistic diversity of fundus images and further improving robustness and generalization across imaging conditions. Our framework, which employs a DeepLabv3+ model with a MobileNetV2 backbone as its segmentation network, has been rigorously evaluated on four challenging datasets (DRIVE, CHASEDB1, HRF, and STARE), achieving Dice Similarity Coefficients (DSC) of 78.45%, 78.62%, 72.66%, and 82.17%, respectively, for an average DSC of 77.98%. These results surpass existing approaches, validating the method's effectiveness and highlighting its potential for clinical application in automated retinal vessel analysis. (Neural Networks 194, Article 108118)
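One growth step of the space colonization algorithm used for the structural augmentation can be sketched in 2D: scattered attraction points each pull their nearest branch node, and nodes grow toward the average direction of the points that claimed them. The influence radius, step size, and seed layout here are illustrative; the paper's vessel generator and its Pix2Pix rendering stage are not reproduced.

```python
import math

def space_colonization_step(nodes, attractors, influence=3.0, step=0.5):
    """One growth step of space colonization: each attraction point claims
    its nearest branch node; each claimed node sprouts a new node in the
    average direction of the points that claimed it."""
    pulls = {}
    for ax, ay in attractors:
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], (ax, ay)))
        if math.dist(nodes[i], (ax, ay)) <= influence:
            pulls.setdefault(i, []).append((ax, ay))
    new_nodes = []
    for i, pts in pulls.items():
        nx, ny = nodes[i]
        dx = sum(px - nx for px, py in pts)
        dy = sum(py - ny for px, py in pts)
        norm = math.hypot(dx, dy) or 1.0        # avoid division by zero
        new_nodes.append((nx + step * dx / norm, ny + step * dy / norm))
    return nodes + new_nodes

seed = [(0.0, 0.0)]                  # root of the vascular tree
attractors = [(1.0, 1.0), (2.0, 0.0)]
grown = space_colonization_step(seed, attractors)
```

Iterating this step while removing attraction points that get reached produces the branching, vessel-like trees that the augmentation pipeline then renders into pseudo-retinal images.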