Neural Networks | Pub Date: 2025-05-27 | DOI: 10.1016/j.neunet.2025.107486
Amanda Camacho Novaes de Oliveira, Daniel Ratton Figueiredo
{"title":"Optimizing connectivity through network gradients for Restricted Boltzmann Machines","authors":"Amanda Camacho Novaes de Oliveira, Daniel Ratton Figueiredo","doi":"10.1016/j.neunet.2025.107486","DOIUrl":"10.1016/j.neunet.2025.107486","url":null,"abstract":"<div><div>Leveraging sparse networks to connect successive layers in deep neural networks has recently been shown to provide benefits to large-scale state-of-the-art models. However, network connectivity also plays a significant role in the learning performance of shallow networks, such as the classic Restricted Boltzmann Machine (RBM). Efficiently finding sparse connectivity patterns that improve the learning performance of shallow networks is a fundamental problem. While recent principled approaches explicitly include network connections as model parameters that must be optimized, they often rely on explicit penalization or network sparsity as a hyperparameter. This work presents the Network Connectivity Gradients (NCG), an optimization method to find optimal connectivity patterns for RBMs. NCG leverages the idea of network gradients: given a specific connection pattern, it determines the gradient of every possible connection and uses the gradient to drive a continuous connection strength parameter that in turn is used to determine the connection pattern. Thus, learning RBM parameters and learning network connections is truly jointly performed, albeit with different learning rates, and without changes to the model’s classic energy-based objective function. The proposed method is applied to the MNIST and other data sets showing that better RBM models are found for the benchmark tasks of sample generation and classification. Results also show that NCG is robust to network initialization and is capable of both adding and removing network connections while learning.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"190 ","pages":"Article 107486"},"PeriodicalIF":6.0,"publicationDate":"2025-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144184485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neural Networks | Pub Date: 2025-05-27 | DOI: 10.1016/j.neunet.2025.107612
Ming Gu, Gaoming Yang, Zhuonan Zheng, Meihan Liu, Haishuai Wang, Jiawei Chen, Sheng Zhou, Jiajun Bu
{"title":"Frequency Self-Adaptation Graph Neural Network for Unsupervised Graph Anomaly Detection","authors":"Ming Gu , Gaoming Yang , Zhuonan Zheng , Meihan Liu , Haishuai Wang , Jiawei Chen , Sheng Zhou , Jiajun Bu","doi":"10.1016/j.neunet.2025.107612","DOIUrl":"10.1016/j.neunet.2025.107612","url":null,"abstract":"<div><div>Unsupervised Graph Anomaly Detection (UGAD) seeks to identify abnormal patterns in graphs without relying on labeled data. Among existing UGAD methods, Graph Neural Networks (GNNs) have played a critical role in learning effective representation for detection by filtering low-frequency graph signals. However, the presence of anomalies can shift the frequency band of graph signals toward higher frequencies, thereby violating the fundamental assumptions underlying GNNs and anomaly detection frameworks. To address this challenge, the design of novel graph filters has garnered significant attention, with recent approaches leveraging anomaly labels in a semi-supervised manner. Nonetheless, the absence of anomaly labels in real-world scenarios has rendered these methods impractical, leaving the question of how to design effective filters in an unsupervised manner largely unexplored. To bridge this gap, we propose a novel <strong>F</strong>requency Self-<strong>A</strong>daptation <strong>G</strong>raph Neural Network for Unsupervised Graph <strong>A</strong>nomaly <strong>D</strong>etection (<strong>FAGAD</strong>). Specifically, FAGAD adaptively fuses signals across multiple frequency bands using full-pass signals as a reference. It is optimized via a self-supervised learning approach, enabling the generation of effective representations for unsupervised graph anomaly detection. Experimental results demonstrate that FAGAD achieves state-of-the-art performance on both artificially injected datasets and real-world datasets. The code and datasets are publicly available at <span><span>https://github.com/eaglelab-zju/FAGAD</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"190 ","pages":"Article 107612"},"PeriodicalIF":6.0,"publicationDate":"2025-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144178709","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neural Networks | Pub Date: 2025-05-27 | DOI: 10.1016/j.neunet.2025.107608
Dunwei Tu, Huiyu Yi, Tieyi Zhang, Ruotong Li, Furao Shen, Jian Zhao
{"title":"Embedding Space Allocation with Angle-Norm Joint Classifiers for few-shot class-incremental learning","authors":"Dunwei Tu , Huiyu Yi , Tieyi Zhang , Ruotong Li , Furao Shen , Jian Zhao","doi":"10.1016/j.neunet.2025.107608","DOIUrl":"10.1016/j.neunet.2025.107608","url":null,"abstract":"<div><div>Few-shot class-incremental learning (FSCIL) aims to continually learn new classes from only a few samples without forgetting previous ones, requiring intelligent agents to adapt to dynamic environments. FSCIL combines the characteristics and challenges of class-incremental learning and few-shot learning: (i) Current classes occupy the entire feature space, which is detrimental to learning new classes. (ii) The small number of samples in incremental rounds is insufficient for fully training. In existing mainstream virtual class methods, to address the challenge (i), they attempt to use virtual classes as placeholders. However, new classes may not necessarily align with the virtual classes. For challenge (ii), they replace trainable fully connected layers with Nearest Class Mean (NCM) classifiers based on cosine similarity, but NCM classifiers do not account for sample imbalance issues. To address these issues in previous methods, we propose the class-center guided embedding Space Allocation with Angle-Norm joint classifiers (SAAN) learning framework, which provides balanced space for all classes and leverages norm differences caused by sample imbalance to enhance classification criteria. Specifically, for challenge (i), SAAN divides the feature space into multiple subspaces and allocates a dedicated subspace for each session by guiding samples with the pre-set category centers. For challenge (ii), SAAN establishes a norm distribution for each class and generates angle-norm joint logits. Experiments demonstrate that SAAN can achieve state-of-the-art performance and it can be directly embedded into other SOTA methods as a plug-in, further enhancing their performance.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"190 ","pages":"Article 107608"},"PeriodicalIF":6.0,"publicationDate":"2025-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144178710","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neural Networks | Pub Date: 2025-05-27 | DOI: 10.1016/j.neunet.2025.107605
Zihan Weng, Yang Xiao, Peiyang Li, Chanlin Yi, Pouya Bashivan, Hailin Ma, Guang Yao, Yuan Lin, Fali Li, Dezhong Yao, Jingming Hou, Yangsong Zhang, Peng Xu
{"title":"Real-time fine finger motion decoding for transradial amputees with surface electromyography","authors":"Zihan Weng , Yang Xiao , Peiyang Li , Chanlin Yi , Pouya Bashivan , Hailin Ma , Guang Yao , Yuan Lin , Fali Li , Dezhong Yao , Jingming Hou , Yangsong Zhang , Peng Xu","doi":"10.1016/j.neunet.2025.107605","DOIUrl":"10.1016/j.neunet.2025.107605","url":null,"abstract":"<div><div>Advancements in human-machine interfaces (HMIs) are pivotal for enhancing rehabilitation technologies and improving the quality of life for individuals with limb loss. This paper presents a novel CNN-Transformer model for decoding continuous fine finger motions from surface electromyography (sEMG) signals by integrating the convolutional neural network (CNN) and Transformer architecture, focusing on applications for transradial amputees. This model leverages the strengths of both convolutional and Transformer architectures to effectively capture both local muscle activation patterns and global temporal dependencies within sEMG signals.</div><div>To achieve high-fidelity sEMG acquisition, we designed a flexible and stretchable epidermal array electrode sleeve (EAES) that conforms to the residual limb, ensuring comfortable long-term wear and robust signal capture, critical for amputees. Moreover, we presented a computer vision (CV) based multimodal data acquisition protocol that synchronizes sEMG recordings with video captures of finger movements, enabling the creation of a large, labeled dataset to train and evaluate the proposed model.</div><div>Given the challenges in acquiring reliable labeled data for transradial amputees, we adopted transfer learning and few-shot calibration to achieve fine finger motion decoding by leveraging datasets from non-amputated subjects. Extensive experimental results demonstrate the superior performance of the proposed model in various scenarios, including intra-session, inter-session, and inter-subject evaluations. Importantly, the system also exhibited promising zero-shot and few-shot learning capabilities for amputees, allowing for personalized calibration with minimal training data. The combined approach holds significant potential for advancing real-time, intuitive control of prostheses and other assistive technologies.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"190 ","pages":"Article 107605"},"PeriodicalIF":6.0,"publicationDate":"2025-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144185014","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neural Networks | Pub Date: 2025-05-27 | DOI: 10.1016/j.neunet.2025.107627
Xuechun Hu, Yu Xia, Zsófia Lendek, Jinde Cao, Radu-Emil Precup
{"title":"A novel dynamic prescribed performance fuzzy-neural backstepping control for PMSM under step load","authors":"Xuechun Hu , Yu Xia , Zsófia Lendek , Jinde Cao , Radu-Emil Precup","doi":"10.1016/j.neunet.2025.107627","DOIUrl":"10.1016/j.neunet.2025.107627","url":null,"abstract":"<div><div>In order to meet the performance requirements of permanent magnet synchronous motor (PMSM) systems with time-varying model parameters and input constraints under step load, this paper proposes a dynamic prescribed performance fuzzy-neural backstepping control approach. Firstly, a novel finite-time asymmetric dynamic prescribed performance function (FADPPF) is proposed to tackle the issues of exceeding predefined error, control singularity, and system instability that arise in the traditional prescribed performance function under load changes. To address model accuracy degradation and control quality deterioration caused by nonlinear time-varying parameters and input constraints in the PMSM system, a backstepping controller is designed by combining the speed function (SF), fuzzy neural network (FNN), and the proposed FADPPF. The FNN approximates nonlinear uncertain functions in the system model; the SF, as an error amplification mechanism, works together with FADPPF to ensure the transient and steady-state performance of the system. The stability of the devised control strategy is proved using Lyapunov analysis. Finally, simulation results demonstrate the dynamic self-adjusting ability and effectiveness of FADPPF under step load. In addition, the feasibility and superiority of the proposed control scheme are validated.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"190 ","pages":"Article 107627"},"PeriodicalIF":6.0,"publicationDate":"2025-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144169815","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Spurious reconstruction from brain activity","authors":"Ken Shirakawa , Yoshihiro Nagano , Misato Tanaka , Shuntaro C. Aoki , Yusuke Muraki , Kei Majima , Yukiyasu Kamitani","doi":"10.1016/j.neunet.2025.107515","DOIUrl":"10.1016/j.neunet.2025.107515","url":null,"abstract":"<div><div>Advances in brain decoding, particularly in visual image reconstruction, have sparked discussions about the societal implications and ethical considerations of neurotechnology. As reconstruction methods aim to recover visual experiences from brain activity and achieve prediction beyond training samples (zero-shot prediction), it is crucial to assess their capabilities and limitations to inform public expectations and regulations. Our case study of recent text-guided reconstruction methods, which leverage a large-scale dataset (Natural Scenes Dataset, NSD) and text-to-image diffusion models, reveals critical limitations in their generalizability, demonstrated by poor reconstructions on a different dataset. UMAP visualization of the text features from NSD images shows limited diversity with overlapping semantic and visual clusters between training and test sets. We identify that clustered training samples can lead to “output dimension collapse,” restricting predictable output feature dimensions. While diverse training data improves generalization over the entire feature space without requiring exponential scaling, text features alone prove insufficient for mapping to the visual space. Our findings suggest that the apparent realism in current text-guided reconstructions stems from a combination of classification into trained categories and inauthentic image generation (hallucination) through diffusion models, rather than genuine visual reconstruction. We argue that careful selection of datasets and target features, coupled with rigorous evaluation methods, is essential for achieving authentic visual image reconstruction. These insights underscore the importance of grounding interdisciplinary discussions in a thorough understanding of the technology’s current capabilities and limitations to ensure responsible development.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"190 ","pages":"Article 107515"},"PeriodicalIF":6.0,"publicationDate":"2025-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144253455","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neural Networks | Pub Date: 2025-05-26 | DOI: 10.1016/j.neunet.2025.107604
Shouwen Wang, Qian Wan, Zihan Zhang, Zhigang Zeng
{"title":"Prompt-guided consistency learning for multi-label classification with incomplete labels","authors":"Shouwen Wang , Qian Wan , Zihan Zhang , Zhigang Zeng","doi":"10.1016/j.neunet.2025.107604","DOIUrl":"10.1016/j.neunet.2025.107604","url":null,"abstract":"<div><div>Addressing insufficient supervision and improving model generalization are essential for multi-label classification with incomplete annotations, <em>i.e.</em>, partial and single positive labels. Recent studies incorporate pseudo-labels to provide additional supervision and enhance model generalization. However, the noise in pseudo-labels generated by the model tends to accumulate, resulting in confirmation bias during training. Self-correction methods, commonly used approaches for mitigating confirmation bias, rely on model predictions but remain susceptible to confirmation bias caused by visual confusion, including both visual ambiguity and similarity. To reduce visual confusion, we propose a prompt-guided consistency learning (PGCL) framework designed for two incomplete labeling settings. Specifically, we introduce an intra-category supervised contrastive loss, which imposes consistency constraints on reliable positive class samples in the feature space of each category, rather than across the feature space of all categories, as in traditional inter-category supervised contrastive loss. Building on this, the distinction between true positive and visual confusion samples for each category is enhanced through label-level contrasting of the same category. Additionally, we develop a class-specific semantic decoupling module that leverages CLIP’s strong vision-language alignment capability, since the proposed contrastive loss requires high-quality label-level representations as contrastive samples. Extensive experimental results on multiple datasets demonstrate that our method can effectively address the problems of two incomplete labeling settings and achieve state-of-the-art performance.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"190 ","pages":"Article 107604"},"PeriodicalIF":6.0,"publicationDate":"2025-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144211831","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neural Networks | Pub Date: 2025-05-26 | DOI: 10.1016/j.neunet.2025.107681
Danfeng Zhao, Yanhao Chen, Wei Song, Qi He
{"title":"Cross-view self-supervised heterogeneous graph representation learning","authors":"Danfeng Zhao, Yanhao Chen, Wei Song, Qi He","doi":"10.1016/j.neunet.2025.107681","DOIUrl":"10.1016/j.neunet.2025.107681","url":null,"abstract":"<div><div>Heterogeneous graph neural networks (HGNNs) often face challenges in efficiently integrating information from multiple views, which hinders their ability to fully leverage complex data structures. To overcome this problem, we present an improved graph-level cross-attention mechanism specifically designed to enhance multi-view integration and improve the model's expressiveness in heterogeneous networks. By incorporating random walks, the Katz index, and Transformers, the model captures higher-order semantic relationships between nodes within the meta-path view. Node context information is extracted by decomposing the network and applying the attention mechanism within the network schema view. The improved graph-level cross-attention in the cross-view context adaptively fuses features from both views. Furthermore, a contrastive loss function is employed to select positive samples based on the local connection strength and global centrality of nodes, enhancing the model's robustness. The suggested self-supervised model performs exceptionally well in node classification and clustering tasks, according to experimental data, demonstrating the effectiveness of our method.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"190 ","pages":"Article 107681"},"PeriodicalIF":6.0,"publicationDate":"2025-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144185015","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neural Networks | Pub Date: 2025-05-26 | DOI: 10.1016/j.neunet.2025.107684
Chunling Fan, Yuebin Song, Xiaoqian Mao
{"title":"A classification method of motor imagery based on brain functional networks by fusing PLV and ECSP","authors":"Chunling Fan, Yuebin Song, Xiaoqian Mao","doi":"10.1016/j.neunet.2025.107684","DOIUrl":"10.1016/j.neunet.2025.107684","url":null,"abstract":"<div><div>In order to enhance the decoding ability of brain states and evaluate the functional connection changes of relevant nodes in brain regions during motor imagery (MI), this paper proposes a brain functional network construction method which fuses edge features and node features. And we use deep learning methods to realize MI classification of left and right hand grasping tasks. Firstly, we use phase locking value (PLV) to extract edge features and input a weighted PLV to enhanced common space pattern (ECSP) to extract node features. Then, we fuse edge features and node features to construct a novel brain functional network. Finally, we construct an attention and multi-scale feature convolutional neural network (AMSF-CNN) to validate our method. The performance indicators of the brain functional network on the SHU_Dataset in the corresponding brain region will increase and be higher than those in the contralateral brain region when imagining one hand grasping. The average accuracy of our method reaches 79.65 %, which has a 25.85 % improvement compared to the accuracy provided by SHU_Dataset. By comparing with other methods on SHU_Dataset and BCI IV 2a Dataset, the average accuracies achieved by our method outperform other references. Therefore, our method provides theoretical support for exploring the working mechanism of the human brain during MI.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"190 ","pages":"Article 107684"},"PeriodicalIF":6.0,"publicationDate":"2025-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144185016","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neural Networks | Pub Date: 2025-05-25 | DOI: 10.1016/j.neunet.2025.107603
Minghui Liao, Guojia Wan, Wenbin Hu, Bo Du
{"title":"Building connectome analysis tools with representation learning on neuronal skeleton and circuit topology","authors":"Minghui Liao, Guojia Wan, Wenbin Hu, Bo Du","doi":"10.1016/j.neunet.2025.107603","DOIUrl":"10.1016/j.neunet.2025.107603","url":null,"abstract":"<div><div>Analyzing connectome plays a significant role in the investigation of neurological diseases and brain research. However, the efficiency of utilizing anatomical, physiological, or molecular characteristics of neurons is relatively low and costly. With the advancements in volume electron microscopy(VEM) and analysis techniques for brain tissue, we are able to obtain whole-brain connectome consisting neuronal high-resolution morphology and connectivity information. Nevertheless, few tools are built based on such data for automated connectome analysis. In this paper, we introduce a connectome analysis tool based on a representation learning model termed NeuNet. NeuNet consists of three key components: Connectome Encoder, Skeleton Encoder, and Readout Layer, which together integrate information pertaining to neuronal connectivity and morphology. Furthermore, we reprocess and release a brain neuron reconstruction dataset from a <em>Drosophila</em> Nerve Cord VEM data. We apply the proposed tool to tasks related to connectome analysis, including neuron classification, brain circuit layout, neuron retrieval and neuron morphology description, and the experiments demonstrate the effectiveness of our tool. We will soon release our code and data on <span><span>https://github.com/WHUminghui/ConnectomeAnalysisTool</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"190 ","pages":"Article 107603"},"PeriodicalIF":6.0,"publicationDate":"2025-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144204856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}