Neural Networks. Pub Date: 2025-06-11. DOI: 10.1016/j.neunet.2025.107682
Title: Incomplete graph learning: A comprehensive survey
Authors: Riting Xia, Huibo Liu, Anchen Li, Xueyan Liu, Yan Zhang, Chunxu Zhang, Bo Yang
Abstract: Graph learning is a prevalent field that operates on ubiquitous graph data. Effective graph learning methods can extract valuable information from graphs. However, these methods are not robust to missing attributes in graphs, which leads to sub-optimal outcomes. This has driven the emergence of incomplete graph learning, which aims to process and learn from incomplete graphs to achieve more accurate and representative results. In this paper, we conduct a comprehensive review of the literature on incomplete graph learning. First, we categorize incomplete graphs and provide precise definitions of relevant concepts, terminologies, and techniques, establishing a solid foundation for readers. We then classify incomplete graph learning methods according to the type of incompleteness: (1) attribute-incomplete graph learning methods, (2) attribute-missing graph learning methods, and (3) hybrid-absent graph learning methods. By systematically classifying and summarizing these methods, we highlight the commonalities and differences among existing approaches, aiding readers in method selection and laying the groundwork for further advances. In addition, we summarize the datasets, incompleteness-processing modes, evaluation metrics, and application domains used by current methods. Finally, we discuss open challenges and propose future directions for incomplete graph learning, with the aim of stimulating further innovation in this crucial field. To our knowledge, this is the first review dedicated to incomplete graph learning, and it aims to offer valuable insights for researchers in related fields.
Volume 190, Article 107682.
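The attribute-incomplete setting surveyed above can be made concrete with a minimal baseline: iteratively replacing each missing node attribute with the mean of its neighbors' current values (feature propagation). This is an illustrative sketch of the simplest imputation family such surveys cover, not a method proposed in the paper; the function name and interface are our own.

```python
import numpy as np

def propagate_missing_features(adj, X, mask, n_iters=20):
    """Iteratively fill missing node attributes with the mean of
    neighboring attributes (a simple feature-propagation baseline).

    adj  : (n, n) symmetric 0/1 adjacency matrix
    X    : (n, d) attribute matrix; missing entries may hold any value
    mask : (n, d) boolean, True where the attribute is observed
    """
    X_hat = np.where(mask, X, 0.0)
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    for _ in range(n_iters):
        prop = adj @ X_hat / deg          # neighborhood mean
        X_hat = np.where(mask, X, prop)   # keep observed values fixed
    return X_hat
```

On a path graph 0-1-2 with the middle node's attribute missing, the imputed value converges to the mean of its two observed neighbors.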
Neural Networks. Pub Date: 2025-06-11. DOI: 10.1016/j.neunet.2025.107687
Title: Sharpening semantic gradient in a planarized sentence representation
Authors: Caiwei Yang, Yanping Chen, Shuai Yu, Bo Dong, Jiwei Qin
Abstract: Mapping a sentence into a two-dimensional representation has the advantage of unfolding nested semantic structures in the sentence and encoding the interactions between tokens. In the planarized sentence representation, neighboring elements denote overlapping linguistic units of the sentence. An important phenomenon is that the semantic information of a true linguistic unit may penetrate neighboring elements, which blurs the semantic edge of the unit and disturbs the planarized representation. Sharpening the semantic gradient therefore helps aggregate semantic information from neighborhoods while suppressing noise in neighboring elements. This paper reveals the mechanism of sharpening the semantic gradient in the planarized sentence representation. Our method is evaluated on six datasets, and the results show impressive improvements on three information extraction tasks. This success indicates that representing and processing sentences in two dimensions has great potential for decoding sentential semantic structure and supporting sentence-level information extraction. Code: https://github.com/caiwyang/Semantic_Gradient
Volume 190, Article 107687.
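The planarized representation described above can be sketched as a table whose cell (i, j) encodes the token span from i to j. The span encoder below (a mean over the span's embeddings, computed via prefix sums) is a placeholder assumption of ours; the paper's actual encoder is more elaborate.

```python
import numpy as np

def planarize(token_emb):
    """Build a 2D sentence representation: cell (i, j) encodes the span
    from token i to token j. Here the span encoding is simply the mean
    of the span's token embeddings, an illustrative choice rather than
    the paper's encoder."""
    n, d = token_emb.shape
    plane = np.zeros((n, n, d))
    # prefix sums let every span mean be computed in O(1)
    csum = np.vstack([np.zeros((1, d)), np.cumsum(token_emb, axis=0)])
    for i in range(n):
        for j in range(i, n):
            plane[i, j] = (csum[j + 1] - csum[i]) / (j - i + 1)
    return plane
```

Neighboring cells such as (0, 1) and (0, 2) share most of their tokens, which is exactly why semantic information leaks between adjacent elements in such a table.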
Neural Networks. Pub Date: 2025-06-10. DOI: 10.1016/j.neunet.2025.107685
Title: AnimalRTPose: Faster cross-species real-time animal pose estimation
Authors: Xin Wu, Lianming Wang, Jipeng Huang
Abstract: Recent advancements in computer vision have facilitated the development of sophisticated tools for analyzing complex animal behaviors, yet the diversity of animal morphology and environmental complexities present significant challenges to real-time animal pose estimation. To address these challenges, we introduce AnimalRTPose, a one-stage model designed for cross-species real-time animal pose estimation. At its core, AnimalRTPose leverages CSPNeXt†, a novel backbone network that integrates depthwise separable convolution with skip connections for high-frequency feature extraction, a channel attention mechanism (CAM) to enhance the fusion of high-frequency and low-frequency features, and spatial pyramid pooling (SPP) to capture multi-scale contextual information. This architecture enables robust feature representation across varying spatial resolutions, enhancing adaptability to diverse species and environments. Additionally, AnimalRTPose incorporates an efficient multi-scale feature fusion module that dynamically balances local detail and global structural consistency, ensuring high accuracy and robustness in pose estimation. Designed for scalability and versatility, AnimalRTPose supports single-animal, multi-animal, cross-species, and few-shot scenarios. Specifically, AnimalRTPose-N achieves 476 FPS on the NVIDIA RTX 2080 Ti, 769 FPS on the RTX 3090, and 1111 FPS on the A800, while demonstrating high throughput on edge devices: 196 FPS on the NVIDIA Jetson AGX Orin Developer Kit (275 TOPS, 15-60 W), 77 FPS on the Raspberry Pi 5 with AI HAT+ (26 TOPS, 25 W), and 64 FPS on the Atlas 200I Developer Kit A2 (8 TOPS, 24 W), all at a 640 × 640 input resolution. These results surpass all existing one-stage models, showcasing superior performance in real-time animal pose estimation. AnimalRTPose is thus highly applicable to scenarios requiring real-time animal behavior monitoring. Further details on the model configuration and dataset are available on the AnimalRTPose project website.
Volume 190, Article 107685.
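Among the backbone components listed, the channel attention mechanism (CAM) is the easiest to illustrate. The sketch below is a generic squeeze-and-excitation-style channel gate operating on given reduction/expansion weights; the paper's CAM may differ in its details, so treat the names and shapes here as assumptions.

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Generic squeeze-and-excitation-style channel attention:
    global average pool -> small MLP -> sigmoid gates -> rescale
    channels. Illustrative only; the exact CAM design is the paper's.

    feat : (C, H, W) feature map
    w1   : (C//r, C) reduction weights;  w2 : (C, C//r) expansion weights
    """
    squeeze = feat.mean(axis=(1, 2))                 # (C,) global context
    hidden = np.maximum(w1 @ squeeze, 0.0)           # ReLU
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))     # sigmoid in (0, 1)
    return feat * gates[:, None, None]               # per-channel rescale
```

With zero expansion weights every gate is sigmoid(0) = 0.5, i.e. every channel is uniformly halved, which is a convenient sanity check.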
Neural Networks. Pub Date: 2025-06-10. DOI: 10.1016/j.neunet.2025.107616
Title: Flashbacks to harmonize stability and plasticity in continual learning
Authors: Leila Mahmoodi, Peyman Moghadam, Munawar Hayat, Christian Simon, Mehrtash Harandi
Abstract: We introduce Flashback Learning (FL), a novel method designed to harmonize the stability and plasticity of models in Continual Learning (CL). Unlike prior approaches that primarily focus on regularizing model updates to preserve old information while learning new concepts, FL explicitly balances this trade-off through a bidirectional form of regularization. This approach effectively guides the model to swiftly incorporate new knowledge while actively retaining its old knowledge. FL operates through a two-phase training process and can be seamlessly integrated into various CL methods, including replay, parameter regularization, distillation, and dynamic-architecture techniques. In designing FL, we use two distinct knowledge bases: one to enhance plasticity and another to improve stability. FL ensures a more balanced model by utilizing both knowledge bases to regularize model updates. Theoretically, we analyze how the FL mechanism enhances the stability-plasticity balance. Empirically, FL demonstrates tangible improvements over baseline methods within the same training budget. By integrating FL into at least one representative baseline from each CL category, we observe an average accuracy improvement of up to 4.91% in Class-Incremental and 3.51% in Task-Incremental settings on standard image classification benchmarks. Additionally, measurements of the stability-to-plasticity ratio confirm that FL effectively enhances this balance. FL also outperforms state-of-the-art CL methods on more challenging datasets such as ImageNet. The code for this article will be available at https://github.com/csiro-robotics/Flashback-Learning.
Volume 190, Article 107616.
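The bidirectional regularization idea can be caricatured as a gradient step pulled toward two parameter anchors at once: a snapshot preserving old knowledge (stability) and one favoring the new task (plasticity). This is a heavily simplified sketch under our own assumptions; FL's actual two-phase procedure and knowledge bases are defined in the paper.

```python
import numpy as np

def flashback_step(theta, grad_task, theta_stable, theta_plastic,
                   lr=0.1, lam_s=0.5, lam_p=0.5):
    """One gradient step with a bidirectional quadratic regularizer:
    the task gradient is combined with quadratic pulls toward a
    stability anchor (old knowledge) and a plasticity anchor (new
    knowledge). A generic sketch of the idea, not FL's exact update."""
    grad = (grad_task
            + lam_s * (theta - theta_stable)    # retain old knowledge
            + lam_p * (theta - theta_plastic))  # absorb new knowledge
    return theta - lr * grad
```

When the two anchors pull with equal strength in opposite directions, the regularizer cancels and only the task gradient moves the parameters, which is the "balance" the abstract describes.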
Neural Networks. Pub Date: 2025-06-10. DOI: 10.1016/j.neunet.2025.107762
Title: Neuromimetic metaplasticity for adaptive continual learning without catastrophic forgetting
Authors: Suhee Cho, Hyeonsu Lee, Seungdae Baek, Se-Bum Paik
Abstract: Conventional intelligent systems based on deep neural network (DNN) models encounter challenges in achieving human-like continual learning due to catastrophic forgetting. Here, we propose a metaplasticity model inspired by human working memory, enabling DNNs to perform continual learning resistant to catastrophic forgetting without any pre- or post-processing. A key aspect of our approach is implementing distinct types of synapses, ranging from stable to flexible, and randomly intermixing them to train synaptic connections with different degrees of flexibility. This strategy allows the network to successfully learn a continuous stream of information, even under unexpected changes in input length. The model achieves a balanced tradeoff between memory capacity and performance without requiring additional training or structural modifications, dynamically allocating memory resources to retain both old and new information. Furthermore, the model demonstrates robustness against data-poisoning attacks by selectively filtering out erroneous memories, leveraging the Hebb repetition effect to reinforce the retention of significant data.
Volume 190, Article 107762.
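The stable-to-flexible synapse mix can be illustrated with per-weight flexibility factors that scale each synapse's gradient step: stable synapses barely move while flexible ones learn freely, and the two types are randomly intermixed across the weight matrix. The rule below is our minimal reading of that idea, not the paper's exact metaplasticity model.

```python
import numpy as np

def mixed_flexibility_update(W, grad, flex, lr=0.5):
    """Update each synapse with its own flexibility factor in [0, 1]:
    stable synapses (flex near 0) barely move, flexible ones (flex
    near 1) follow the gradient fully. In practice flex would be a
    random intermixing of a few discrete flexibility levels."""
    return W - lr * flex * grad
```

A plausible way to draw the mixture is `flex = rng.choice([0.05, 1.0], size=W.shape)`, giving each connection either a near-frozen or a fully plastic character at random.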
Neural Networks. Pub Date: 2025-06-10. DOI: 10.1016/j.neunet.2025.107670
Title: Rescaled three-mode principal component analysis: An approach to subspace recovery
Authors: Mingli Wang, Junbin Gao, Xinwei Jiang, Chunlong Hu, Qi Feng, Tianjiang Wang
Abstract: Many tasks, such as image denoising, can be framed within the context of subspace recovery. For its algorithm design, robustness is a critical consideration. In this paper, we propose a novel holistic approach to robust subspace recovery. The fundamental work consists of extending Stein's unbiased risk estimate to elliptical densities, expanding Gaussian scale mixtures, and estimating the error density from the dataset. These advancements serve as the foundation for a rescaled three-mode principal component analysis. By leveraging the majorization-minimization (MM) algorithm, we seamlessly integrate total variation into our model. A key feature of this approach is its inherent robustness to outliers, as demonstrated through our experimental results.
Volume 190, Article 107670.
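The majorization-minimization (MM) scheme named above can be illustrated on a toy problem: computing a robust (L1) center by repeatedly minimizing a quadratic majorizer of the absolute-value loss, i.e. iteratively reweighted least squares. This is a generic MM demonstration only; it shares nothing with the paper's three-mode PCA solver beyond the MM scheme itself.

```python
import numpy as np

def mm_l1_center(x, n_iters=100, eps=1e-8):
    """MM illustration: minimize sum_i |x_i - c| (a median-like robust
    center) by iterating the quadratic majorizer's closed-form
    minimizer, i.e. a weighted mean with weights 1/|x_i - c|."""
    c = x.mean()
    for _ in range(n_iters):
        w = 1.0 / np.maximum(np.abs(x - c), eps)  # majorizer weights
        c = np.sum(w * x) / np.sum(w)             # weighted-mean update
    return c
```

On data with one gross outlier the iterate is pulled toward the majority of the points rather than the outlier, which is the robustness property MM-based reweighting buys.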
Neural Networks. Pub Date: 2025-06-10. DOI: 10.1016/j.neunet.2025.107686
Title: Multimodal learning rebalanced: Negative correlation ensembles for improved performance
Authors: Zhixian Wang, Tao Zhang, Wu Huang
Abstract: Multimodal learning aims to integrate information from different modalities to overcome the limitations of single-modal information. Recent research has shown that multimodal learning methods often focus on optimizing a dominant modality, leaving overall model performance underdeveloped and sometimes even inferior to single-modal models, a phenomenon referred to as the modality imbalance problem. To overcome this issue, some studies adaptively adjust gradients or loss functions based on identifying the dominant modality. However, while enhancing the convergence of the non-dominant modalities, these methods often reduce the ability to utilize information from the dominant modality. We therefore treat each modality in our model as a base classifier and address the modality imbalance problem from the perspective of ensemble learning, fully leveraging the information from each modality. In addition, we introduce negative correlation learning to ensure the diversity of information encoded across modalities. Through experiments across multiple datasets, various late-fusion techniques, and a variety of tasks, we validate the superior performance of the proposed method, as evidenced by significant accuracy improvements over existing approaches.
Volume 190, Article 107686.
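Negative correlation learning has a classic closed form (the Liu-Yao penalty), in which each ensemble member's loss rewards deviation from the ensemble mean. Treating each modality branch as one ensemble member, a sketch under that classic formulation (our assumption of the exact loss used) looks like:

```python
import numpy as np

def ncl_losses(preds, y, lam=0.5):
    """Classic negative correlation learning loss for an ensemble:
    each member pays its squared error plus a penalty that rewards
    disagreement with the ensemble mean. For member i the penalty
    (f_i - f_bar) * sum_{j != i} (f_j - f_bar) simplifies to
    -(f_i - f_bar)^2 because the deviations sum to zero.

    preds : (M, ...) predictions of M members;  y : target(s)
    """
    f_bar = preds.mean(axis=0)
    penalty = -(preds - f_bar) ** 2
    return (preds - y) ** 2 + lam * penalty
```

With lam = 0 the members train independently; increasing lam trades individual accuracy for ensemble diversity, which is exactly the lever used to keep modality branches from collapsing onto the dominant one.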
Neural Networks. Pub Date: 2025-06-10. DOI: 10.1016/j.neunet.2025.107683
Title: Gradient domain model-driven algorithm unfolding network for blind image deblurring
Authors: Zheng Guo, Zirui Zhang, Wei Yan, Zhixiang Wu, Zhenhua Xu, Huasong Chen, Yunjing Ji, Chunyong Wang, Jiancheng Lai, Zhenhua Li
Abstract: Blind image deblurring remains a challenging ill-posed problem due to the simultaneous estimation of clear images and blur kernels. Recently, image deblurring methods that utilize algorithm unfolding techniques have made significant advancements. However, the classical image gradient prior, despite its effectiveness, remains an unexplored avenue in deep unfolding frameworks. In this paper, to further exploit the role of image gradients in deblurring tasks, we introduce a gradient-driven algorithm unfolding network (GDUNet) by generalizing the classical sparse gradient deblurring model. We design specific proximal mapping modules for various prior terms to flexibly learn more accurate prior distributions. The entire framework is structured around alternating updates of images and kernels, naturally embedding the convolutional paradigm of blurred images. Additionally, we introduce a blur pattern attention module (BPAM) designed to modulate the finest-scale image features and facilitate the restoration of the blur kernel. Experimental results on multiple color-blurred image datasets indicate that our GDUNet achieves superior performance compared to state-of-the-art methods. Code: https://github.com/Redamancy0222/GDUNet
Volume 190, Article 107683.
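The classical sparse-gradient prior that GDUNet generalizes has a well-known proximal mapping: soft-thresholding of the image's finite-difference gradients. The hand-crafted operator is shown below for reference; GDUNet replaces operators of this kind with learned proximal modules, so this is background, not the paper's module.

```python
import numpy as np

def prox_sparse_gradient(img, lam):
    """Proximal step for an L1 (sparse) prior on image gradients:
    soft-threshold the horizontal and vertical finite differences.
    Small gradients (noise, ringing) are zeroed; strong edges survive
    with their magnitude reduced by lam."""
    gx = np.diff(img, axis=1)  # horizontal differences, (H, W-1)
    gy = np.diff(img, axis=0)  # vertical differences,   (H-1, W)
    soft = lambda v: np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)
    return soft(gx), soft(gy)
```

In an unfolded network, each iteration alternates a data-fidelity step (deconvolution with the current kernel estimate) with a prior step like this one, which is the alternating image/kernel update structure the abstract describes.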
Neural Networks. Pub Date: 2025-06-10. DOI: 10.1016/j.neunet.2025.107761
Title: Prototype-Neighbor Networks with task-specific enhanced meta-learning for few-shot classification
Authors: Zhen Jiang, Zeyu Feng, Bolin Niu
Abstract: As a promising technique for Few-Shot Classification (FSC), Prototypical Networks (PN) have gained increasing attention due to their simplicity and effectiveness. However, the unimodal prototypes derived from a few labeled data may lack representativeness and fail to capture complex data distributions. Inspired by KNN, a model-free classification algorithm, we propose a Neighbor Network (NN) to compensate for the limitations of PN. Specifically, NN classifies query samples based on their neighbors and optimizes the metric space so that samples of the same class are grouped together. By combining PN and NN, we propose Prototype-Neighbor Networks (PNN) to learn a better metric space in which a few labeled samples suffice to train a reliable FSC model. To enhance adaptability to new classes, we improve the meta-learning mechanism by incorporating a task-specific fine-tuning phase between the meta-training and meta-testing stages. Additionally, we present a data augmentation method that combines PN and NN to generate pseudo-labeled data. Compared to self-training approaches, our method significantly reduces pseudo-label noise and confirmation bias. The proposed method has been validated on three benchmark datasets. Compared to 24 state-of-the-art FSC algorithms, PNN outperforms the others on miniImageNet and CUB while achieving competitive results on tieredImageNet. Experimental results on four medical image datasets further demonstrate the effectiveness of PNN on cross-domain tasks. The source code and related models are available at https://github.com/Dracula-funny/PNN.
Volume 190, Article 107761.
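The PN-plus-NN combination can be sketched as blending two per-class scores: negative distance to the class prototype (PN) and the fraction of the k nearest support samples belonging to the class (NN). The mixing weight alpha is a hypothetical knob of ours; the paper combines the two networks inside a learned metric space rather than by a fixed scalar blend.

```python
import numpy as np

def pnn_predict(query, support, labels, n_classes, k=3, alpha=0.5):
    """Classify a query by blending a prototype score (negative
    distance to each class mean, as in PN) with a neighbor score
    (k-nearest-neighbor vote, as in KNN). Illustrative sketch only."""
    dists = np.linalg.norm(support - query, axis=1)
    # prototype score: negative distance to each class mean
    protos = np.stack([support[labels == c].mean(axis=0)
                       for c in range(n_classes)])
    proto_score = -np.linalg.norm(protos - query, axis=1)
    # neighbor score: fraction of the k nearest neighbors in each class
    nn_idx = np.argsort(dists)[:k]
    nn_score = np.array([(labels[nn_idx] == c).mean()
                         for c in range(n_classes)])
    return int(np.argmax(alpha * proto_score + (1 - alpha) * nn_score))
```

The neighbor term is what rescues queries that sit far from a unimodal prototype but close to a local cluster of same-class support points, which is precisely the failure mode of plain PN that the paper targets.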
Neural Networks. Pub Date: 2025-06-09. DOI: 10.1016/j.neunet.2025.107653
Title: Prediction of functional neural circuits in Caenorhabditis elegans based on overlapping community detection
Authors: Xuebin Wang, Ruixue Qin, Ke Zhang, Zengru Di, Qiang Liu, He Liu
Abstract: The identification of functional neural circuits is crucial for understanding brain functions. However, experimental methods are often labor-intensive and resource-intensive. In this study, we modified the BIGCLAM algorithm to detect overlapping communities in directed and weighted networks and applied it to the neural networks of hermaphrodite and male Caenorhabditis elegans (C. elegans). Given the high similarity in connotation between network communities and functional neural circuits, we can predict functional neural circuits by detecting communities within the neural networks, thereby reducing the complexity of experimental research. In hermaphrodites, we predicted functional neural circuits related to various behaviors, including egg-laying, pharyngeal regulation, stress-induced sleep, tail sensation, and mechanosensation. In males, we identified functional neural circuits involved in sex-specific behaviors, such as mating and mate-searching, as well as circuits related to mechanosensation and food representation. These findings provide new insights into the neural mechanisms underlying behavior and sexual dimorphism in C. elegans. The modified algorithm also has potential applications in analyzing other complex systems.
Volume 190, Article 107653.
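BIGCLAM, the algorithm the authors modify, models the probability of an edge between two nodes from their nonnegative community-affiliation vectors as P(u ~ v) = 1 - exp(-F_u · F_v); overlap arises naturally when a node carries several nonzero affiliations. A minimal sketch of the standard (undirected, unweighted) formula follows; the paper's extension to directed, weighted networks is not reproduced here.

```python
import numpy as np

def bigclam_edge_prob(F_u, F_v):
    """BIGCLAM edge model: nodes u and v with nonnegative community
    affiliation vectors F_u, F_v are linked with probability
    1 - exp(-F_u . F_v). Strong shared affiliations push the
    probability toward 1; no shared community gives probability 0."""
    return 1.0 - np.exp(-float(F_u @ F_v))
```

Fitting the affiliation matrix F by maximizing the likelihood of the observed adjacency, then thresholding each column of F, yields the (possibly overlapping) community memberships that stand in for functional circuits.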