Neural Networks Pub Date: 2025-03-25 DOI: 10.1016/j.neunet.2025.107381
Yanan Cao , Fengzhao Shi , Qing Yu , Xixun Lin , Chuan Zhou , Lixin Zou , Peng Zhang , Zhao Li , Dawei Yin
{"title":"IBPL: Information Bottleneck-based Prompt Learning for graph out-of-distribution detection","authors":"Yanan Cao , Fengzhao Shi , Qing Yu , Xixun Lin , Chuan Zhou , Lixin Zou , Peng Zhang , Zhao Li , Dawei Yin","doi":"10.1016/j.neunet.2025.107381","DOIUrl":"10.1016/j.neunet.2025.107381","url":null,"abstract":"<div><div>When training and test graph samples follow different data distributions, graph out-of-distribution (OOD) detection becomes an indispensable component of building reliable and safe graph learning systems. Motivated by the significant progress on prompt learning, graph prompt-based methods, which enable a well-trained graph neural network to detect OOD graphs without modifying any model parameters, have become a standard approach with promising computational efficiency and model effectiveness. However, these methods ignore the influence of overlapping features that exist in both in-distribution (ID) and OOD graphs, which weakens the difference between them and leads to sub-optimal detection results. In this paper, we present the <strong>I</strong>nformation <strong>B</strong>ottleneck-based <strong>P</strong>rompt <strong>L</strong>earning (IBPL) to overcome this challenging problem. Specifically, IBPL includes a new graph prompt that jointly performs the mask operation on node features and the graph structure. Building upon this, we develop an information bottleneck (IB)-based objective to optimize the proposed graph prompt. Since the overlapping features are inaccessible, IBPL introduces a noise data augmentation which generates a series of perturbed graphs to fully cover the overlapping features. By minimizing the mutual information between the prompt graph and the perturbed graphs, our objective can eliminate the overlapping features effectively. 
To avoid the negative impact of the perturbed graphs, IBPL simultaneously maximizes the mutual information between the prompt graph and the category label to better extract the ID features. We conduct experiments on multiple real-world datasets in both supervised and unsupervised scenarios. The empirical results and extensive model analyses demonstrate the superior performance of IBPL over several competitive baselines.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"188 ","pages":"Article 107381"},"PeriodicalIF":6.0,"publicationDate":"2025-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143714778","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
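The trade-off in the IB-style objective above (minimize mutual information with the perturbed graphs, maximize it with the category label) can be sketched with simple surrogates: cross-entropy for the label term, and a cosine-similarity penalty in place of a proper mutual-information estimator. All names here are hypothetical; this illustrates the shape of the objective, not the paper's implementation.

```python
import numpy as np

def ib_prompt_loss(z_prompt, z_perturbed, logits, label, beta=0.1):
    """Surrogate IB objective (sketch): cross-entropy keeps I(prompt; label)
    high, while a cosine-similarity penalty against perturbed-graph
    embeddings stands in for minimizing I(prompt; perturbed)."""
    # maximize I(prompt; label): cross-entropy of the ID classifier
    p = np.exp(logits - logits.max())
    p /= p.sum()
    ce = -np.log(p[label])
    # minimize I(prompt; perturbed): penalize alignment with noisy views
    cos = [z_prompt @ zk / (np.linalg.norm(z_prompt) * np.linalg.norm(zk))
           for zk in z_perturbed]
    return float(ce + beta * np.mean(cos))
```

A prompt embedding that stays aligned with the perturbed (overlap-covering) views pays a higher loss than one that moves away from them, which is the intended pressure.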
Neural Networks Pub Date: 2025-03-24 DOI: 10.1016/j.neunet.2025.107412
Yiwei Li , Jiaxin Liu , Lei Jia , Liangze Yin , Xingpei Li , Yong Zhang
{"title":"Noise-resistant predefined-time convergent ZNN models for dynamic least squares and multi-agent systems","authors":"Yiwei Li , Jiaxin Liu , Lei Jia , Liangze Yin , Xingpei Li , Yong Zhang","doi":"10.1016/j.neunet.2025.107412","DOIUrl":"10.1016/j.neunet.2025.107412","url":null,"abstract":"<div><div>Zeroing neural networks (ZNNs) are commonly used for dynamic matrix equations, but their performance under numerically unstable conditions has not been thoroughly explored, especially for matrices with unequal numbers of rows and columns. The challenge is further aggravated by noise, particularly in dynamic least squares (DLS) problems. To address these issues, we propose the QR decomposition-driven noise-resistant ZNN (QRDN-ZNN) model, specifically designed for DLS problems. By integrating QR decomposition into the ZNN framework, QRDN-ZNN enhances numerical stability and guarantees both precise and rapid convergence through a novel activation function (N-Af). As validated by theoretical analysis and experiments, the model can effectively counter disturbances and enhance solution accuracy in dynamic environments. Experimental results show that, in terms of noise resistance, the QRDN-ZNN model outperforms existing mainstream ZNN models, including the original ZNN, integral-enhanced ZNN, double-integral enhanced ZNN, and super-twisting ZNN. Furthermore, the N-Af offers higher accuracy and faster convergence than other state-of-the-art activation functions. 
To demonstrate the practical utility of the method, we develop a new noise-resistant consensus protocol inspired by QRDN-ZNN, which enables multi-agent systems to reach consensus even in noisy conditions.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"187 ","pages":"Article 107412"},"PeriodicalIF":6.0,"publicationDate":"2025-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143696832","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
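For intuition, a zeroing neural network drives the error e(t) = A x(t) - b(t) through the design formula de/dt = -gamma * phi(e). With a linear activation and a constant square A this reduces to the toy tracker below; it is a hedged sketch of the ZNN principle only, not the QRDN-ZNN model, which adds QR decomposition and the N-Af activation.

```python
import numpy as np

def znn_track(A, b_of_t, db_of_t, x0, gamma=10.0, dt=1e-3, T=2.0):
    """Toy zeroing neural network for A x(t) = b(t) with constant A:
    the error e = A x - b obeys de/dt = -gamma * e, so it decays
    exponentially while x(t) tracks the moving solution."""
    x, t = x0.astype(float), 0.0
    Ainv = np.linalg.inv(A)
    while t < T:
        e = A @ x - b_of_t(t)
        x = x + dt * (Ainv @ (db_of_t(t) - gamma * e))  # explicit Euler step
        t += dt
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = lambda t: np.array([np.sin(t), np.cos(t)])
db = lambda t: np.array([np.cos(t), -np.sin(t)])
x_final = znn_track(A, b, db, x0=np.zeros(2))
# after convergence the residual ||A x - b(T)|| is small
```

The noise-resistant variants in the paper replace the plain `-gamma * e` term with integral-enhanced or specially activated dynamics so the residual stays small even under additive disturbances.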
Neural Networks Pub Date: 2025-03-24 DOI: 10.1016/j.neunet.2025.107420
Xuandi Luo , Huaidong Zhang , Yi Xie , Hongrui Zhang , Xuemiao Xu , Shengfeng He
{"title":"L3Net: Localized and Layered Reparameterization for incremental learning","authors":"Xuandi Luo , Huaidong Zhang , Yi Xie , Hongrui Zhang , Xuemiao Xu , Shengfeng He","doi":"10.1016/j.neunet.2025.107420","DOIUrl":"10.1016/j.neunet.2025.107420","url":null,"abstract":"<div><div>Model-based class incremental learning (CIL) methods aim to address the challenge of catastrophic forgetting by retaining certain parameters and expanding the model architecture. However, retaining too many parameters can lead to an overly complex model, increasing inference overhead. Additionally, compressing these parameters to reduce the model size can result in performance degradation. To tackle these challenges, we propose a novel three-stage CIL framework called <strong>L</strong>ocalized and <strong>L</strong>ayered Reparameterization for Incremental <strong>L</strong>earning (<strong>L<sup>3</sup>Net</strong>). The rationale behind our approach is to balance model complexity and performance by selectively expanding and optimizing critical components. Specifically, the framework introduces a Localized Dual-path Expansion structure, which allows the model to learn simultaneously from both old and new features by integrating a fusion selector after each convolutional layer. To further minimize potential conflicts between old and new features, we implement the Feature Selectors Gradient Resetting method, which sparsifies the fusion selectors and reduces the influence of redundant old features. Additionally, to address classification bias resulting from class imbalance, we design the Decoupled Balanced Distillation technique and apply Logit Adjustment to more effectively retain knowledge from the rehearsal set. 
Extensive experiments demonstrate that our <strong>L<sup>3</sup>Net</strong> framework outperforms state-of-the-art methods on widely used benchmarks, including CIFAR-100 and ImageNet-100/1000.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"188 ","pages":"Article 107420"},"PeriodicalIF":6.0,"publicationDate":"2025-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143704896","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
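The inference-time benefit of reparameterization, parallel paths collapsing into a single operator, can be seen in a minimal linear sketch; the alpha-weighted fusion below is a hypothetical stand-in for L3Net's fusion selectors, not its actual architecture.

```python
import numpy as np

def merge_dual_path(W_old, W_new, alpha):
    """Collapse two parallel linear paths (frozen old-task weights and
    new-task weights, mixed by a fusion coefficient alpha) into one
    weight matrix, so the expanded model adds no inference overhead."""
    return alpha * W_old + (1 - alpha) * W_new

rng = np.random.default_rng(0)
W_old, W_new = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
x = rng.normal(size=4)
alpha = 0.7
merged = merge_dual_path(W_old, W_new, alpha)
# the merged single layer reproduces the dual-path output exactly
assert np.allclose(merged @ x, alpha * (W_old @ x) + (1 - alpha) * (W_new @ x))
```

Because the fusion is linear, the dual-path expansion can be trained with extra capacity and then folded away, which is the complexity/performance balance the framework targets.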
Neural Networks Pub Date: 2025-03-24 DOI: 10.1016/j.neunet.2025.107417
Wanqi Zhou , Shuanghao Bai , Yuqing Xie , Yicong He , Qibin Zhao , Badong Chen
{"title":"An information-theoretic approach for heterogeneous differentiable causal discovery","authors":"Wanqi Zhou , Shuanghao Bai , Yuqing Xie , Yicong He , Qibin Zhao , Badong Chen","doi":"10.1016/j.neunet.2025.107417","DOIUrl":"10.1016/j.neunet.2025.107417","url":null,"abstract":"<div><div>With the advancement of deep learning, a variety of differentiable causal discovery methods have emerged, attracting increasing attention for their excellent scalability and interpretability. However, these methods often struggle with complex heterogeneous datasets that exhibit environmental diversity and are characterized by shifts in noise distribution. To this end, we introduce a novel information-theoretic approach designed to enhance the robustness of differentiable causal discovery methods. Specifically, we integrate Minimum Error Entropy (MEE) as an adaptive error regulator into the structure learning framework. MEE effectively reduces error variability across diverse samples, enabling our model to adapt dynamically to varying levels of complexity and noise. This adjustment significantly improves the precision and stability of the model. Extensive experiments on both synthetic and real-world datasets have demonstrated significant performance enhancements over existing methods, affirming the effectiveness of our approach. 
The code is available at <span><span>https://github.com/ElleZWQ/MHCD</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"188 ","pages":"Article 107417"},"PeriodicalIF":6.0,"publicationDate":"2025-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143735174","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
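MEE is commonly built on the information potential V(e) = (1/N^2) * sum_ij G_sigma(e_i - e_j): maximizing V (equivalently, minimizing Renyi's quadratic entropy of the errors) concentrates the errors regardless of their absolute scale, which is what makes it robust under shifting noise distributions. A minimal sketch, with names of our own choosing:

```python
import numpy as np

def mee_potential(errors, sigma=1.0):
    """Information potential used in Minimum Error Entropy (MEE):
    V(e) = (1/N^2) * sum_ij G_sigma(e_i - e_j), with G_sigma a
    Gaussian kernel on pairwise error differences."""
    e = np.asarray(errors, float)
    d = e[:, None] - e[None, :]                      # pairwise differences
    return float(np.mean(np.exp(-d ** 2 / (2 * sigma ** 2))))

def mee_loss(errors, sigma=1.0):
    # minimize -V(e): tightly clustered errors give a lower loss than
    # widely spread ones, independent of the errors' mean
    return -mee_potential(errors, sigma)
```

Used as an error regulator, this loss rewards models whose residuals are concentrated, rather than merely small on average, so a few environment-specific outliers do not dominate training.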
Neural Networks Pub Date: 2025-03-23 DOI: 10.1016/j.neunet.2025.107410
Danny Smyl , Tyler N. Tallman , Laura Homa , Chenoa Flournoy , Sarah J. Hamilton , John Wertz
{"title":"Physics Informed Neural Networks for Electrical Impedance Tomography","authors":"Danny Smyl , Tyler N. Tallman , Laura Homa , Chenoa Flournoy , Sarah J. Hamilton , John Wertz","doi":"10.1016/j.neunet.2025.107410","DOIUrl":"10.1016/j.neunet.2025.107410","url":null,"abstract":"<div><div>Electrical Impedance Tomography (EIT) is an imaging modality used to reconstruct the internal conductivity distribution of a domain via boundary voltage measurements. In this paper, we present a novel EIT approach for integrated sensing of composite structures utilizing Physics Informed Neural Networks (PINNs). Unlike traditional purely data-driven models, PINNs incorporate the underlying physical principles governing EIT directly into the learning process, enabling precise and rapid reconstructions. We demonstrate the effectiveness of PINNs with a variety of physical constraints for integrated sensing. The proposed approach has the potential to enhance material characterization and condition monitoring, offering a robust alternative to classical EIT approaches.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"188 ","pages":"Article 107410"},"PeriodicalIF":6.0,"publicationDate":"2025-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143737798","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
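The physics term a PINN adds for EIT comes from the governing equation div(sigma * grad(u)) = 0. A finite-difference version of that residual on a uniform grid (an illustrative sketch with assumed helper names, not the paper's formulation) looks like:

```python
import numpy as np

def eit_residual(sigma, u, h=1.0):
    """Discrete residual of div(sigma * grad(u)) = 0 at interior grid
    points. In a PINN this residual, evaluated at collocation points,
    is added to the data-misfit term of the training loss."""
    # fluxes with conductivity averaged onto cell faces
    sx = 0.5 * (sigma[:, 1:] + sigma[:, :-1])        # faces in x
    sy = 0.5 * (sigma[1:, :] + sigma[:-1, :])        # faces in y
    fx = sx * (u[:, 1:] - u[:, :-1]) / h             # sigma * du/dx
    fy = sy * (u[1:, :] - u[:-1, :]) / h             # sigma * du/dy
    div = (fx[1:-1, 1:] - fx[1:-1, :-1]) / h \
        + (fy[1:, 1:-1] - fy[:-1, 1:-1]) / h
    return div

def physics_loss(sigma, u):
    # mean squared PDE residual; zero for any exact interior solution
    return float(np.mean(eit_residual(sigma, u) ** 2))
```

With constant conductivity and a linear potential the residual vanishes, which is a convenient sanity check before coupling the term to boundary-voltage data.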
Neural Networks Pub Date: 2025-03-23 DOI: 10.1016/j.neunet.2025.107431
Zhiyong Zhou , Zhechen Zhou , Xusheng Qian , Jisu Hu , Bo Peng , Chen Geng , Bin Dai , He Huang , Wenbin Zhang , Yakang Dai
{"title":"BSA-Seg: A Bi-level sparse attention network combining narrow band loss for multi-target medical image segmentation","authors":"Zhiyong Zhou , Zhechen Zhou , Xusheng Qian , Jisu Hu , Bo Peng , Chen Geng , Bin Dai , He Huang , Wenbin Zhang , Yakang Dai","doi":"10.1016/j.neunet.2025.107431","DOIUrl":"10.1016/j.neunet.2025.107431","url":null,"abstract":"<div><div>Segmentation of multiple targets of varying sizes within medical images is of significant importance for the diagnosis of disease and pathological research. Transformer-based methods are emerging in medical image segmentation, leveraging the powerful yet computationally intensive self-attention mechanism. A variety of attention mechanisms have been proposed to reduce computation at the cost of accuracy, utilizing handcrafted patterns within local or artificially defined receptive fields. Furthermore, the common region-based loss functions are insufficient for guiding the transformer to focus on tissue regions, making them unsuitable for segmenting tissues with intricate boundaries. This paper presents a bi-level sparse attention network and a narrow-band (NB) loss function for the accurate and efficient multi-target segmentation of medical images. In particular, we introduce a bi-level sparse attention module (BSAM) and formulate a segmentation network based on this module. The BSAM consists of coarse-grained patch-level attention and fine-grained pixel-level attention, which captures fine-grained contextual features in adaptive receptive fields learned by patch-level attention. This results in enhanced segmentation accuracy while simultaneously reducing computational complexity. The proposed NB loss function constructs a target region in close proximity to the tissue boundary. The network is thus guided to perform boundary-aware segmentation, thereby simultaneously alleviating the issues of over-segmentation and under-segmentation. 
A series of comprehensive experiments on whole brains, brain tumors, and abdominal organs demonstrates that our method outperforms other state-of-the-art segmentation methods. Furthermore, the BSAM and NB loss can be applied flexibly to a variety of network frameworks.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"188 ","pages":"Article 107431"},"PeriodicalIF":6.0,"publicationDate":"2025-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143725254","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
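A narrow-band loss of the kind described can be sketched by masking an ordinary region loss to a band around the label boundary; the helper names, band width, and 4-neighbourhood choice below are our own assumptions, not the paper's exact construction.

```python
import numpy as np

def narrow_band_mask(seg, width=1):
    """Binary band of `width` pixels around the label boundary: a pixel
    joins the band if any 4-neighbour carries a different label, then
    the band is dilated width-1 more times."""
    seg = seg.astype(bool)
    band = np.zeros_like(seg, bool)
    diff_x = seg[:, 1:] != seg[:, :-1]
    diff_y = seg[1:, :] != seg[:-1, :]
    band[:, 1:] |= diff_x; band[:, :-1] |= diff_x
    band[1:, :] |= diff_y; band[:-1, :] |= diff_y
    for _ in range(width - 1):                 # grow with the same stencil
        grown = band.copy()
        grown[:, 1:] |= band[:, :-1]; grown[:, :-1] |= band[:, 1:]
        grown[1:, :] |= band[:-1, :]; grown[:-1, :] |= band[1:, :]
        band = grown
    return band

def narrow_band_bce(pred, target, width=1, eps=1e-7):
    """Binary cross-entropy restricted to the narrow band, so training
    pressure concentrates on boundary pixels."""
    band = narrow_band_mask(target, width)
    p = np.clip(pred[band], eps, 1 - eps)
    t = target[band].astype(float)
    return float(-np.mean(t * np.log(p) + (1 - t) * np.log(1 - p)))
```

Restricting the loss this way leaves interior pixels (which most region losses already get right) out of the objective, focusing capacity on intricate boundaries.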
Neural Networks Pub Date: 2025-03-22 DOI: 10.1016/j.neunet.2025.107400
Lei Chen , Xinzhe Cao , Tingqin He , Yepeng Xu , Xuxin Liu , Bowen hu
{"title":"A lightweight All-MLP time–frequency anomaly detection for IIoT time series","authors":"Lei Chen , Xinzhe Cao , Tingqin He , Yepeng Xu , Xuxin Liu , Bowen hu","doi":"10.1016/j.neunet.2025.107400","DOIUrl":"10.1016/j.neunet.2025.107400","url":null,"abstract":"<div><div>Anomaly detection in the Industrial Internet of Things (IIoT) aims to identify abnormal sensor signals to ensure industrial production safety. However, most existing models focus only on high accuracy, building bulky neural networks with deep structures and huge parameter counts. As a result, these models usually exhibit poor timeliness and high resource consumption, making them unsuitable for resource-limited edge industrial scenarios. To solve this problem, a lightweight All-MLP time–frequency anomaly detection model, namely LTFAD, is proposed for IIoT time series. <em>Firstly</em>, unlike traditional deep and bulky solutions, a shallow and lightweight All-MLP architecture is designed to achieve high timeliness and low resource consumption. <em>Secondly</em>, based on the lightweight architecture, a dual-branch network is constructed to improve model accuracy by simultaneously learning “global to local” and “local to global” reconstruction. <em>Finally</em>, time–frequency joint learning is employed in each reconstruction branch to further enhance accuracy. To the best of our knowledge, this is the first work to develop a time–frequency anomaly detection model based only on the shallow All-MLP architecture. Extensive experiments demonstrate that LTFAD can quickly and accurately identify anomalies on resource-limited edge devices, such as the Raspberry Pi 4b and Jetson Xavier NX. 
The source code for LTFAD is available at <span><span>https://github.com/infogroup502/LTFAD</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"187 ","pages":"Article 107400"},"PeriodicalIF":6.0,"publicationDate":"2025-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143726184","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
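Time–frequency joint learning of the sort described can score a window by combining time-domain reconstruction error with the error between FFT magnitudes. This is a hedged sketch of the scoring idea only, not LTFAD's dual-branch network:

```python
import numpy as np

def tf_anomaly_score(window, reconstruction):
    """Joint time-frequency anomaly score (sketch): reconstruction error
    in the time domain plus the discrepancy between FFT magnitude
    spectra, so both amplitude and spectral anomalies are flagged."""
    t_err = np.mean((window - reconstruction) ** 2)
    f_err = np.mean((np.abs(np.fft.rfft(window)) -
                     np.abs(np.fft.rfft(reconstruction))) ** 2)
    # scale the frequency term down by the window length so neither
    # domain dominates (an arbitrary choice for this sketch)
    return float(t_err + f_err / len(window))
```

A faithful reconstruction scores zero; a reconstruction that misses either the waveform or its dominant frequencies scores higher, which is the signal an edge detector thresholds.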
Neural Networks Pub Date: 2025-03-22 DOI: 10.1016/j.neunet.2025.107397
Alan John Varghese , Zhen Zhang , George Em Karniadakis
{"title":"SympGNNs: Symplectic Graph Neural Networks for identifying high-dimensional Hamiltonian systems and node classification","authors":"Alan John Varghese , Zhen Zhang , George Em Karniadakis","doi":"10.1016/j.neunet.2025.107397","DOIUrl":"10.1016/j.neunet.2025.107397","url":null,"abstract":"<div><div>Existing neural network models for learning Hamiltonian systems, such as SympNets, although accurate in low dimensions, struggle to learn the correct dynamics for high-dimensional many-body systems. Herein, we introduce Symplectic Graph Neural Networks (SympGNNs) that can effectively handle system identification in high-dimensional Hamiltonian systems, as well as node classification. SympGNNs combine symplectic maps with permutation equivariance, a property of graph neural networks. Specifically, we propose two variants of SympGNNs: (i) G-SympGNN and (ii) LA-SympGNN, arising from different parameterizations of the kinetic and potential energy. We demonstrate the capabilities of SympGNN on two physical examples: a 40-particle coupled harmonic oscillator, and a 2000-particle molecular dynamics simulation in a two-dimensional Lennard-Jones potential. Furthermore, we demonstrate the performance of SympGNN in the node classification task, achieving accuracy comparable to the state-of-the-art. 
We also empirically show that SympGNN can overcome the oversmoothing and heterophily problems, two key challenges in the field of graph neural networks.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"187 ","pages":"Article 107397"},"PeriodicalIF":6.0,"publicationDate":"2025-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143679976","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
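The symplectic maps such models build on compose triangular updates of the form p ← p − dt·∂V/∂q followed by q ← q + dt·∂T/∂p, which are symplectic by construction for separable Hamiltonians. A minimal sketch with a harmonic-oscillator sanity check (this illustrates the update structure, not the SympGNN parameterization):

```python
def symplectic_euler(q, p, grad_V, grad_T, dt, steps):
    """Separable-Hamiltonian symplectic Euler: the momentum update uses
    the current position, and the position update uses the *new*
    momentum. This triangular structure makes each step a symplectic
    map, so energy stays bounded over long horizons."""
    for _ in range(steps):
        p = p - dt * grad_V(q)
        q = q + dt * grad_T(p)
    return q, p

# sanity check on H = p^2/2 + q^2/2 (harmonic oscillator)
q, p = symplectic_euler(1.0, 0.0,
                        grad_V=lambda q: q, grad_T=lambda p: p,
                        dt=0.01, steps=10000)
energy = 0.5 * (q * q + p * p)  # remains near 0.5 even after 10,000 steps
```

An explicit (non-symplectic) Euler integrator would let the energy drift steadily upward on the same problem; learnable layers that keep this triangular form inherit the bounded-energy behaviour.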
Neural Networks Pub Date: 2025-03-22 DOI: 10.1016/j.neunet.2025.107407
Tairan Huang , Qiutong Li , Cong Xu , Jianliang Gao , Zhao Li , Shichao Zhang
{"title":"Revisiting low-homophily for graph-based fraud detection","authors":"Tairan Huang , Qiutong Li , Cong Xu , Jianliang Gao , Zhao Li , Shichao Zhang","doi":"10.1016/j.neunet.2025.107407","DOIUrl":"10.1016/j.neunet.2025.107407","url":null,"abstract":"<div><div>The openness of the Internet enables a large number of fraudulent behaviors, which have become a serious threat. Graph-based fraud detectors have attracted extensive interest since the abundant structural information of graph data has proved effective. Conventional Graph Neural Network (GNN) approaches reveal fraudsters based on the homophily assumption. But fraudsters typically generate heterophilous connections and label-imbalanced neighborhoods. Such behaviors deteriorate the performance of GNNs in fraud detection tasks due to the low homophily in graphs. Though some recent works have noticed these challenges, they either treat heterophilous connections as homophilous ones or tend to reduce heterophily, largely ignoring the benefits that heterophily can offer. In this work, an integrated two-strategy framework HeteGAD is proposed to balance both homophily and heterophily information from neighbors. The key lies in explicitly shrinking intra-class distance and increasing inter-class segregation. Specifically, the Heterophily-aware Aggregation Strategy teases out the feature disparity on heterophilous neighbors and augments the disparity between representations with different labels. And the Homophily-aware Aggregation Strategy is devised to capture homophilous information in a global context and augment the similarity between representations with the same label. Finally, two corresponding inter-relational attention mechanisms are incorporated to refine the procedure of modeling the interaction of multiple relations. 
Experiments are conducted to evaluate the proposed method on two real-world datasets, and demonstrate that HeteGAD outperforms 11 state-of-the-art baselines for fraud detection.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"188 ","pages":"Article 107407"},"PeriodicalIF":6.0,"publicationDate":"2025-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143725255","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
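The idea of shrinking intra-class distance while increasing inter-class segregation can be illustrated with a toy aggregation that treats same-label and different-label neighbours differently; this is a hypothetical simplification of the two strategies, without the inter-relational attention mechanisms.

```python
import numpy as np

def split_aggregate(h, neighbors, labels, node):
    """Two-strategy toy aggregation: average homophilous neighbours
    (same label) to pull the node toward its class, and subtract the
    mean of heterophilous neighbours (different label) to push classes
    apart, using heterophilous edges as signal instead of noise."""
    same = [n for n in neighbors if labels[n] == labels[node]]
    diff = [n for n in neighbors if labels[n] != labels[node]]
    out = h[node].copy()
    if same:
        out = out + np.mean(h[same], axis=0)   # shrink intra-class distance
    if diff:
        out = out - np.mean(h[diff], axis=0)   # widen inter-class margin
    return out
```

In a real detector the labels of neighbours are unknown at inference, so learned edge classifiers or attention weights take the place of the hard same/diff split used here.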
Neural Networks Pub Date: 2025-03-22 DOI: 10.1016/j.neunet.2025.107409
Yongyan Guo, Gang Wu
{"title":"Restarted multiple kernel algorithms with self-guiding for large-scale multi-view clustering","authors":"Yongyan Guo, Gang Wu","doi":"10.1016/j.neunet.2025.107409","DOIUrl":"10.1016/j.neunet.2025.107409","url":null,"abstract":"<div><div>Multi-view clustering is a powerful approach for discovering underlying structures hidden behind diverse views of datasets. Most existing multi-view spectral clustering methods use fixed similarity matrices or alternately updated ones. However, the former often fall short in adaptively capturing relationships among different views, while the latter are often time-consuming and even impractical for large-scale datasets. To the best of our knowledge, no existing multi-view spectral clustering method can both construct multi-view similarity matrices inexpensively and, at the same time, preserve the valuable clustering insights from previous cycles. To fill this gap, we present a Sum-Ratio Multi-view Ncut model that shares a common representation embedding for multi-view data. Based on this model, we propose a restarted multi-view multiple kernel clustering framework with self-guiding. To reduce the overhead, we use similarity matrices with strict block diagonal representation, and present an efficient multiple kernel selection technique. Comprehensive experiments on benchmark multi-view datasets demonstrate that, even using randomly generated initial guesses, the restarted algorithms can improve the clustering performances by 5–10 times for some popular multi-view clustering methods. 
Specifically, our framework offers a potential boosting effect for most of the state-of-the-art multi-view clustering algorithms at very little cost, especially for those with poor performance.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"187 ","pages":"Article 107409"},"PeriodicalIF":6.0,"publicationDate":"2025-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143679979","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
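A shared spectral embedding over multiple kernels, plus a label-guided restart, can be sketched as follows; the plain kernel averaging and the label-similarity blend are illustrative assumptions of ours, not the Sum-Ratio Ncut solver or the paper's kernel selection technique.

```python
import numpy as np

def common_embedding(kernels, k):
    """Shared spectral embedding for multi-view data (sketch): average
    the per-view kernel (similarity) matrices and take the top-k
    eigenvectors as a common representation for all views."""
    K = np.mean(kernels, axis=0)
    vals, vecs = np.linalg.eigh(K)      # eigenvalues in ascending order
    return vecs[:, -k:]                 # top-k eigenvectors

def restart(kernels, k, labels_prev=None):
    """Self-guiding restart (hypothetical helper): blend in a similarity
    matrix derived from the previous cycle's labels, so clustering
    insights from earlier cycles are preserved on re-embedding."""
    if labels_prev is not None:
        guide = (labels_prev[:, None] == labels_prev[None, :]).astype(float)
        kernels = list(kernels) + [guide]
    return common_embedding(kernels, k)
```

Each restart re-solves the cheap eigenproblem with the previous partition folded in as an extra kernel, which is one simple way a random initial guess can be iteratively improved.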