Physics Informed Neural Networks for Electrical Impedance Tomography
Danny Smyl, Tyler N. Tallman, Laura Homa, Chenoa Flournoy, Sarah J. Hamilton, John Wertz
Neural Networks, Volume 188, Article 107410 (published 2025-03-23). DOI: 10.1016/j.neunet.2025.107410

Abstract: Electrical Impedance Tomography (EIT) is an imaging modality used to reconstruct the internal conductivity distribution of a domain from boundary voltage measurements. In this paper, we present a novel EIT approach for integrated sensing of composite structures utilizing Physics Informed Neural Networks (PINNs). Unlike purely data-driven models, PINNs incorporate the underlying physical principles governing EIT directly into the learning process, enabling precise and rapid reconstructions. We demonstrate the effectiveness of PINNs with a variety of physical constraints for integrated sensing. The proposed approach has the potential to enhance material characterization and condition monitoring, offering a robust alternative to classical EIT approaches.

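The PINN idea summarized above, fitting measured boundary voltages while penalizing violation of the governing physics, can be sketched as a composite loss. This is a minimal illustration, not the paper's implementation: `pinn_loss` and the weight `lam` are hypothetical names, and the PDE residual is left abstract rather than derived from the conductivity equation.

```python
import numpy as np

# Hedged sketch of a PINN-style objective: a data-misfit term on boundary
# voltages plus a penalty on samples of the governing PDE residual.
def pinn_loss(v_pred, v_meas, pde_residual, lam=1.0):
    data_term = np.mean((v_pred - v_meas) ** 2)   # fit the measurements
    physics_term = np.mean(pde_residual ** 2)     # respect the physics
    return data_term + lam * physics_term

v_meas = np.array([0.10, 0.20, 0.30])             # measured voltages
v_pred = np.array([0.12, 0.19, 0.33])             # network predictions
residual = np.array([0.01, -0.02])                # sampled PDE residual
loss = pinn_loss(v_pred, v_meas, residual, lam=0.5)
```

Increasing `lam` shifts the balance toward physics consistency at the expense of raw data fit.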
BSA-Seg: A Bi-level sparse attention network combining narrow band loss for multi-target medical image segmentation
Zhiyong Zhou, Zhechen Zhou, Xusheng Qian, Jisu Hu, Bo Peng, Chen Geng, Bin Dai, He Huang, Wenbin Zhang, Yakang Dai
Neural Networks, Volume 188, Article 107431 (published 2025-03-23). DOI: 10.1016/j.neunet.2025.107431

Abstract: Segmentation of multiple targets of varying sizes within medical images is of significant importance for disease diagnosis and pathological research. Transformer-based methods are emerging in medical image segmentation, leveraging the powerful yet computationally intensive self-attention mechanism. A variety of attention mechanisms have been proposed that reduce computation at the cost of accuracy, using handcrafted patterns within local or artificially defined receptive fields. Furthermore, the common region-based loss functions are insufficient for guiding the transformer to focus on tissue regions, making them unsuitable for segmenting tissues with intricate boundaries. This paper presents a bi-level sparse attention network and a narrow band (NB) loss function for accurate and efficient multi-target segmentation of medical images. In particular, we introduce a bi-level sparse attention module (BSAM) and formulate a segmentation network based on this module. The BSAM consists of coarse-grained patch-level attention and fine-grained pixel-level attention; it captures fine-grained contextual features in adaptive receptive fields learned by the patch-level attention. This enhances segmentation accuracy while reducing computational complexity. The proposed NB loss function constructs a target region in close proximity to the tissue boundary, so the network is guided to perform boundary-aware segmentation, alleviating both over-segmentation and under-segmentation. A series of comprehensive experiments on whole brains, brain tumors, and abdominal organs demonstrates that our method outperforms other state-of-the-art segmentation methods. Furthermore, the BSAM and NB loss can be applied flexibly to a variety of network frameworks.

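A narrow-band loss of the kind the abstract describes restricts supervision to a region near the target boundary. The sketch below builds such a band from a binary target mask using one-pixel morphological dilation and erosion; the construction details (band width, neighborhood) are assumptions for illustration, not the paper's exact definition.

```python
import numpy as np

# Shift a 2-D boolean mask by (dy, dx), filling vacated cells with False.
def shift(m, dy, dx):
    out = np.zeros_like(m)
    h, w = m.shape
    out[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)] = \
        m[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
    return out

# Narrow band = dilation minus erosion with a 4-neighborhood: the set of
# pixels within one step of the boundary, where the loss would be applied.
def narrow_band(mask):
    neigh = [shift(mask, dy, dx) for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    dilated = mask | neigh[0] | neigh[1] | neigh[2] | neigh[3]
    eroded = mask & neigh[0] & neigh[1] & neigh[2] & neigh[3]
    return dilated & ~eroded

mask = np.zeros((7, 7), dtype=bool)
mask[2:5, 2:5] = True          # a 3x3 square target
band = narrow_band(mask)       # one-pixel band straddling the boundary
```

Deep interior and far background pixels fall outside the band, so they no longer dominate the loss for targets with intricate boundaries.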
A lightweight All-MLP time–frequency anomaly detection for IIoT time series
Lei Chen, Xinzhe Cao, Tingqin He, Yepeng Xu, Xuxin Liu, Bowen Hu
Neural Networks, Volume 187, Article 107400 (published 2025-03-22). DOI: 10.1016/j.neunet.2025.107400

Abstract: Anomaly detection in the Industrial Internet of Things (IIoT) aims at identifying abnormal sensor signals to ensure industrial production safety. However, most existing models pursue high accuracy by building bulky neural networks with deep structures and huge parameter counts. Such models usually exhibit poor timeliness and high resource consumption, making them unsuitable for resource-limited edge industrial scenarios. To solve this problem, a lightweight All-MLP time–frequency anomaly detection model, LTFAD, is proposed for IIoT time series. Firstly, unlike traditional deep and bulky solutions, a shallow and lightweight All-MLP architecture is designed to achieve high timeliness and low resource consumption. Secondly, based on the lightweight architecture, a dual-branch network is constructed to improve model accuracy by simultaneously learning "global to local" and "local to global" reconstruction. Finally, time–frequency joint learning is employed in each reconstruction branch to further enhance accuracy. To the best of our knowledge, this is the first work to develop a time–frequency anomaly detection model based only on a shallow All-MLP architecture. Extensive experiments demonstrate that LTFAD can quickly and accurately identify anomalies on resource-limited edge devices such as the Raspberry Pi 4B and Jetson Xavier NX. The source code for LTFAD is available at https://github.com/infogroup502/LTFAD.

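The time–frequency joint learning described above can be illustrated with a toy anomaly score mixing reconstruction error in the time domain and on the FFT magnitude spectrum. This is a hedged sketch, not LTFAD itself: `reconstruct` stands in for the paper's All-MLP reconstruction branches, and the mixing weight `alpha` is a hypothetical parameter.

```python
import numpy as np

# Toy time-frequency anomaly score: weighted sum of the reconstruction error
# in the time domain and the error on the FFT magnitude spectrum.
def anomaly_score(window, reconstruct, alpha=0.5):
    recon = reconstruct(window)
    time_err = np.mean((window - recon) ** 2)
    freq_err = np.mean((np.abs(np.fft.rfft(window)) -
                        np.abs(np.fft.rfft(recon))) ** 2)
    return alpha * time_err + (1 - alpha) * freq_err

t = np.linspace(0, 1, 64, endpoint=False)
signal = np.sin(2 * np.pi * 4 * t)
identity = lambda w: w                            # perfect reconstruction
smooth = lambda w: np.convolve(w, np.ones(5) / 5, mode="same")

normal_score = anomaly_score(signal, identity)    # exactly zero
spiky = signal.copy()
spiky[10] += 3.0                                  # injected point anomaly
spike_score = anomaly_score(spiky, smooth)        # positive score
```

A smoothing reconstructor cannot reproduce the spike, so the anomalous window scores strictly higher than the clean one.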
SympGNNs: Symplectic Graph Neural Networks for identifying high-dimensional Hamiltonian systems and node classification
Alan John Varghese, Zhen Zhang, George Em Karniadakis
Neural Networks, Volume 187, Article 107397 (published 2025-03-22). DOI: 10.1016/j.neunet.2025.107397

Abstract: Existing neural network models for learning Hamiltonian systems, such as SympNets, are accurate in low dimensions but struggle to learn the correct dynamics of high-dimensional many-body systems. Herein, we introduce Symplectic Graph Neural Networks (SympGNNs) that can effectively handle system identification in high-dimensional Hamiltonian systems as well as node classification. SympGNNs combine symplectic maps with permutation equivariance, a property of graph neural networks. Specifically, we propose two variants of SympGNNs: (i) G-SympGNN and (ii) LA-SympGNN, arising from different parameterizations of the kinetic and potential energy. We demonstrate the capabilities of SympGNN on two physical examples: a 40-particle coupled harmonic oscillator and a 2000-particle molecular dynamics simulation in a two-dimensional Lennard-Jones potential. Furthermore, we demonstrate the performance of SympGNN on the node classification task, achieving accuracy comparable to the state of the art. We also show empirically that SympGNN can overcome the oversmoothing and heterophily problems, two key challenges in the field of graph neural networks.

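To see why symplectic maps matter for the Hamiltonian examples above: composing shear updates of momenta and positions, as in symplectic Euler, yields an exactly symplectic map, so the energy of a long rollout stays bounded instead of drifting. The sketch below is a generic illustration for a separable Hamiltonian, not the SympGNN architecture itself.

```python
# Hedged sketch: one symplectic Euler step for a separable Hamiltonian
# H(q, p) = T(p) + V(q). Each half-update is a shear, so the composed map
# preserves the symplectic form.
def symplectic_step(q, p, grad_V, grad_T, h):
    p = p - h * grad_V(q)    # kick: momentum update from the potential
    q = q + h * grad_T(p)    # drift: position update from the kinetic term
    return q, p

# Harmonic oscillator: V(q) = q^2/2, T(p) = p^2/2, exact energy 0.5.
q, p = 1.0, 0.0
for _ in range(1000):
    q, p = symplectic_step(q, p, lambda x: x, lambda x: x, h=0.05)
energy = 0.5 * (q * q + p * p)   # stays near 0.5 after 1000 steps
```

A non-symplectic integrator (e.g. explicit Euler) would show systematic energy growth over the same horizon.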
Revisiting low-homophily for graph-based fraud detection
Tairan Huang, Qiutong Li, Cong Xu, Jianliang Gao, Zhao Li, Shichao Zhang
Neural Networks, Volume 188, Article 107407 (published 2025-03-22). DOI: 10.1016/j.neunet.2025.107407

Abstract: The openness of the Internet enables a large number of fraudulent behaviors, which have become a serious threat. Graph-based fraud detectors have attracted extensive interest since the abundant structural information in graph data has proved effective. Conventional Graph Neural Network (GNN) approaches reveal fraudsters based on the homophily assumption, but fraudsters typically form heterophilous connections and label-imbalanced neighborhoods. Such behaviors deteriorate the performance of GNNs in fraud detection tasks due to the low homophily in the graphs. Although some recent works have noticed these challenges, they either treat heterophilous connections as homophilous ones or try to reduce heterophily, largely ignoring the benefits heterophily can bring. In this work, an integrated two-strategy framework, HeteGAD, is proposed to balance homophily and heterophily information from neighbors. The key lies in explicitly shrinking intra-class distance and increasing inter-class segregation. Specifically, the Heterophily-aware Aggregation Strategy teases out the feature disparity among heterophilous neighbors and augments the disparity between representations with different labels, while the Homophily-aware Aggregation Strategy captures homophilous information in a global context and augments the similarity of representations with the same label. Finally, two corresponding inter-relational attention mechanisms are incorporated to refine the modeling of interactions across multiple relations. Experiments on two real-world datasets demonstrate that HeteGAD outperforms 11 state-of-the-art baselines for fraud detection.

Restarted multiple kernel algorithms with self-guiding for large-scale multi-view clustering
Yongyan Guo, Gang Wu
Neural Networks, Volume 187, Article 107409 (published 2025-03-22). DOI: 10.1016/j.neunet.2025.107409

Abstract: Multi-view clustering is a powerful approach for discovering the underlying structures hidden behind diverse views of a dataset. Most existing multi-view spectral clustering methods use either fixed similarity matrices or alternately updated ones. However, the former often fall short in adaptively capturing relationships among different views, while the latter are often time-consuming and even impractical for large-scale datasets. To the best of our knowledge, no existing multi-view spectral clustering method can both construct multi-view similarity matrices inexpensively and preserve the valuable clustering insights from previous cycles. To fill this gap, we present a Sum-Ratio Multi-view Ncut model in which the views share a common representation embedding. Based on this model, we propose a restarted multi-view multiple kernel clustering framework with self-guiding. To reduce the overhead, we use similarity matrices with strict block-diagonal representation and present an efficient multiple kernel selection technique. Comprehensive experiments on benchmark multi-view datasets demonstrate that, even with randomly generated initial guesses, the restarted algorithms can improve the clustering performance of some popular multi-view clustering methods by 5–10 times. In particular, our framework offers a potential boosting effect for most state-of-the-art multi-view clustering algorithms at very little cost, especially for those with poor performance.

Motif and supernode-enhanced gated graph neural networks for session-based recommendation
Ronghua Lin, Chang Liu, Hao Zhong, Chengzhe Yuan, Guohua Chen, Yuncheng Jiang, Yong Tang
Neural Networks, Volume 187, Article 107406 (published 2025-03-21). DOI: 10.1016/j.neunet.2025.107406

Abstract: Session-based recommendation systems aim to predict users' next interactions from short-lived, anonymous sessions, a challenging yet vital task given the sparsity and dynamic nature of user behavior. Existing Graph Neural Network (GNN)-based methods focus primarily on session graphs while overlooking the influence of micro-structures and user behavior patterns. To address these limitations, we propose a Motif and Supernode-Enhanced Session-based Recommender System (MSERS), which constructs a global session graph, identifies and encodes motifs as supernodes, and reintegrates them into the global graph to enrich its topology and better represent item dependencies. By employing supernode-enhanced Gated Graph Neural Networks (GGNN), MSERS captures both long-term and latent item dependencies, significantly improving session representations. Extensive experiments on two real-world datasets demonstrate the superiority of MSERS over baseline methods and provide robust insights into the role of micro-structures in session-based recommendation.

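The motif-as-supernode construction can be sketched on a toy edge list: each detected motif gets a fresh supernode wired to its member items, enriching the graph's topology. The helper below is illustrative only; the paper's motif detection and motif encoding are not reproduced, and the function name is hypothetical.

```python
# Hedged sketch: given detected motifs (here supplied directly, e.g. a
# triangle of co-occurring items), add one supernode per motif and connect
# it to every motif member, returning the enriched edge list.
def add_motif_supernodes(edges, motifs, n_nodes):
    new_edges = list(edges)
    for i, motif in enumerate(motifs):
        supernode = n_nodes + i          # fresh node id for this motif
        for v in motif:
            new_edges.append((supernode, v))
    return new_edges, n_nodes + len(motifs)

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]   # small session graph
motifs = [(0, 1, 2)]                       # one triangle motif
edges2, n2 = add_motif_supernodes(edges, motifs, n_nodes=4)
```

Message passing on the enriched graph lets motif members exchange information in one hop through the shared supernode.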
AAPMatcher: Adaptive attention pruning matcher for accurate local feature matching
Xuan Fan, Sijia Liu, Shuaiyan Liu, Lijun Zhao, Ruifeng Li
Neural Networks, Volume 188, Article 107403 (published 2025-03-21). DOI: 10.1016/j.neunet.2025.107403

Abstract: Local feature matching, which seeks to establish correspondences between two images, is a fundamental component of numerous computer vision applications such as camera tracking and 3D mapping. Recently, the Transformer has demonstrated a remarkable capability for modeling accurate correspondences between two input sequences owing to its long-range context integration. However, indiscriminate modeling in traditional Transformers inevitably introduces noise and irrelevant information, which can degrade the quality of feature representations. To this end, we introduce an adaptive attention pruning matcher for accurate local feature matching (AAPMatcher), designed for robust and accurate local feature matching. We overhaul the traditional uniform feature extraction for sequences by introducing the adaptive pruned transformer (APFormer), which adaptively retains the most profitable attention values for feature consolidation, enabling the network to obtain more useful feature information while filtering out useless information. Moreover, since a fixed combination of self- and cross-APFormers greatly limits the flexibility of the network, we propose a two-stage adaptive hybrid attention strategy (AHAS), which finds the optimal combination of APFormers in a coarse-to-fine manner. Benefiting from clean feature representations and the optimal combination of APFormers, AAPMatcher surpasses state-of-the-art approaches on multiple benchmarks, including pose estimation, homography estimation, and visual localization.

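Attention pruning of the kind APFormer performs can be illustrated by keeping only the top-k logits per query and masking the rest before the softmax, so each feature aggregates from its most profitable matches only. The paper's selection rule is adaptive; the fixed-k version below is a simplified, assumed variant.

```python
import numpy as np

# Hedged sketch: top-k attention pruning. Logits below the k-th largest in
# each row are masked to -inf, so they receive exactly zero attention mass.
def pruned_attention(scores, k):
    """scores: (n_query, n_key) raw attention logits."""
    kth = np.sort(scores, axis=1)[:, -k][:, None]      # k-th largest per row
    masked = np.where(scores >= kth, scores, -np.inf)
    e = np.exp(masked - masked.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

scores = np.array([[2.0, 1.0, 0.1, -1.0],
                   [0.5, 3.0, 0.4, 0.3]])
attn = pruned_attention(scores, k=2)   # two nonzero weights per query row
```

Unlike handcrafted local patterns, the retained set here depends on the scores themselves, which is the spirit of "retaining the most profitable attention values".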
Adaptive node-level weighted learning for directed graph neural network
Jincheng Huang, Xiaofeng Zhu
Neural Networks, Volume 187, Article 107393 (published 2025-03-21). DOI: 10.1016/j.neunet.2025.107393

Abstract: Directed graph neural networks (DGNNs) have garnered increasing interest, yet few studies have focused on node-level representation in directed graphs. In this paper, we argue that different nodes rely on neighbor information from different directions. Furthermore, the commonly used mean aggregation over in-neighbor and out-neighbor sets may lose expressive power for certain nodes. To address this, we first estimate the homophily of each node toward its neighbors in different directions by extending the Dirichlet energy. This allows us to assign larger weights to neighbors in directions exhibiting higher homophily ratios for any given node. Second, we introduce out-degree and in-degree information into the learning of these weights to avoid the weak expressive power of mean aggregation. Moreover, we theoretically demonstrate that our method enhances the expressive ability of directed graph neural networks. Extensive experiments on seven real-world datasets demonstrate that our method outperforms state-of-the-art approaches on both node classification and link prediction tasks.

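The idea of weighting in- versus out-direction aggregation per node can be sketched with a label-based homophily ratio for each direction, normalized by a softmax. The paper estimates homophily by extending the Dirichlet energy; the label-counting estimator below is a hypothetical simplification, and all names are illustrative.

```python
import numpy as np

# Hedged sketch: per-node direction weights (w_in, w_out) from the fraction
# of same-label neighbours in each direction, softmax-normalised so the more
# homophilic direction contributes more to aggregation.
def direction_weights(labels, in_neigh, out_neigh, node):
    def homophily(neigh):
        if not neigh:
            return 0.0
        same = sum(labels[v] == labels[node] for v in neigh)
        return same / len(neigh)
    h = np.array([homophily(in_neigh[node]), homophily(out_neigh[node])])
    e = np.exp(h - h.max())
    return e / e.sum()

labels = {0: "a", 1: "a", 2: "b", 3: "a"}
in_neigh = {0: [1, 3]}     # both in-neighbours share node 0's label
out_neigh = {0: [2]}       # the out-neighbour does not
w_in, w_out = direction_weights(labels, in_neigh, out_neigh, 0)
```

Node 0's in-direction is fully homophilic here, so it receives the larger weight.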
More signals matter to detection: Integrating language knowledge and frequency representations for boosting fine-grained aircraft recognition
Xueru Xu, Zhong Chen, Yuxin Hu, Guoyou Wang
Neural Networks, Volume 187, Article 107402 (published 2025-03-21). DOI: 10.1016/j.neunet.2025.107402

Abstract: As object detection progresses rapidly, fine-grained detection is flourishing as a promising extension. Fine-grained recognition naturally demands high-quality detail signals; however, existing fine-grained detectors, built upon the mainstream detection paradigm, struggle to simultaneously address insufficient original signals and the loss of critical signals, resulting in inferior performance. We argue that language signals carrying advanced semantic knowledge can provide valuable information for fine-grained objects, and that the frequency domain offers greater flexibility for suppressing and enhancing signals. We therefore propose a fine-grained aircraft detector that integrates language knowledge and frequency representations into the one-stage detection paradigm. Concretely, considering both original signals and deep feature signals, we develop three components, an adaptive frequency augmentation branch (AFAB), a content-aware global features intensifier (CGFI), and a fine-grained text–image interactive feeder (FTIF), to facilitate perceiving and retaining critical signals throughout pivotal detection stages. The AFAB adaptively processes image patches according to their frequency characteristics in the Fourier domain, thoroughly mining critical visual content in the data space; the CGFI employs content-aware frequency filtering to enhance global features, generating an information-rich feature space; the FTIF introduces text knowledge to describe visual differences among fine-grained categories, conveying robust semantic priors from language signals to visual spaces via multimodal interaction. Extensive experiments on optical and SAR images demonstrate the superior performance of the proposed fine-grained detector, especially the FTIF, which can be plugged into most existing one-stage detectors to significantly boost their fine-grained recognition performance.

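Frequency-domain processing of an image patch, in the spirit of the AFAB, can be sketched with a 2-D FFT, a radial frequency mask, and a gain applied to high-frequency coefficients. The cutoff and gain below are hypothetical parameters for illustration, not the paper's adaptive, per-patch rule.

```python
import numpy as np

# Hedged sketch: boost (or with gain < 1, suppress) the high-frequency part
# of a patch. The spectrum is centred with fftshift, a normalised radial
# coordinate selects frequencies beyond `cutoff`, and the result is mapped
# back to the data space with the inverse FFT.
def frequency_augment(patch, gain=2.0, cutoff=0.25):
    f = np.fft.fftshift(np.fft.fft2(patch))
    h, w = patch.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)
    f = np.where(r > cutoff, gain * f, f)    # scale high frequencies only
    return np.real(np.fft.ifft2(np.fft.ifftshift(f)))

rng = np.random.default_rng(0)
patch = rng.standard_normal((16, 16))
augmented = frequency_augment(patch)         # detail content amplified
```

With `gain=1.0` the round trip through the Fourier domain is an identity, which is a convenient sanity check for the mask logic.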