Neural Networks, Pub Date: 2025-06-14, DOI: 10.1016/j.neunet.2025.107707
Jianming Zhang, Jing Yang, Yu Qin, Zhu Xiao, Jin Wang
MGNet: RGBT tracking via cross-modality cross-region mutual guidance
Neural Networks, Volume 190, Article 107707.

Abstract: Compared with single-modal object tracking, the main challenge in RGBT tracking lies in effectively fusing features from the two modalities. However, many existing methods consider only the dependence between identical regions of the two modalities and neglect the dependence between distinct regions, and therefore fail to capture cross-modality cross-region relationships; in other words, they do not exploit the mutual guidance between different regions of different modalities. To address this limitation, we propose a novel RGBT tracking network, MGNet, which employs dual-stage attention and multi-scale feature fusion. The network comprises a Cross-modality Cross-region Dual-stage Attention (CCDA) module and a Multi-scale Intra-region Feature Fusion (MIFF) module. The CCDA module processes features in two stages so as to preserve the unique features of identical regions across modalities and then achieve mutual guidance between them. Specifically, in the first stage, features from different regions of different modalities are combined into a mixed representation while the distinct features of each region are maintained. In the second stage, attention mechanisms are applied to the mixed representation, facilitating cross-modality cross-region mutual guidance. Additionally, the MIFF module perceives feature changes at multiple scales, ensuring effective fusion within each region. Our method achieves superior performance on three RGBT benchmark datasets (GTOT, RGBT234, and LasHeR) while running at 75 FPS, demonstrating both high accuracy and real-time performance.
Neural Networks, Pub Date: 2025-06-14, DOI: 10.1016/j.neunet.2025.107665
B.R. Zhao, D.K. Sun, H. Wu, C.J. Qin, Q.G. Fei
Physics-informed neural networks for solving inverse problems in phase field models
Neural Networks, Volume 190, Article 107665.

Abstract: The integration of materials science with Physics-Informed Neural Networks (PINNs) is critical for understanding and predicting material properties, especially through the study of inverse problems. However, much of the current research in materials science focuses primarily on applying PINNs to forward problems or on improving prediction accuracy. This paper shifts the focus to inverse problems in numerical simulation modeling, encompassing diffusion, flow, and phase-transition problems solved through PINNs. By constructing a neural network that integrates data-driven and physics-driven modules, this study uncovers the underlying physical laws embedded within the data. More importantly, this work further validates the applicability of PINNs to the inversion of key anisotropic material parameters, with benchmark anisotropic function inversion results demonstrating a high degree of consistency between predicted and theoretical values. Additionally, this study extends the application of PINNs to multi-physics coupled systems by addressing inverse problems associated with the governing equations of the phase field, temperature field, and flow field, thereby enabling parameter inversion under multi-physics conditions. This approach to inverse problems and the inversion of critical material parameters provides new perspectives, demonstrates the potential of integrating numerical simulation data with deep learning, and deepens the research on PINNs in materials science.
Neural Networks, Pub Date: 2025-06-14, DOI: 10.1016/j.neunet.2025.107669
Fei Wang, Gui-Fu Lu
Scalable one-pass multi-view clustering with tensorized multiscale bipartite graphs fusion
Neural Networks, Volume 190, Article 107669.

Abstract: In existing multi-view clustering tasks, anchor-based methods are widely used for large-scale data processing because they reduce computational complexity while achieving satisfactory results. However, most existing anchor-based algorithms generate a single-scale bipartite graph for each view, which limits how accurately the original data can be represented. Moreover, these algorithms typically require further clustering processing, and the contribution of each view to the final clustering result is static, lacking dynamic adjustment based on data characteristics. To address these issues, we introduce an innovative multi-view clustering method called Scalable One-pass Multi-View Clustering with Tensorized Multiscale Bipartite Graphs Fusion (SOMVC/TMBGF). Specifically, we first generate bipartite graphs at multiple scales for each view and adaptively fuse them to obtain a partition matrix, thereby fully leveraging the structural information of the original data for a more accurate representation. Subsequently, we combine the partition matrices from each view into a tensor constrained with the Tensor Schatten p-norm, capturing the higher-order correlations and complementary information between views. Finally, to enhance clustering performance, we integrate partition matrix learning and clustering into a unified framework, dynamically adjusting the contribution of each view's partition matrix through weighted spectral rotation to obtain the final clustering result. Experimental results show that SOMVC/TMBGF significantly outperforms existing methods in both clustering performance and computational efficiency, demonstrating its advantage in handling large-scale multi-view data.
Neural Networks, Pub Date: 2025-06-14, DOI: 10.1016/j.neunet.2025.107694
Jiazhi Zhang, Yuhu Cheng, C.L. Philip Chen, Hengrui Zhang, Xuesong Wang
Diffusion policy distillation for offline reinforcement learning
Neural Networks, Volume 190, Article 107694.

Abstract: Offline reinforcement learning aims to learn a well-performing target policy from a static empirical dataset. Owing to its powerful ability to express complex distributions, the diffusion model has been widely adopted as a policy class in offline reinforcement learning. However, sampling a single action from a diffusion policy requires a multi-step denoising process, which slows decision-making and poses challenges for real-time control tasks. Inspired by the teacher-student mechanism in human learning, this paper proposes a diffusion policy distillation (DPD) framework, which employs a deterministic policy to distill the target policy induced by the diffusion model. Although the deterministic policy cannot properly express the complex behavior policy induced by the empirical dataset, it can effectively learn the relevant target policy. Moreover, since the distilled deterministic policy acts in a single step, it avoids iterative denoising, thereby inheriting the performance of the target policy while markedly improving decision-making speed. DPD is plug-and-play and can thus be combined with offline reinforcement learning methods based on diffusion policies. Experimental results on the D4RL Gym-MuJoCo datasets indicate that the distilled policy achieves a higher normalized score than the original policy with a lower standard deviation, and improves decision-making speed by more than 10 times.
Neural Networks, Pub Date: 2025-06-14, DOI: 10.1016/j.neunet.2025.107691
Dongdong Gao, Fanchao Kong, Tingwen Huang
Convergent adaptive control based prescribed-time synchronization of switched fuzzy competitive network systems with time-varying delays
Neural Networks, Volume 190, Article 107691.

Abstract: This paper addresses the prescribed-time control problem for discontinuous fuzzy neutral-type competitive neural networks (FNTCNNs) featuring switching and time-varying delays. Notably, FNTCNNs constitute a generalized class of singularly perturbed Filippov systems. Establishing a prescribed-time stability lemma for singularly perturbed systems with time-varying delays remains a critical yet unresolved challenge. To address this, we first develop a novel prescribed-time stability lemma for singularly perturbed Filippov systems using adjustment functions, the comparison principle, and inequality techniques; this is achieved through the one-norm and the introduction of a new stability definition for such systems. Considering the switching law inherent in FNTCNNs, we achieve prescribed-time stabilization by designing adaptive prescribed-time control strategies based on differential inclusion theory and Filippov's solution framework. The proposed adaptive control strategies exhibit convergence properties, ensuring that both the control strategies and the system state variables converge to zero within the same prescribed time interval. These newly developed strategies offer significant advantages over existing approaches. Finally, we validate the principal results through numerical simulations of second-order multi-agent systems subject to discontinuous disturbances.
Neural Networks, Pub Date: 2025-06-13, DOI: 10.1016/j.neunet.2025.107698
Sichun Du, Yu Dong, Pingdan Xiao, Zhengmiao Wei, Qinghui Hong
A general analog solver of linear and quadratic programming in one step
Neural Networks, Volume 190, Article 107698.

Abstract: Real-time solving of linear programming (LP) and quadratic programming (QP) problems is in critical demand across engineering and scientific domains. Conventional numerical approaches suffer from exponential growth in computational complexity as problem dimensionality and structural complexity increase. To address this challenge, we present a general analog solver grounded in neurodynamic principles, achieving closed-form solutions for both LP and QP through physical-level computation in one step. The proposed solver handles LP/QP problems under diverse constraints through configurable interconnections of modular analog circuits. The analog computing architecture, based on continuous-time dynamics, leverages its inherent parallelism and sub-microsecond convergence to enhance the efficiency of optimization problem solving. Across five PSPICE simulation test experiments, the proposed QP solver achieved an average solution accuracy exceeding 99.9%, with robustness metrics maintaining over 93% precision in the presence of circuit non-idealities, including noise, parasitic resistance, and device deviation. Comparative analysis shows that the proposed solver achieves accelerations of 173.572×, 115.871×, 8.387×, 3.241×, and 21.623×, respectively, over traditional QP solvers.
Neural Networks, Pub Date: 2025-06-13, DOI: 10.1016/j.neunet.2025.107764
Rui He, Xuanhe Li, Jian Wu
LEESDFormer: A lightweight unsupervised CNN-Transformer-based curve estimation network for low-light image enhancement, exposure suppression, and denoising
Neural Networks, Volume 190, Article 107764.

Abstract: Current low-light image enhancement methods mainly focus on improving the low-light regions within images. However, they often fail to adequately consider the impact of mixed exposures and noise, resulting in suboptimal enhancement and even the loss of some detail. Moreover, these methods predominantly rely on convolutional neural networks (CNNs), which have inherent limitations in capturing long-range dependencies and global information. To address these issues, this paper introduces LEESDFormer, the first unsupervised low-light image enhancement method based on a CNN-Transformer architecture. First, we propose a Low-light Enhancement and Exposure Suppression S-shaped curve (LEES-S curve), which recasts the complex challenge of low-light enhancement and exposure suppression as a simpler curve-estimation task, substantially reducing task complexity. LEESDFormer iterates the LEES-S curve through the Low-light Image Enhancement and Exposure Suppression Module (LEESM) to achieve the desired enhancement. Subsequently, the Image Denoising Module (IDM) denoises the enhanced images. Extensive experiments demonstrate that our method exhibits excellent robustness, generalization, and visual quality compared with state-of-the-art unsupervised low-light image enhancement methods, even outperforming some supervised approaches. Notably, our method achieves a Peak Signal-to-Noise Ratio (PSNR) of 21 dB on the LOL-v2-real dataset, demonstrating its enhancement performance and denoising capability. Furthermore, LEESDFormer is simple and efficient, with only 65 K parameters, and processes each image in merely 8 ms, making it deployable on resource-limited devices and of significant practical value.
{"title":"Transfer learning-motivated intelligent fault diagnosis framework for cross-domain knowledge distillation","authors":"Penghao Wu , Engang Tian , Hongfeng Tao , Yiyang Chen","doi":"10.1016/j.neunet.2025.107699","DOIUrl":"10.1016/j.neunet.2025.107699","url":null,"abstract":"<div><div>Transfer learning, as a transformative learning paradigm, has revolutionized the application of artificial intelligence (AI) frameworks, garnering widespread adoption across diverse fields over the past decade. Intelligent fault diagnosis (IFD) methods based on transfer learning have substantially improved the stability and reliability of industrial automation processes. In this study, a transfer learning-based methodology tailored is proposed for nonlinear system fault diagnosis. The framework integrates cross-domain knowledge distillation into an IFD scheme, further embedding a twin-spiking neural networks (SNNs) to enhance temporal sequence analysis capabilities. By transforming the prior knowledge learned within the feature extraction backbone network and transferring it to the twin SNNs, this integration facilitates the reconstruction of residual generators for IFD. Experimental evaluations on nonlinear systems in high-energy vehicle lithium batteries demonstrate the effectiveness and practicality of the proposed approach.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"190 ","pages":"Article 107699"},"PeriodicalIF":6.0,"publicationDate":"2025-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144306932","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neural Networks, Pub Date: 2025-06-13, DOI: 10.1016/j.neunet.2025.107696
Sicheng Pan, Yingming Li
Burst denoising transformer with multi-task optical flow estimation
Neural Networks, Volume 190, Article 107696.

Abstract: Burst denoising focuses on producing a clean image from a series of noisy frames captured in rapid succession. A major challenge during burst capture is the misalignment between frames caused by subtle movements of the camera or the scene. To deal with this difficulty, this paper introduces a novel Burst Denoising Transformer (BDFormer) network. First, we introduce a Transformer-based Multi-task Optical Flow Estimation module (TMOFE) to align the frames, where an auxiliary denoising task reduces the impact of noise during optical flow estimation. Next, the aligned frames are passed through a Transformer-based Feature Enrichment module (TFE). The core unit of TFE is a specially designed Spatial and Channel-wise Transformer Block (SCTB), which combines an FFT-based Spatial Transformer Block (FSTB) and a Channel-wise Transformer Block (CTB) in order to fully leverage both spatial and channel-wise global information within and across frames. Extensive experiments show that BDFormer outperforms other transformer-based methods, achieving superior performance while maintaining low computational complexity.
Neural Networks, Pub Date: 2025-06-13, DOI: 10.1016/j.neunet.2025.107765
Bing Hu, Lixin Han, Yi Xu, Chang Tang, Jun Zhu, Gui-Fu Lu
Structure regularized consensus dynamic anchor graph learning for incomplete multi-view clustering
Neural Networks, Volume 190, Article 107765.

Abstract: Dynamic anchor graph-based incomplete multi-view clustering (IMVC) algorithms have attracted extensive research attention in recent years owing to their relatively low time complexity. However, these algorithms suffer from two limitations. First, most existing methods disregard the structural information of the original feature spaces. Second, nearly all current approaches emphasize the importance of each view while overlooking the weights of individual features. To address these issues, we propose an algorithm named SRCDAGL-IMC. Specifically, we use the structural information of all views as regularization terms to constrain the relationships between different pairs in the consensus anchor graph. Moreover, we assign a coefficient to each sample to measure its individual importance within its own view, while simultaneously recovering the missing features; the learning of the consensus anchor graph and the recovery of the missing features thus promote each other. We also propose an effective alternating optimization method. Experiments on six public datasets show that our algorithm outperforms state-of-the-art matrix factorization-based incomplete multi-view algorithms in terms of accuracy, normalized mutual information, and purity. Our code is publicly available at https://github.com/BingHuAhpu/SRCDAGL-IMC.