Neurocomputing, Volume 651, Article 131002. Pub Date: 2025-07-19. DOI: 10.1016/j.neucom.2025.131002
Title: Two-stage real-world image dehazing method using physics-based dehazing network and contrastive learning generative adversarial network
Authors: Huaqiang Xie, Kangwei Wang, Li Zhu, Jie Xie, Cheng Wu, Jie Sheng, Jin Zhang
Abstract: Real-world image dehazing remains a challenging task due to the ill-posed nature of haze formation and the significant domain gap between synthetic and real foggy scenes. In this paper, a novel two-stage framework is proposed, integrating a Physics-Based Dehazing Network (PBDNet) with a Contrastive Learning-based Generative Adversarial Network (CLGAN). In the first stage, PBDNet is trained on synthetic hazy-clean pairs using the atmospheric scattering model, extracting interpretable and transferable physical priors. In the second stage, CLGAN leverages these priors to guide unpaired image translation between real hazy and clean images. The integration of contrastive learning further enhances the alignment of fog-invariant representations, improving dehazing stability and generalization. Extensive experiments demonstrate the effectiveness of our approach. On the SOTS-outdoor dataset, our method achieves a PSNR of 34.13 dB and SSIM of 0.9863, surpassing state-of-the-art methods. On the real-world RTTS dataset, it achieves a BRISQUE score of 17.54, indicating superior perceptual quality. Additional evaluations using FADE metrics and object detection tasks confirm the practical value of our method in real-world scenarios. These results validate the effectiveness of combining physics-based priors with contrastive learning for robust real-world dehazing.

Neurocomputing, Volume 651, Article 130879. Pub Date: 2025-07-19. DOI: 10.1016/j.neucom.2025.130879
Title: Reducing hubness to improve inductive few-shot learning
Authors: Wenyi Tang, Haocheng Pei, Xin Wang, Zaobo He, Lei Yu, Xinsong Yang
Abstract: Few-Shot Learning (FSL) provides a promising paradigm that completes a task using only a few labeled samples, benefiting object recognition, outlier detection, and various other tasks. However, distance-based FSL is vulnerable to the hubness problem, i.e., a few points (hubs from one class) occur frequently among the nearest neighbors of other points (from other classes) in the high-dimensional space. The hubs therefore commonly mislead classifiers, resulting in considerable performance degradation. Most current solutions are either not designed to approach zero hubness or are limited to the transductive paradigm. As a countermeasure, this work explicitly aims to approach zero hubness for inductive FSL. We propose a new optimization objective that restricts the embeddings to approach zero hubness. A holistic Hubness rEDucing (HED) method is proposed to embed representations on the hypersphere while maintaining linear separability, resulting in reduced hubness and preserved class structure. A calibration mechanism is further devised to mitigate the negative impact of FSL’s data limitation by calibrating the potentially biased distribution of embeddings. Extensive evaluation results demonstrate that our proposed method reduces hubness to improve inductive FSL and, meanwhile, is compatible with a wide range of backbones.

Neurocomputing, Volume 652, Article 131036. Pub Date: 2025-07-19. DOI: 10.1016/j.neucom.2025.131036
Title: Advancing mobile robot navigation with DRL and heuristic rewards: A comprehensive review
Authors: Mazbahur Rahman Khan, Azhar Mohd Ibrahim, Suaib Al Mahmud, Farah Asyiqin Samat, Farahiyah Jasni, Muhammad Imran Mardzuki
Abstract: Robotic navigation is a critical component of autonomy, requiring efficient and safe mobility across diverse environments. The advent of Deep Reinforcement Learning (DRL) has spurred significant research into enabling mobile robots to learn effective navigation by optimizing actions based on environmental rewards. DRL has shown promise in addressing challenges such as dynamic environments and cooperative exploration. However, traditional DRL-based navigation faces several limitations, including the need for extensive training data, susceptibility to local traps in complex environments, low transferability to real-world scenarios, slow convergence, and low learning efficiency. Additionally, designing an appropriate reward function to achieve desired behaviors without unintended consequences remains complex; poorly designed rewards can lead to suboptimal or harmful outcomes. Recent studies have explored integrating heuristic search-based rewards into DRL algorithms to mitigate these issues. This study reviews the limitations of traditional DRL navigation and explores recent advancements in integrating heuristic search to design dynamic reward functions that enhance robot learning processes.

Neurocomputing, Volume 651, Article 130889. Pub Date: 2025-07-19. DOI: 10.1016/j.neucom.2025.130889
Title: Latent low-rank tensor wheel decomposition for visual data completion
Authors: Yihao Luo, Yuning Qiu, Peilin Yang, Hongxia Rao, Zhenhao Huang, Guoxu Zhou
Abstract: Recently, tensor wheel (TW) decomposition has gained increasing attention in the area of low-rank tensor completion (LRTC). Existing tensor factorization-based methods can capture, via low-rank regularization, either the global connections among all dimension-pairs of the data or only the local connections between adjacent modes. In this paper, we propose a novel TW decomposition with latent low-rank factors, where the low-rank regularizations are incorporated in the gradient domain of the ring factors to enhance the robustness of TW-ranks. Thus, the global low-rank structure of TW decomposition and the local continuity of high-order tensors can be exploited in a unified framework. Additionally, an efficient alternating direction method of multipliers (ADMM) algorithm is developed to solve the resulting optimization problem. Experimental results on real-world visual data, such as color images, multispectral images (MSI), and video sequences, showcase the superiority of the proposed method.

Neurocomputing, Volume 652, Article 130953. Pub Date: 2025-07-19. DOI: 10.1016/j.neucom.2025.130953
Title: Privacy-enhanced data distillation with probability distribution matching
Authors: Ke Pan, Yuxin Wen, Yiming Wang, Maoguo Gong, Hui Li, Shanfeng Wang
Abstract: Data distillation aims to condense a large-scale original dataset into a small-scale synthetic dataset while preserving as much data utility as possible. As one of the typical implementation mechanisms of data distillation, distribution matching works by aligning the feature distributions of synthetic and original samples, while avoiding the expensive computation and memory costs associated with other matching mechanisms. However, distribution matching still has two primary limitations. On the one hand, it suffers from inadequate class discrimination: synthetic samples within the same class may be misclassified into other classes due to their scattered feature distribution. On the other hand, it raises serious privacy concerns, as the synthetic dataset may inadvertently contain sensitive information extracted from the original dataset. Motivated by these observations, we propose a novel privacy-enhanced distribution matching-based data distillation algorithm. First, we design a probability distribution matching method with an intra-class aggregation constraint and an inter-class dispersion constraint based on the symmetric Kullback-Leibler divergence to strengthen the performance of data distillation. Second, we design a dynamic noise perturbation method based on differential privacy to enhance data privacy guarantees while preserving higher sample quality. Extensive experiments demonstrate that our algorithm achieves performance improvements of up to 4.5% on the CIFAR10 dataset and 2.7% on the SVHN dataset compared to state-of-the-art methods.

Neurocomputing, Volume 652, Article 131024. Pub Date: 2025-07-19. DOI: 10.1016/j.neucom.2025.131024
Title: Semi-supervised label distribution learning via global factorization and local constrain
Authors: Peiqiu Yu, Xiuyi Jia
Abstract: In label distribution learning, properly handling samples with missing label distributions is a particularly challenging task. When dealing with unlabeled samples, leveraging correlation is especially crucial, as it reveals the intrinsic patterns of the data distribution and effectively reduces the model’s hypothesis space. Currently, semi-supervised label distribution learning follows the same correlation mining methods as those used under complete supervision. However, due to the lack of supervision information for some samples, these methods designed for complete supervision are insufficient in a semi-supervised context. On one hand, the absence of labels for some samples makes it difficult to mine label correlations; on the other hand, label correlations mined solely from the samples are biased, leading to imprecise label correlations due to the missing labels. To address these issues, this paper proposes two strategies for mining label correlations in semi-supervised label distribution learning: first, exploring the common correlations between known and unknown label distributions; second, using the information of known label distributions to reveal the correlations of unknown label distributions. Specifically, at the global level, we employ independent component analysis for matrix completion of missing sample labels; at the local level, we improve the k-NN framework to use the label constraints of known label distributions to restrict the label distribution values of unknown label distributions. Based on these mined correlations, we design a semi-supervised label distribution learning algorithm. The algorithm outperforms existing methods in 67.27% of cases and shows statistically significant improvements in two-sample t-tests.

Neurocomputing, Volume 651, Article 130886. Pub Date: 2025-07-19. DOI: 10.1016/j.neucom.2025.130886
Title: Further results on global stability of Clifford-valued neural networks subject to time-varying delays
Authors: N. Manoj, R. Sriraman, R. Gurusamy, Yilun Shang
Abstract: This paper investigates the global exponential and asymptotic stability of Clifford-valued neural networks (CLVNNs) with multiple time-varying delays. Due to the non-commutative nature of Clifford algebra, analyzing the stability and other dynamical properties of CLVNNs becomes challenging. To address this issue, we separate the CLVNNs into equivalent real-valued neural networks (RVNNs). This separation simplifies the study of CLVNNs through their RVNN components. By constructing suitable Lyapunov–Krasovskii functionals (LKFs) and applying inequality techniques, we establish several sufficient conditions that guarantee the existence and uniqueness of the equilibrium point (EP), as well as the global exponential and asymptotic stability of the considered neural networks (NNs). These conditions are expressed as linear matrix inequalities (LMIs), which can be efficiently verified using the MATLAB LMI toolbox. To validate the analytical results, we present three numerical examples. Additionally, we propose a novel color image encryption algorithm and demonstrate its effectiveness through simulation results and detailed performance analysis.

Neurocomputing, Volume 652, Article 131021. Pub Date: 2025-07-19. DOI: 10.1016/j.neucom.2025.131021
Title: Epistemic graph: A plug-and-play module for hybrid representation learning
Authors: Jin Yuan, Shikai Chen, Yicheng Jiang, Yang Zhang, Zhongchao Shi, Jianping Fan, Yong Rui
Abstract: In recent years, deep models have achieved remarkable success in various vision tasks. However, their performance heavily relies on large training datasets. In contrast, humans exhibit hybrid learning, seamlessly integrating structured knowledge for cross-domain recognition or relying on fewer data samples for few-shot learning. Motivated by this human-like epistemic process, we aim to extend hybrid learning to computer vision tasks by integrating structured knowledge with data samples for more effective representation learning. Nevertheless, this extension faces significant challenges due to the substantial gap between structured knowledge and deep features learned from data samples, encompassing both dimensions and knowledge granularity. In this paper, a novel Epistemic Graph Layer (EGLayer) is introduced to enable hybrid learning, enhancing the exchange of information between deep features and a structured knowledge graph. Our EGLayer is composed of three major parts: a local graph module, a query aggregation model, and a novel correlation alignment loss function that emulates human epistemic ability. Serving as a plug-and-play module that can replace the standard linear classifier, EGLayer significantly improves the performance of deep models. Extensive experiments demonstrate that EGLayer greatly enhances representation learning for cross-domain recognition and few-shot learning, and the visualization of knowledge graphs can aid model interpretation.

Neurocomputing, Volume 652, Article 131052. Pub Date: 2025-07-18. DOI: 10.1016/j.neucom.2025.131052
Title: A review of advancements in low-light image enhancement using deep learning
Authors: Fangxue Liu, Lei Fan
Abstract: In low-light environments, the performance of computer vision algorithms often deteriorates significantly, adversely affecting key vision tasks such as segmentation, detection, and classification. With the rapid advancement of deep learning, its application to low-light image processing has attracted widespread attention and seen significant progress in recent years. However, there remains a lack of comprehensive surveys that systematically examine how recent deep-learning-based low-light image enhancement methods function and evaluate their effectiveness in enhancing downstream vision tasks. To address this gap, this review provides a detailed elaboration on how various recent approaches (from 2020 onward) operate and their enhancement mechanisms, supplemented with clear illustrations. It also investigates the impact of different enhancement techniques on subsequent vision tasks, critically analyzing their strengths and limitations. Our review found that image enhancement improved the performance of downstream vision tasks to varying degrees. Although supervised methods often produced images with high perceptual quality, they typically yielded only modest improvements in vision tasks. In contrast, zero-shot learning, despite achieving lower scores on image quality metrics, showed consistently boosted performance across various vision tasks. These findings suggest a disconnect between image quality metrics and those evaluating vision task performance. Additionally, unsupervised domain adaptation techniques demonstrated significant gains in segmentation tasks, highlighting their potential in practical low-light scenarios where labelled data is scarce. Observed limitations of existing studies are analysed, and directions for future research are proposed. This review serves as a useful reference for selecting low-light image enhancement techniques and optimizing vision task performance in low-light conditions.

Neurocomputing, Volume 651, Article 131056. Pub Date: 2025-07-18. DOI: 10.1016/j.neucom.2025.131056
Title: UAV formation control based on ensemble reinforcement learning
Authors: Kaifeng Wu, Lei Liu, Chengqing Liang, Lei Li
Abstract: Based on the frameworks of the Multi-Agent Deep Deterministic Policy Gradient (MADDPG) and Deep Deterministic Policy Gradient (DDPG) algorithms, this paper investigates the UAV formation control problem. To address the convergence difficulties inherent in multi-agent algorithms, curriculum reinforcement learning is applied during the training phase to decompose the task into incremental stages. A progressively hierarchical reward function tailored to each stage is designed, significantly reducing the training complexity of MADDPG. In the inference phase, an ensemble reinforcement learning strategy is adopted to enhance the accuracy of UAV formation control. When the UAVs approach their target positions, the control strategy switches from MADDPG to the DDPG algorithm, thus achieving more efficient and precise control. Through ablation and comparative experiments in a self-developed Software-in-the-Loop (SITL) simulation environment, the effectiveness and stability of the ensemble reinforcement learning algorithm in multi-agent scenarios are validated. Finally, real-world experiments further verify the practical applicability of the proposed algorithm (https://b23.tv/7ceLpLe).
