Latest Articles in Neurocomputing

NamPnP: Noise-Aware mechanism within Plug-and-Play framework for image enhancement
IF 5.5 | CAS Q2 | Computer Science
Neurocomputing Pub Date : 2025-06-17 DOI: 10.1016/j.neucom.2025.130662
Chenping Zhao, Yan Wang, Guohong Gao, Xixi Jia, Lijun Xu, Jianping Wang, Xiaofang Li
{"title":"NamPnP: Noise-Aware mechanism within Plug-and-Play framework for image enhancement","authors":"Chenping Zhao ,&nbsp;Yan Wang ,&nbsp;Guohong Gao ,&nbsp;Xixi Jia ,&nbsp;Lijun Xu ,&nbsp;Jianping Wang ,&nbsp;Xiaofang Li","doi":"10.1016/j.neucom.2025.130662","DOIUrl":"10.1016/j.neucom.2025.130662","url":null,"abstract":"<div><div>Low-light Image Enhancement (LIE) strives to improve contrast and restore details for images captured in dark conditions. Most of the previous LIE algorithms were developed based on the Retinex theory, which decomposes the observed image into illumination and reflectance components for pertinent processing. However, most of such methods that address the noise issue of the reflectance component regard the noise as Gaussian noise, which limits the applicability to diverse noise conditions. In this paper, we employ an appropriate noise degradation model in the designed Noise-Aware network to achieve the suppression of various noises in real-world scenarios. Specifically, the designed network leverages the powerful modeling capabilities of the Transformer to better integrate with the proposed degradation model, effectively eliminating noise with unknown distributions in real-world scenarios. Subsequently, it is plugged into the Retinex-based framework to achieve better enhancement performance. Additionally, the proposed method incorporates an edge-guided adaptive weight matrix and an iterative process to regularize the illumination component, resulting in more natural decomposition and further promoting the harmonious integration of illumination and reflectance components. Extensive evaluations on public datasets reveal that the proposed method outperforms existing techniques both qualitatively and quantitatively, demonstrating superior performance in noise removal under dark conditions while preserving finer texture and structural details.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"648 ","pages":"Article 130662"},"PeriodicalIF":5.5,"publicationDate":"2025-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144313572","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
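For readers unfamiliar with the Retinex-plus-plug-and-play setup the abstract builds on, here is a minimal illustrative sketch in NumPy/SciPy. It is not the authors' method: the paper plugs a Transformer-based noise-aware denoiser into the reflectance update and uses an edge-guided adaptive weight matrix for the illumination, whereas the Gaussian filters below are simple stand-ins for both, and the function name and parameters are invented for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def retinex_pnp_enhance(img, iters=3, gamma=0.6, eps=1e-4):
    """Toy plug-and-play Retinex enhancement.

    img: RGB array in [0, 1], shape (H, W, 3).  A Gaussian filter stands in
    for the paper's Transformer-based noise-aware denoiser and for its
    edge-guided illumination regularizer.
    """
    img = np.asarray(img, dtype=float)
    # Initial illumination: per-pixel channel maximum, lightly smoothed.
    L = gaussian_filter(img.max(axis=2), sigma=3.0)
    for _ in range(iters):
        # Reflectance update: divide out the current illumination...
        R = img / (L[..., None] + eps)
        # ...then denoise it (the "plug-and-play" prior step).
        R = np.stack([gaussian_filter(R[..., c], sigma=1.0) for c in range(3)], axis=2)
        R = np.clip(R, 0.0, 1.0)
        # Illumination update: what remains after removing reflectance,
        # kept spatially smooth (a crude proxy for the edge-guided regularizer).
        L = gaussian_filter((img / (R + eps)).mean(axis=2), sigma=3.0)
    # Brighten by gamma-correcting the illumination and recomposing.
    return np.clip(R * (L[..., None] ** gamma), 0.0, 1.0)
```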
GSRDR-GAN: Global search result diversification ranking approach based on multi-head self-attention and GAN
IF 5.5 | CAS Q2 | Computer Science
Neurocomputing Pub Date : 2025-06-17 DOI: 10.1016/j.neucom.2025.130723
Weidong Liu, Jinzhong Li, Shengbo Chen
{"title":"GSRDR-GAN: Global search result diversification ranking approach based on multi-head self-attention and GAN","authors":"Weidong Liu ,&nbsp;Jinzhong Li ,&nbsp;Shengbo Chen","doi":"10.1016/j.neucom.2025.130723","DOIUrl":"10.1016/j.neucom.2025.130723","url":null,"abstract":"<div><div>Search result diversification ranking aims to generate rankings that comprehensively cover multiple subtopics, but existing methods often struggle to balance ranking diversity with relevance and face challenges in modeling document interactions and dealing with limited high-quality training data. While GAN have proven highly successful in fields like computer vision, their application to search result diversification has been limited due to the discrete nature of ranking items and the complex interactions among documents. To address these challenges, we propose GSRDR-GAN, a novel approach that integrates multi-head self-attention with GAN. Our method consists of four key components designed to address the limitations of traditional approaches: the Selected Document State Retriever, the Subtopic Encoder with Multi-head Self-Attention, the Subtopic Decoder with Multi-head Self-Attention, and the Relevance Predictor. First, a self-attention-based feature extraction module is employed to enhance document representations, enabling the model to capture both global and local context effectively. Second, a GAN framework is introduced to improve generalization by generating diverse rankings, mitigating limited high-quality training data. Third, a carefully designed reward function optimizes the trade-off between ranking diversity and relevance, allowing the model to adaptively prioritize these competing objectives during training. Notably, the method improves the generator’s stability and the diversity of search results by reducing training variance, even without pre-trained models. Extensive experiments on the TREC Web Track dataset demonstrate that the proposed GSRDR-GAN method significantly enhances result diversity, achieving relative improvements of 1.7% in <span><math><mi>α</mi></math></span>-nDCG, 3.0% in ERR-IA, 3.3% in NRBP, and 0.9% in S-rec over strong baseline methods. Ablation studies and comparative analyses of different reward computation methods further validate the effectiveness of the proposed approach.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"648 ","pages":"Article 130723"},"PeriodicalIF":5.5,"publicationDate":"2025-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144322531","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
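As a rough illustration of the diversity-relevance trade-off such a reward must encode, the sketch below scores a ranking by relevance plus an α-nDCG-style novelty term. It is an assumption-laden toy, not the paper's reward function: the function name, its arguments, and the discount and weighting choices are all invented for illustration.

```python
from typing import Dict, List, Set

def diversity_relevance_reward(ranking: List[str],
                               doc_subtopics: Dict[str, Set[str]],
                               doc_relevance: Dict[str, float],
                               alpha: float = 0.5,
                               lam: float = 1.0) -> float:
    """Illustrative per-ranking reward trading off relevance and subtopic novelty.

    Mirrors the spirit of alpha-nDCG-style diversification measures; the
    paper's actual reward computation is not reproduced here.
    """
    covered: Dict[str, int] = {}        # subtopic -> how often already covered
    reward = 0.0
    for pos, doc in enumerate(ranking):
        discount = 1.0 / (pos + 1)      # stronger credit near the top
        novelty = sum((1 - alpha) ** covered.get(s, 0)
                      for s in doc_subtopics.get(doc, set()))
        reward += discount * (doc_relevance.get(doc, 0.0) + lam * novelty)
        for s in doc_subtopics.get(doc, set()):
            covered[s] = covered.get(s, 0) + 1
    return reward
```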
TSAD: Temporal–spatial association differences-based unsupervised anomaly detection for multivariate time-series
IF 5.5 | CAS Q2 | Computer Science
Neurocomputing Pub Date : 2025-06-16 DOI: 10.1016/j.neucom.2025.130611
Hanbing Zhu, Nan Xiao, Hefei Ling, Zongyi Li, Yuxuan Shi, Chuang Zhao, Hongxu Ji, Ping Li, Hui Liu
{"title":"TSAD: Temporal–spatial association differences-based unsupervised anomaly detection for multivariate time-series","authors":"Hanbing Zhu,&nbsp;Nan Xiao,&nbsp;Hefei Ling,&nbsp;Zongyi Li,&nbsp;Yuxuan Shi,&nbsp;Chuang Zhao,&nbsp;Hongxu Ji,&nbsp;Ping Li,&nbsp;Hui Liu","doi":"10.1016/j.neucom.2025.130611","DOIUrl":"10.1016/j.neucom.2025.130611","url":null,"abstract":"<div><div>Modern industrial control systems are vast and intricate, requiring the monitoring of data from numerous interconnected sensors and actuators for precise intrusion and anomaly detection. While unsupervised time series anomaly detection methods based on deep learning effectively capture complex nonlinear contextual dependencies, the anomaly metrics employed by current methods lack contextual anomaly information, thereby hindering the distinction between anomalies and normalies. Addressing this issue, a Temporal–Spatial Association Differences-based Anomaly Detection model (TSAD) is proposed. This model introduces temporal association difference learning, capturing the temporal association distribution of normal sequences while considering temporal association loss to calculate temporal association differences. Additionally, it incorporates spatial association difference learning, capturing the spatial association distribution of normal sequences while considering spatial association loss to calculate spatial association differences. By focusing on extracting temporal–spatial association patterns from multivariate time-series data under normal operating conditions, the model aggregates reconstruction errors and temporal–spatial association differences during testing to detect anomalies using a novel anomaly metric. Experimental results on four real-world datasets (SWaT, WADI, PSM, and MSL) demonstrate the state-of-the-art performance of the approach.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"648 ","pages":"Article 130611"},"PeriodicalIF":5.5,"publicationDate":"2025-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144307219","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
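The abstract's anomaly metric aggregates reconstruction error with temporal–spatial association differences. Below is a minimal sketch of one way such an aggregation could look; the softmax weighting, the function name, and the choice to sum the two difference terms are assumptions, not the paper's formulation.

```python
import numpy as np

def tsad_style_score(recon_err, temporal_diff, spatial_diff, tau=1.0):
    """Toy per-time-step anomaly score (all inputs are arrays of shape (T,)).

    The reconstruction error is re-weighted by a softmax over the summed
    temporal/spatial association differences, so steps whose associations
    deviate most from the learned normal pattern are amplified.  The paper's
    actual metric is not reproduced here.
    """
    assoc = temporal_diff + spatial_diff
    weight = np.exp((assoc - assoc.max()) / tau)   # numerically stable softmax
    weight = weight / weight.sum()
    return weight * recon_err                      # threshold this downstream
```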
Open set domain adaptation via unknown construction and dynamic threshold estimation
IF 5.5 | CAS Q2 | Computer Science
Neurocomputing Pub Date : 2025-06-16 DOI: 10.1016/j.neucom.2025.130668
Yong Zhang, Qi Zhang, Wenzhe Liu
{"title":"Open set domain adaptation via unknown construction and dynamic threshold estimation","authors":"Yong Zhang ,&nbsp;Qi Zhang ,&nbsp;Wenzhe Liu","doi":"10.1016/j.neucom.2025.130668","DOIUrl":"10.1016/j.neucom.2025.130668","url":null,"abstract":"<div><div>Open set domain adaptation (OSDA) focuses on adapting a model from the source domain to the target domain when their class distributions differ. The goal is to accurately recognize unknown classes while correctly classifying known classes. Existing research has indicated that adversarial networks can be efficient for unknown class recognition, yet threshold setting remains a challenge. We address this challenge by proposing an OSDA method that uses unknown construction and dynamic threshold estimation (UCDTE), which consists of three stages: unknown construction, dynamic threshold estimation, and distribution alignment. In the first stage, known as unknown construction, pseudo-unknown samples are constructed through feature fusion to learn information regarding the unknown class. In the second stage, dynamic threshold estimation, an unknown discriminator is constructed to further explore different semantic information in the unknown classes, and a dynamic threshold is generated for each target sample by combining it with the domain discriminator. Finally, in the distribution alignment stage, the dynamic threshold adversarial network aligns known samples between the source and target domains while reducing the intra-class gap of unknown samples in the target domain. Experiments conducted on three datasets have demonstrated the robustness and effectiveness of our approach in adapting models across different domains.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"648 ","pages":"Article 130668"},"PeriodicalIF":5.5,"publicationDate":"2025-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144307226","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
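A minimal sketch of what "constructing pseudo-unknown samples through feature fusion" could look like, assuming a simple mixup-style interpolation between features drawn from two different known classes; the exact fusion rule, the sampling range for the mixing coefficient, and the function name are assumptions rather than the paper's procedure.

```python
import numpy as np

def build_pseudo_unknowns(features, labels, n_samples=128, rng=None):
    """Illustrative pseudo-unknown construction by fusing features from two
    distinct known classes.

    features: (N, D) source-domain feature matrix; labels: (N,) class ids.
    The resulting samples are treated as "unknown" during training.
    """
    rng = np.random.default_rng(rng)
    pseudo = []
    for _ in range(n_samples):
        i, j = rng.integers(0, len(labels), size=2)
        while labels[i] == labels[j]:            # require two distinct classes
            j = rng.integers(0, len(labels))
        lam = rng.uniform(0.3, 0.7)              # stay away from either class
        pseudo.append(lam * features[i] + (1 - lam) * features[j])
    return np.stack(pseudo)
```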
Knowledge-based hyper-parameter adaptation of multi-stage differential evolution by deep reinforcement learning
IF 5.5 | CAS Q2 | Computer Science
Neurocomputing Pub Date : 2025-06-16 DOI: 10.1016/j.neucom.2025.130633
Mingzhang Han, Mingjie Fan, Xinchao Zhao, Lingjuan Ye
{"title":"Knowledge-based hyper-parameter adaptation of multi-stage differential evolution by deep reinforcement learning","authors":"Mingzhang Han,&nbsp;Mingjie Fan,&nbsp;Xinchao Zhao,&nbsp;Lingjuan Ye","doi":"10.1016/j.neucom.2025.130633","DOIUrl":"10.1016/j.neucom.2025.130633","url":null,"abstract":"<div><div>Differential evolution (DE) is a prominent algorithm in evolutionary computation, with adaptive control mechanisms for its operators and parameters being a critical research focus due to their impact on performance. Existing studies often rely on trial-and-error methods or deep reinforcement learning (DRL) for per-generation adaptive control, yet they inadequately explore adaptive hyper-parameter tuning across different stages of the evolution process. To address this limitation, this paper presents a knowledge-based framework named DRL-HP-* for multi-stage DE hyper-parameter adaptation using DRL. The framework divides the algorithm’s search procedure into multiple equal stages, where a DRL agent determines hyper-parameters in each stage based on five types of states that characterize the evolutionary process. A novel reward function is designed to comprehensively train the agent across all training functions, integrating the performance of the backbone algorithm. This approach results in the development of three new algorithms (DRL-HP-jSO, DRL-HP-LSHADE-RSP, and DRL-HP-EjSO). Experimental evaluations on the CEC’18 benchmark suite demonstrate that the proposed algorithms outperform eight state-of-the-art methods, demonstrating superior optimization performance. Further extensive experiments validate the effectiveness of the designed reward function and the framework’s scalability and robustness, highlighting its contribution to enabling stage-wise adaptive hyper-parameter control.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"648 ","pages":"Article 130633"},"PeriodicalIF":5.5,"publicationDate":"2025-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144322535","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
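To make the stage-wise control loop concrete, here is a compact DE sketch in which the search budget is split into equal stages and a policy maps a small state vector to (F, CR) at the start of each stage. The default heuristic policy, the three-element state, and all names are illustrative stand-ins for the paper's trained DRL agent and its five state types.

```python
import numpy as np

def stagewise_de(fobj, bounds, pop_size=30, n_stages=5, gens_per_stage=20,
                 policy=None, rng=None):
    """Sketch of stage-wise hyper-parameter control for differential evolution.

    `policy` maps (stage fraction, best-fitness improvement, population
    diversity) to (F, CR).  In the paper this role is played by a trained DRL
    agent; here a simple heuristic default stands in.
    """
    rng = np.random.default_rng(rng)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([fobj(x) for x in pop])
    prev_best = fit.min()

    if policy is None:
        policy = lambda state: (0.5 + 0.3 * state[0], 0.9 - 0.4 * state[0])

    for stage in range(n_stages):
        diversity = pop.std(axis=0).mean() / (hi - lo).mean()
        improvement = max(prev_best - fit.min(), 0.0)
        F, CR = policy((stage / max(n_stages - 1, 1), improvement, diversity))
        prev_best = fit.min()
        for _ in range(gens_per_stage):
            for i in range(pop_size):
                others = [j for j in range(pop_size) if j != i]
                a, b, c = pop[rng.choice(others, 3, replace=False)]
                mutant = np.clip(a + F * (b - c), lo, hi)        # DE/rand/1
                cross = rng.random(dim) < CR
                cross[rng.integers(dim)] = True                  # binomial crossover
                trial = np.where(cross, mutant, pop[i])
                f_trial = fobj(trial)
                if f_trial <= fit[i]:                            # greedy selection
                    pop[i], fit[i] = trial, f_trial
    return pop[fit.argmin()], fit.min()
```

For instance, `stagewise_de(lambda x: float(np.sum(x**2)), [(-5, 5)] * 10)` minimises a 10-dimensional sphere function with the default heuristic standing in for the learned policy.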
Neural adaptive delay differential equations
IF 5.5 | CAS Q2 | Computer Science
Neurocomputing Pub Date : 2025-06-16 DOI: 10.1016/j.neucom.2025.130634
Chao Zhou, Qieshi Zhang, Jun Cheng
{"title":"Neural adaptive delay differential equations","authors":"Chao Zhou,&nbsp;Qieshi Zhang,&nbsp;Jun Cheng","doi":"10.1016/j.neucom.2025.130634","DOIUrl":"10.1016/j.neucom.2025.130634","url":null,"abstract":"<div><div>Continuous-depth neural networks, such as neural ordinary differential equations (NODEs), have garnered significant interest in recent years owing to their ability to bridge deep neural networks with dynamical systems. This study introduced a new type of continuous-depth neural network called neural adaptive delay differential equations (NADDEs). Unlike recently proposed neural delay differential equations (NDDEs) that require a fixed delay, NADDEs utilize a learnable, adaptive delay. Specifically, NADDEs reformulate the learning process as a delay-free optimal control problem and leverage the calculus of variations to derive their learning algorithms. This approach enables the model to autonomously identify suitable delays for given tasks, thereby establishing more flexible temporal dependencies to optimize the utilization of historical representations. The proposed NADDEs can reconstruct dynamical systems with time-delay effects by learning true delays from data, a capability beyond both NODEs and NDDEs, and achieve superior performance on concentric and image-classification datasets, including MNIST, CIFAR-10, and SVHN.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"648 ","pages":"Article 130634"},"PeriodicalIF":5.5,"publicationDate":"2025-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144307222","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
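For intuition about the delay term that NADDEs make learnable, the sketch below integrates a generic delay differential equation dz/dt = f(z(t), z(t − τ)) with a fixed-step Euler scheme and a constant pre-history. In the paper f is a neural network and τ is optimised through the variational learning algorithm; here both are ordinary arguments, and the integrator itself is an assumption used only for illustration.

```python
import numpy as np

def integrate_dde(f, z0, tau, t1, dt=0.01):
    """Euler integration of dz/dt = f(z(t), z(t - tau)).

    The history before t = 0 is held constant at z0 (a common simplification);
    the delayed state is looked up from the stored trajectory.
    """
    n_steps = int(t1 / dt)
    hist = [np.asarray(z0, dtype=float)]
    for k in range(n_steps):
        t = k * dt
        idx = int(round((t - tau) / dt))        # index of the delayed state
        z_delayed = hist[idx] if idx >= 0 else hist[0]
        hist.append(hist[-1] + dt * f(hist[-1], z_delayed))
    return np.stack(hist)                       # trajectory, shape (n_steps + 1, dim)

# e.g. a delayed logistic system:
# integrate_dde(lambda z, zd: z * (1 - zd), [0.5], tau=1.0, t1=20)
```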
Low-rank tensor recovery via jointing the non-convex regularization and deep prior
IF 5.5 | CAS Q2 | Computer Science
Neurocomputing Pub Date : 2025-06-16 DOI: 10.1016/j.neucom.2025.130610
Qing Liu, Huanmin Ge, Xinhua Su
{"title":"Low-rank tensor recovery via jointing the non-convex regularization and deep prior","authors":"Qing Liu,&nbsp;Huanmin Ge,&nbsp;Xinhua Su","doi":"10.1016/j.neucom.2025.130610","DOIUrl":"10.1016/j.neucom.2025.130610","url":null,"abstract":"<div><div>This paper addresses the low-rank tensor completion (LRTC) and tensor robust principal component analysis (TRPCA), which have broad applications in the recovery of real-world multi-dimensional data. To enhance recovery performance, we propose novel non-convex tensor recovery models for both LRTC and TRPCA by combining low-rank priors with data-driven deep priors. Specifically, we use the tensor <span><math><msubsup><mrow><mi>ℓ</mi></mrow><mrow><mi>r</mi></mrow><mrow><mi>p</mi></mrow></msubsup></math></span> pseudo-norm to effectively capture the low-rank structure of the tensor, providing a more accurate approximation of its rank. In addition, a convolutional neural network (CNN) denoiser is incorporated to learn deep prior information, further improving recovery accuracy. We also develop efficient iterative algorithms for solving the proposed models based on the alternating direction method of multipliers (ADMM). Experimental results show that the proposed methods outperform state-of-the-art techniques in terms of recovery accuracy for both LRTC and TRPCA.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"648 ","pages":"Article 130610"},"PeriodicalIF":5.5,"publicationDate":"2025-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144313574","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
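As a much-simplified stand-in for the non-convex tensor ℓ_r^p model with a CNN denoiser prior, the sketch below runs convex singular-value thresholding on a partially observed matrix. It only illustrates the shared "shrink the spectrum, then re-impose the observed entries" structure of such ADMM-style solvers; every detail here is an assumption rather than the paper's algorithm.

```python
import numpy as np

def svt_complete(M, mask, tau=5.0, iters=200):
    """Matrix-completion toy: iterative singular-value thresholding.

    M: observed matrix (unobserved entries arbitrary); mask: boolean array of
    the same shape marking observed entries.  The paper instead uses a
    non-convex tensor pseudo-norm plus a learned denoiser, solved with ADMM.
    """
    X = np.where(mask, M, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt   # spectral shrinkage
        X[mask] = M[mask]                                # data consistency
    return X
```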
Learning hierarchical image feature for efficient image rectification
IF 5.5 | CAS Q2 | Computer Science
Neurocomputing Pub Date : 2025-06-16 DOI: 10.1016/j.neucom.2025.130646
Nanjun Yuan, Fan Yang, Yuefeng Zhang, Luxia Ai, Wenbing Tao
{"title":"Learning hierarchical image feature for efficient image rectification","authors":"Nanjun Yuan,&nbsp;Fan Yang,&nbsp;Yuefeng Zhang,&nbsp;Luxia Ai,&nbsp;Wenbing Tao","doi":"10.1016/j.neucom.2025.130646","DOIUrl":"10.1016/j.neucom.2025.130646","url":null,"abstract":"<div><div>Image stitching methods often use single-homography or multi-homography estimation for alignment, resulting in images with undesirable irregular boundaries. To address this, cropping and image inpainting are the common operations but discard image regions or introduce content that differs from reality. Recently, deep learning-based methods improve the content fidelity of the rectified images, while suffering from distortion, artifacts, and discontinuous deformations between adjacent image regions. In this work, we propose an efficient network based on the transformer (Rectformer) for image rectification. Specifically, we propose the Global and Local Features (GLF) module, which consists of the Hybrid Self-Attention module and Dynamic Convolution module to capture hierarchical image features. We further introduce two auxiliary losses for better image rectification, bidirectional contextual (BC) loss and deformation consistency (DC) loss. The bidirectional contextual loss encourages the model to preserve image local structure information. The loss of deformation consistency improves the network’s geometric recovery and generalization capabilities through a self-supervised learning strategy. Finally, extensive experiments demonstrate that our method outperforms the existing state-of-the-art methods for rotation correction and rectangling.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"648 ","pages":"Article 130646"},"PeriodicalIF":5.5,"publicationDate":"2025-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144331270","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Attributed graph clustering with multi-scale weight-based pairwise coarsening and contrastive learning
IF 5.5 | CAS Q2 | Computer Science
Neurocomputing Pub Date : 2025-06-16 DOI: 10.1016/j.neucom.2025.130796
Binxiong Li, Yuefei Wang, Binyu Zhao, Heyang Gao, Benhan Yang, Quanzhou Luo, Xue Li, Xu Xiang, Yujie Liu, Huijie Tang
{"title":"Attributed graph clustering with multi-scale weight-based pairwise coarsening and contrastive learning","authors":"Binxiong Li ,&nbsp;Yuefei Wang ,&nbsp;Binyu Zhao ,&nbsp;Heyang Gao ,&nbsp;Benhan Yang ,&nbsp;Quanzhou Luo ,&nbsp;Xue Li ,&nbsp;Xu Xiang ,&nbsp;Yujie Liu ,&nbsp;Huijie Tang","doi":"10.1016/j.neucom.2025.130796","DOIUrl":"10.1016/j.neucom.2025.130796","url":null,"abstract":"<div><div>This study introduces the Multi-Scale Weight-Based Pairwise Coarsening and Contrastive Learning (MPCCL) model, a novel approach for attributed graph clustering that effectively bridges critical gaps in existing methods, including long-range dependency, feature collapse, and information loss. Traditional methods often struggle to capture high-order graph features due to their reliance on low-order attribute information, while contrastive learning techniques face limitations in feature diversity by overemphasizing local neighborhood structures. Similarly, conventional graph coarsening methods, though reducing graph scale, frequently lose fine-grained structural details. MPCCL addresses these challenges through an innovative multi-scale coarsening strategy, which progressively condenses the graph while prioritizing the merging of key edges based on global node similarity to preserve essential structural information. It further introduces a one-to-many contrastive learning paradigm, integrating node embeddings with augmented graph views and cluster centroids to enhance feature diversity, while mitigating feature masking issues caused by the accumulation of high-frequency node weights during multi-scale coarsening. By incorporating a graph reconstruction loss and KL divergence into its self-supervised learning framework, MPCCL ensures cross-scale consistency of node representations. Experimental evaluations reveal that MPCCL achieves a significant improvement in clustering performance, including a remarkable 15.24 % increase in NMI on the ACM dataset and notable robust gains on smaller-scale datasets such as Citeseer, Cora and DBLP. In the large-scale Reuters dataset, it significantly improved by 17.84 %, further validating its advantage in enhancing clustering performance and robustness. These results highlight MPCCL’s potential for application in diverse graph clustering tasks, ranging from social network analysis to bioinformatics and knowledge graph-based data mining. The source code for this study is available at <span><span>https://github.com/YF-W/MPCCL</span><svg><path></path></svg></span></div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"648 ","pages":"Article 130796"},"PeriodicalIF":5.5,"publicationDate":"2025-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144322435","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
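To illustrate what a "one-to-many" contrastive objective over node embeddings, augmented views, and cluster centroids could look like, here is a small NumPy sketch of an InfoNCE-style loss with two positives per node. The exact choice of positives and negatives, the temperature, and the normalisation in the paper may differ, and the function name is invented.

```python
import numpy as np

def one_to_many_contrastive_loss(z, z_aug, centroids, assign, temp=0.5):
    """Toy one-to-many contrastive loss.

    Each node embedding is pulled toward both its augmented-view counterpart
    and its cluster centroid, against all other nodes and centroids as
    negatives.  z, z_aug: (N, D); centroids: (K, D); assign: (N,) cluster ids.
    """
    def normed(x):
        return x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-12)

    z, z_aug, c = normed(z), normed(z_aug), normed(centroids)
    pos_view = np.exp((z * z_aug).sum(1) / temp)           # node vs. its own view
    pos_cent = np.exp((z * c[assign]).sum(1) / temp)       # node vs. its centroid
    neg = np.exp(z @ z_aug.T / temp).sum(1) + np.exp(z @ c.T / temp).sum(1)
    return float(-np.log((pos_view + pos_cent) / neg).mean())
```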
Local spatial self-attention based deep network for meteorological data downscaling
IF 5.5 | CAS Q2 | Computer Science
Neurocomputing Pub Date : 2025-06-16 DOI: 10.1016/j.neucom.2025.130653
Sheng Gao, Lianlei Lin, Zongwei Zhang, Junkai Wang, Hanqing Zhao, Hangyi Yu
{"title":"Local spatial self-attention based deep network for meteorological data downscaling","authors":"Sheng Gao,&nbsp;Lianlei Lin,&nbsp;Zongwei Zhang,&nbsp;Junkai Wang,&nbsp;Hanqing Zhao,&nbsp;Hangyi Yu","doi":"10.1016/j.neucom.2025.130653","DOIUrl":"10.1016/j.neucom.2025.130653","url":null,"abstract":"<div><div>High-resolution meteorological data are essential for simulation and decision-making in weather-sensitive industries such as agriculture and forestry. However, existing meteorological products typically have low spatial resolution (coarser than 0.1°), making it difficult to capture the fine-grained spatial distribution of meteorological variables. Most existing deep learning-based downscaling methods treat the task as an image super-resolution problem, overlooking key characteristics of meteorological data, such as multi-scale local spatial correlation, local–global spatial dependency, and the complex relationship between terrain and meteorological fields, thus limiting modeling accuracy. To address this issue, this paper proposes a deep neural network based on local spatial self-attention, LSSANet, for the spatial downscaling of meteorological data. Specifically, the Local Spatial Self-Attention Module (LSAM) is proposed to capture local–global spatial correlations of meteorological fields. The Multi-scale Dynamic Aggregation Module (MDAM) is introduced to handle multi-scale local spatial dependencies. Furthermore, an elevation embedding layer and a two-stage training strategy are developed to integrate the relationship between terrain and the meteorological field. Experimental results show that LSSANet achieves superior performance compared to traditional and state-of-the-art methods. In the 4<span><math><mo>×</mo></math></span> downscaling task, LSSANet reduces MAE by 5.1%–75.8%; in the 8<span><math><mo>×</mo></math></span> task, by 4.3%–59.7%; and in the 16<span><math><mo>×</mo></math></span> task, by 1.9%–53.4%. Engineering application experiments further demonstrate that the proposed method can generate high-resolution future meteorological forecasts based on the GFS product. These results indicate that LSSANet can accurately reconstruct or predict high-resolution meteorological fields in specific regions, providing valuable support for planning and decision-making in meteorology-sensitive industries.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"649 ","pages":"Article 130653"},"PeriodicalIF":5.5,"publicationDate":"2025-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144469902","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
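A bare-bones sketch of local spatial self-attention over a gridded meteorological field: plain dot-product attention restricted to non-overlapping windows. The paper's LSAM additionally models local–global coupling, multi-scale aggregation, and elevation embeddings, none of which appear in this toy; the window size and names are assumptions.

```python
import numpy as np

def local_window_attention(x, win=8):
    """Toy local spatial self-attention.

    x: field of shape (H, W, C) with H and W divisible by `win`.  Attention is
    computed independently inside each non-overlapping win x win window.
    """
    x = np.asarray(x, dtype=float)
    H, W, C = x.shape
    out = np.empty_like(x)
    for i in range(0, H, win):
        for j in range(0, W, win):
            tokens = x[i:i + win, j:j + win].reshape(-1, C)      # (win*win, C)
            scores = tokens @ tokens.T / np.sqrt(C)              # pairwise affinities
            attn = np.exp(scores - scores.max(axis=1, keepdims=True))
            attn /= attn.sum(axis=1, keepdims=True)              # row-wise softmax
            out[i:i + win, j:j + win] = (attn @ tokens).reshape(win, win, C)
    return out
```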