Latest Articles in IEEE Signal Processing Letters

Dual-Branch Network for No-Reference Super-Resolution Image Quality Assessment
IF 3.2 | CAS Zone 2 (Engineering & Technology)
IEEE Signal Processing Letters | Pub Date: 2025-03-21 | DOI: 10.1109/LSP.2025.3553432
Authors: Tong Tang; Fan Yang; Xinyu Lin; Weisheng Li
Abstract: No-reference super-resolution image quality assessment (SR-IQA) has become a critical technique for optimizing SR algorithms; the key challenge is how to comprehensively learn the visually relevant features of an SR image. Existing methods ignore context information and feature correlation. To tackle this problem, this letter proposes a dual-branch network for no-reference super-resolution image quality assessment (DBSRNet). First, a dual-branch feature extraction module is designed, in which a residual network and a receptive field block net are combined to learn multi-scale local features, while stacked vision transformer blocks learn global features. Then, correlations between the dual-branch features are learned and fused with a self-attention structure, and the final predicted score is obtained by an adaptive feature pooling strategy. Experimental results show that DBSRNet significantly outperforms state-of-the-art methods in prediction accuracy on all SR-IQA datasets.
Vol. 32, pp. 1366-1370.
Citations: 0
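
The fusion stage described in this abstract (self-attention over the concatenated dual-branch features, then adaptive pooling to a single score) can be sketched as follows. This is a minimal illustration under my own assumptions: the branch backbones (residual network, receptive field block net, ViT blocks) are abstracted away as precomputed token sequences, and the names `DualBranchFusionHead`, `local_tokens`, and `global_tokens` are hypothetical, not from the paper.

```python
import torch
import torch.nn as nn

class DualBranchFusionHead(nn.Module):
    """Toy stand-in for a dual-branch fusion stage: self-attention over concatenated
    branch tokens, then attention-weighted (adaptive) pooling to one quality score."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.pool_w = nn.Linear(dim, 1)                       # adaptive pooling weights
        self.head = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, 1))

    def forward(self, local_tokens, global_tokens):
        # local_tokens, global_tokens: (B, N, dim) features from the two branches
        tokens = torch.cat([local_tokens, global_tokens], dim=1)
        fused, _ = self.attn(tokens, tokens, tokens)          # cross-branch correlations
        w = torch.softmax(self.pool_w(fused), dim=1)          # (B, 2N, 1) pooling weights
        pooled = (w * fused).sum(dim=1)                       # adaptive feature pooling
        return self.head(pooled).squeeze(-1)                  # predicted quality score

scores = DualBranchFusionHead()(torch.randn(2, 49, 256), torch.randn(2, 49, 256))
```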
DNGG: Medical Image Lossless Encryption via Deep Network Guided Generative
IF 3.2 | CAS Zone 2 (Engineering & Technology)
IEEE Signal Processing Letters | Pub Date: 2025-03-18 | DOI: 10.1109/LSP.2025.3552528
Authors: Lin Fan; Meng Li; Zhenting Hu; Yuan Hong; Dexing Kong
Abstract: Ensuring the security and integrity of medical images is crucial for telemedicine. Recently, deep learning-based image encryption techniques have significantly improved data transmission security. However, the unpredictability of complex models may damage the image during reconstruction, negatively impacting medical diagnosis. To address this issue, we propose a lossless encryption algorithm for medical images based on a guided image generation neural network. We first design a guided image generation network and then train a generator with random keys to produce a key map. The key map guides the encryption of the secret image through a bitwise XOR (bit-XOR) operation, effectively merging the secret image with the key map. During decryption, the original image is restored losslessly using a key map generated from the random key. Experimental results show that the encryption algorithm strongly ensures data security and exhibits strong resistance to attacks.
Vol. 32, pp. 1331-1335.
Citations: 0
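
The lossless property claimed in this abstract follows from the involutive nature of bitwise XOR: XORing with the same key map twice recovers the image exactly. A minimal NumPy sketch, where the GAN-generated key map is replaced by a pseudo-random one for illustration only:

```python
import numpy as np

def xor_encrypt(image, key_map):
    """Bit-XOR an 8-bit image with a key map of the same shape (lossless and involutive)."""
    return np.bitwise_xor(image, key_map)

rng = np.random.default_rng(seed=42)              # stand-in for the key-driven generator
image = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
key_map = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)

cipher = xor_encrypt(image, key_map)              # encryption
restored = xor_encrypt(cipher, key_map)           # decryption with the same key map
assert np.array_equal(restored, image)            # lossless recovery
```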
Optimizing the Order of Modes in Tensor Train Decomposition
IF 3.2 | CAS Zone 2 (Engineering & Technology)
IEEE Signal Processing Letters | Pub Date: 2025-03-17 | DOI: 10.1109/LSP.2025.3552005
Authors: Petr Tichavský; Ondřej Straka
Abstract: The tensor train (TT) is a popular way of representing high-dimensional hyper-rectangular data structures called tensors. It is widely used, for example, in quantum chemistry under the name "matrix product state". The complexity of the TT model mainly depends on the bond dimensions that connect the TT cores constituting the model. Unlike canonical polyadic decomposition, the complexity of the TT model may, in general, depend on the order of the modes/indices in the data structure or, equivalently, on the order of the core tensors in the TT. This letter provides methods for optimizing the order of the modes to reduce the bond dimensions. Since the number of possible orderings of the cores is exponentially high, we propose a greedy algorithm that provides a suboptimal solution. We consider three problem setups, i.e., specifications of the tensor: a tensor given by a list of all its elements, a tensor described by a TT model with some default order of the modes, and a tensor obtained by sampling a multivariate function.
Vol. 32, pp. 1361-1365.
Citations: 0
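
For the first setup (a tensor given by all its elements), one way to realize the greedy idea is to grow the mode order one mode at a time, always picking the candidate whose matricization (placed modes as rows, remaining modes as columns) has the smallest numerical rank, since that rank bounds the corresponding TT bond dimension. The sketch below is my own minimal reading of such a greedy pass, not the authors' algorithm:

```python
import numpy as np

def greedy_mode_order(T, tol=1e-10):
    """Greedily order the modes of tensor T so that each successive matricization
    (chosen modes as rows, remaining modes as columns) has small numerical rank."""
    d = T.ndim
    placed, remaining = [], list(range(d))
    while len(remaining) > 1:
        best = None
        for m in remaining:
            rows = placed + [m]
            cols = [a for a in remaining if a != m]
            M = np.transpose(T, rows + cols).reshape(
                int(np.prod([T.shape[a] for a in rows])), -1)
            r = np.linalg.matrix_rank(M, tol=tol)
            if best is None or r < best[1]:
                best = (m, r)
        placed.append(best[0])
        remaining.remove(best[0])
    return placed + remaining

T = np.random.rand(2, 3, 2, 3)
print(greedy_mode_order(T))
```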
A Two-Level Weighted Low-Complexity Adaptive Beamforming Method
IF 3.2 | CAS Zone 2 (Engineering & Technology)
IEEE Signal Processing Letters | Pub Date: 2025-03-17 | DOI: 10.1109/LSP.2025.3551957
Authors: Yuxi Du; Weijia Cui; Bin Ba; Yangyang Chen; Long Zhang
Abstract: To address the high computational complexity of adaptive beamforming in array radar systems, a two-level weighted low-complexity adaptive beamforming method is proposed in this letter. First, the uniform linear array is divided into subarrays with the same number of elements. The desired signal received by the elements of each subarray is then coherently combined by compensating for the delay with a first-level weighting. Finally, interference and noise are suppressed with a second-level weighting to obtain the ideal output signal-to-interference-plus-noise ratio. Simulation results verify the effectiveness and reliability of the proposed method.
Vol. 32, pp. 1246-1250.
Citations: 0
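
A rough NumPy sketch of the two-level idea under my own assumptions: the first-level weights phase-align (delay-compensate) the desired signal within each subarray, and the second level applies an MVDR-style adaptive weight to the reduced set of subarray outputs. The letter does not specify MVDR; it is used here only as a familiar stand-in for the second-level weighting.

```python
import numpy as np

def two_level_beamform(X, theta_d, n_sub, d=0.5):
    """X: (N, T) complex snapshots of an N-element ULA (spacing d in wavelengths),
    theta_d: desired direction (rad), n_sub: number of equal-size subarrays."""
    N, T = X.shape
    m = N // n_sub
    # Level 1: delay-compensate the desired signal inside each subarray
    w1 = np.exp(-1j * 2 * np.pi * d * np.arange(m) * np.sin(theta_d))
    Y = np.stack([w1.conj() @ X[i * m:(i + 1) * m] for i in range(n_sub)])  # (n_sub, T)
    # Level 2: adaptive weighting on the subarray outputs (MVDR-style here)
    a = np.exp(-1j * 2 * np.pi * d * m * np.arange(n_sub) * np.sin(theta_d))
    R = Y @ Y.conj().T / T
    Ri = np.linalg.inv(R + 1e-6 * np.eye(n_sub))                # diagonal loading
    w2 = Ri @ a / (a.conj() @ Ri @ a)
    return w2.conj() @ Y                                        # (T,) beamformer output
```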
DCE-Net: A Dual-Frequency Domain Knowledge-Guided Framework for Image Dehazing via Detail and Content Enhancements
IF 3.2 | CAS Zone 2 (Engineering & Technology)
IEEE Signal Processing Letters | Pub Date: 2025-03-14 | DOI: 10.1109/LSP.2025.3551201
Authors: Jianlei Liu; Yuting Pang; Shilong Wang
Abstract: Existing image dehazing methods are largely constrained to spatial-domain processing and fail to fully leverage the rich knowledge embedded in the frequency domain of clear images. In addition, the traditional convolutional operations in network architectures limit their mapping capability to some extent. To address these issues, a novel image dehazing network, the Detail and Content Enhancement Network (DCE-Net), is proposed. DCE-Net redefines the dehazing task from a frequency-domain perspective, incorporating differential convolution and attention mechanisms to design the High-Frequency Detail Enhancement Module (HDEM) and the Low-Frequency Content Enhancement Module (LCEM). Furthermore, a Dual-Frequency Domain Knowledge-Guided Strategy (DDKS) is introduced during training to exploit the abundant frequency-domain priors inherent in clear images. Experimental results demonstrate that DCE-Net achieves outstanding performance on both synthetic benchmark datasets and real-world hazy scenes, significantly restoring image clarity and contrast while effectively preserving details and content.
Vol. 32, pp. 1356-1360.
Citations: 0
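
The dual-frequency view can be made concrete with a simple FFT split of an image into low-frequency content and high-frequency detail, which is the kind of decomposition the HDEM/LCEM modules and the DDKS priors operate on. The fixed circular low-pass mask below is an assumption for illustration, not the paper's mechanism:

```python
import numpy as np

def split_frequencies(img, radius=16):
    """Split a (H, W) or (H, W, C) image into low-frequency content and
    high-frequency detail using a circular low-pass mask in the FFT domain."""
    F = np.fft.fftshift(np.fft.fft2(img, axes=(0, 1)), axes=(0, 1))
    h, w = img.shape[:2]
    yy, xx = np.ogrid[:h, :w]
    mask = ((yy - h / 2) ** 2 + (xx - w / 2) ** 2) <= radius ** 2
    if img.ndim == 3:
        mask = mask[..., None]
    low = np.fft.ifft2(np.fft.ifftshift(F * mask, axes=(0, 1)), axes=(0, 1)).real
    high = img - low                          # detail = residual above the cutoff
    return low, high
```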
Enhanced Attention Context Model for Learned Image Compression
IF 3.2 | CAS Zone 2 (Engineering & Technology)
IEEE Signal Processing Letters | Pub Date: 2025-03-14 | DOI: 10.1109/LSP.2025.3551659
Authors: Zhengxin Chen; Xiaohai He; Chao Ren; Tingrong Zhang; Shuhua Xiong
Abstract: Recently, deep learning has brought encouraging advances in image compression. An accurate entropy model, which estimates the probability distribution of the latent representation and reduces the bits required to compress an image, is one of the keys to the success of learned image compression methods. The latent representation exhibits correlations in local, non-local, and cross-channel contexts, but most entropy models consider only some of them, leading to suboptimal entropy estimation. In this letter, we propose an enhanced attention context model (EACM) that makes full use of the various correlations between latent elements for accurate entropy estimation. The proposed EACM contains a local spatial attention block (LSAB), a local channel attention block (LCAB), a global spatial attention block (GSAB), and a global channel attention block (GCAB), which are carefully designed to adaptively exploit local spatial, local channel, global spatial, and global channel correlations, respectively. Experimental results on benchmark datasets show that our image compression model with the proposed EACM outperforms several state-of-the-art methods quantitatively and qualitatively.
Vol. 32, pp. 1301-1305.
Citations: 0
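
The role of an entropy model such as EACM is to predict, for each latent element, parameters of a probability distribution; the resulting probability determines the code length (about -log2 p bits per element). A tiny sketch of that rate computation under a Gaussian entropy model, a common choice in learned compression (the `mu`/`sigma` here stand in for a context model's outputs):

```python
import numpy as np
from scipy.stats import norm

def rate_in_bits(y, mu, sigma):
    """Bits needed to entropy-code the rounded latents y given predicted mean/scale:
    p(y_hat) = Phi(y_hat + 0.5; mu, sigma) - Phi(y_hat - 0.5; mu, sigma)."""
    y_hat = np.round(y)
    p = norm.cdf(y_hat + 0.5, loc=mu, scale=sigma) - norm.cdf(y_hat - 0.5, loc=mu, scale=sigma)
    return float(np.sum(-np.log2(np.clip(p, 1e-12, 1.0))))

y = np.random.randn(1000) * 3.0
print(rate_in_bits(y, mu=0.0, sigma=3.0), "bits")   # a sharper (better-matched) model needs fewer bits
```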
XSNet: A Lightweight X-Ray Security Image Segmentation Model Combining State-Space Models and Convolutional Neural Networks
IF 3.2 | CAS Zone 2 (Engineering & Technology)
IEEE Signal Processing Letters | Pub Date: 2025-03-14 | DOI: 10.1109/LSP.2025.3550769
Authors: Weichao Jia; Wei Liu; Changsheng Zhang; Jian Fu; Qiong Liu
Abstract: In this letter, we propose XSNet, a novel lightweight X-ray image contraband segmentation network that integrates state-space models (SSMs) with convolutional neural networks (CNNs) to achieve a favorable trade-off between segmentation accuracy and lightweight design for computer-aided X-ray security checks. The model is built on an encoder-decoder framework. Specifically, we design a Multi-scale Convolution Fusion (MCF) block for multi-scale information extraction and a Dual-branch State Space Model (DSSM) block to relieve the bias caused by the imbalance of a single-branch structure in feature extraction while maintaining the SSM's ability to model long-range pixel dependencies. In addition, we present the model in two sizes, XSNet-s and XSNet-l. Quantitative and qualitative evaluations on the public PIDray and PIXray datasets show the superiority of both models in terms of mean Intersection over Union (mIoU) and FLOPs.
Vol. 32, pp. 1351-1355.
Citations: 0
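
The long-range modeling ingredient here is the linear state-space recurrence that SSM-style blocks apply along a flattened pixel sequence. A bare-bones NumPy version of that recurrence is shown below; discretization, gating, and the dual-branch structure of DSSM are all omitted, so this is only the core scan:

```python
import numpy as np

def ssm_scan(x, A, B, C):
    """Discrete linear SSM: h_t = A h_{t-1} + B x_t,  y_t = C h_t.
    x: (T, d_in) flattened pixel/feature sequence; returns y: (T, d_out)."""
    T = x.shape[0]
    h = np.zeros(A.shape[0])
    y = np.empty((T, C.shape[0]))
    for t in range(T):
        h = A @ h + B @ x[t]
        y[t] = C @ h
    return y

d_state, d_in, d_out = 8, 4, 4
A = 0.9 * np.eye(d_state)                     # stable state transition
B = np.random.randn(d_state, d_in) * 0.1
C = np.random.randn(d_out, d_state) * 0.1
y = ssm_scan(np.random.randn(64, d_in), A, B, C)
```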
Algebraic Solution for Linear Array-Based 3D Localization Without Deployment Limitations
IF 3.2 | CAS Zone 2 (Engineering & Technology)
IEEE Signal Processing Letters | Pub Date: 2025-03-14 | DOI: 10.1109/LSP.2025.3551200
Authors: Chengyu Li; Beichuan Tang; Yanbing Yang; Liangyin Chen; Yimao Sun
Abstract: Localizing a three-dimensional (3D) source using linear arrays (LAs) is a promising localization technology. Existing solutions are either designed for specific LA deployments, are computationally intensive, or rely on iterative methods that do not guarantee convergence. This letter presents a novel algebraic solution for 3D source localization using space-angle (SA) measurements from LAs. We propose a new formulation of the SA measurement equation that leads to a constrained weighted least squares (CWLS) problem. Solving it with Lagrangian multipliers, the optimal estimate is obtained after an error correction. The solution does not require a specific arrangement or placement of the LAs and effectively balances accuracy with computational efficiency. We analyze the performance and complexity of the proposed solution, demonstrating that it attains the Cramér-Rao lower bound (CRLB) in the small-error region under Gaussian noise with a low computational load. Simulations validate the analysis and confirm the superiority of the proposed solution over existing ones.
Vol. 32, pp. 1326-1330.
Citations: 0
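
The CWLS machinery can be illustrated on a generic problem: minimize a weighted least-squares cost subject to an equality constraint. The paper derives a closed-form solution via Lagrangian multipliers for its specific SA formulation; the sketch below only shows the generic setup solved numerically with SciPy, and the data model and constraint are invented for illustration, not the paper's:

```python
import numpy as np
from scipy.optimize import minimize

# Generic CWLS: minimize (A x - b)^T W (A x - b)  subject to  g(x) = 0.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))
x_true = np.array([2.0, -1.0, 0.5])
b = A @ x_true + 0.01 * rng.standard_normal(20)
W = np.eye(20)                                    # weights (inverse noise covariance)

cost = lambda x: (A @ x - b) @ W @ (A @ x - b)
con = {"type": "eq", "fun": lambda x: x[0] + x[1] + x[2] - 1.5}   # toy equality constraint

res = minimize(cost, x0=np.zeros(3), constraints=[con], method="SLSQP")
print(res.x)                                      # close to x_true, which satisfies the constraint
```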
A Distributed Penalty-Like Function Approach for the Nonconvex Constrained Optimization Problem
IF 3.2 | CAS Zone 2 (Engineering & Technology)
IEEE Signal Processing Letters | Pub Date: 2025-03-13 | DOI: 10.1109/LSP.2025.3551202
Authors: Xiasheng Shi; Darong Huang; Changyin Sun
Abstract: This letter addresses distributed nonconvex constrained optimization problems in which both the local cost function and the inequality constraint function are nonconvex. First, the global nonlinear equality constraint is added to the global cost function via a penalty-like function method. Then, based on the consensus technique for multi-agent systems, the global nonlinear equality constraint is estimated through a distributed nonlinear consensus scheme within finite time. Second, the local inequality constraint is handled with an adaptive penalty factor. Third, the optimal solution is attained by employing the gradient of the augmented Lagrangian function. The stability analysis is performed using Lyapunov theory. Finally, a simulation case on the economic dispatch problem in smart grids illustrates the theoretical results.
Vol. 32, pp. 1316-1320.
Citations: 0
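
The penalty-like idea of folding an equality constraint into the cost can be shown with a plain centralized quadratic-penalty loop. The paper's method is distributed, with consensus-based estimation of the global constraint and an adaptive factor for the inequality constraint; none of that is reproduced in this minimal sketch.

```python
import numpy as np

def quadratic_penalty(f_grad, g, g_grad, x0, rho=1.0, rho_mult=3.0,
                      outer=4, inner=2000, lr=1e-2):
    """Approximately minimize f(x) subject to g(x) = 0 by gradient descent on
    f(x) + (rho/2) * g(x)**2, increasing rho after each outer round."""
    x = np.asarray(x0, dtype=float)
    for _ in range(outer):
        for _ in range(inner):
            x = x - lr * (f_grad(x) + rho * g(x) * g_grad(x))
        rho *= rho_mult
    return x

# toy nonconvex cost f(x) = x0**4 - x0**2 + x1**2 with constraint x0 + x1 = 1
f_grad = lambda x: np.array([4 * x[0]**3 - 2 * x[0], 2 * x[1]])
g = lambda x: x[0] + x[1] - 1.0
g_grad = lambda x: np.array([1.0, 1.0])
print(quadratic_penalty(f_grad, g, g_grad, x0=[0.5, 0.5]))
```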
Enhanced Swin Transformer and Edge Spatial Attention for Remote Sensing Image Semantic Segmentation
IF 3.2 | CAS Zone 2 (Engineering & Technology)
IEEE Signal Processing Letters | Pub Date: 2025-03-12 | DOI: 10.1109/LSP.2025.3550858
Authors: Fuxiang Liu; Zhiqiang Hu; Lei Li; Hanlu Li; Xinxin Liu
Abstract: Combining convolutional neural networks (CNNs) and transformers is a crucial direction in remote sensing image semantic segmentation. However, because the two differ in spatial information focus and feature extraction, existing feature transfer and fusion strategies do not effectively integrate their respective advantages. To address these issues, we propose a CNN-transformer hybrid network for precise remote sensing image semantic segmentation. We propose a novel Swin Transformer block to optimize feature extraction and enable the model to handle remote sensing images of arbitrary size. Additionally, we design an Edge Spatial Attention module that focuses attention on local edge structures, effectively integrating global features and local details and facilitating efficient information flow between the Transformer encoder and the CNN decoder. Finally, a multi-scale convolutional decoder fully leverages both the global information from the Transformer and the local features from the CNN, leading to accurate segmentation results. Our network achieves state-of-the-art performance on the Vaihingen and Potsdam datasets, reaching mIoU and F1 scores of 67.37% and 79.82%, and 72.39% and 83.68%, respectively.
Vol. 32, pp. 1296-1300.
Citations: 0
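
A rough sense of "edge spatial attention" (re-weighting feature maps by an edge-magnitude map so the decoder attends to local edge structure) can be given with a Sobel-based stand-in. The paper's module is learned; the fixed Sobel operator and residual re-weighting below are assumptions for illustration only.

```python
import numpy as np
from scipy import ndimage

def edge_spatial_attention(feat, gray):
    """feat: (C, H, W) feature maps; gray: (H, W) grayscale image.
    Builds a [0, 1] edge-magnitude map and uses it to re-weight the features."""
    gx = ndimage.sobel(gray, axis=1)
    gy = ndimage.sobel(gray, axis=0)
    edge = np.hypot(gx, gy)
    attn = edge / (edge.max() + 1e-8)
    return feat * (1.0 + attn[None, :, :])       # residual-style emphasis on edge regions

feat = np.random.rand(16, 64, 64)
gray = np.random.rand(64, 64)
out = edge_spatial_attention(feat, gray)
```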