IEEE Signal Processing Letters: Latest Articles

CKFNet: Neural Network Aided Cubature Kalman Filtering
IF 3.9 · CAS Tier 2 · Engineering & Technology
IEEE Signal Processing Letters Pub Date : 2025-08-18 DOI: 10.1109/LSP.2025.3599708
Jinhui Hu;Haiquan Zhao;Yi Peng
Abstract: The cubature Kalman filter (CKF), while theoretically rigorous for nonlinear estimation, often suffers performance degradation due to model-environment mismatches in practice. To address this limitation, we propose CKFNet, a hybrid architecture that synergistically integrates recurrent neural networks (RNNs) with the CKF framework while preserving its cubature principles. Unlike conventional model-driven approaches, CKFNet embeds RNN modules in the prediction phase to dynamically adapt to unmodeled uncertainties, effectively reducing cumulative error propagation through temporal noise correlation learning. Crucially, the architecture maintains the CKF's analytical interpretability via constrained optimization of cubature point distributions. Numerical simulation experiments confirm that the proposed CKFNet exhibits superior accuracy and robustness compared to conventional model-based methods and existing KalmanNet algorithms.
Volume 32, pp. 3455-3459. Citations: 0
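The cubature rule at the heart of the CKF is standard and easy to sketch: the prediction step propagates 2n equally weighted points drawn from the columns of the covariance square root. The sketch below is the textbook rule, not code from the paper; CKFNet's RNN correction (per the abstract) would sit on top of this prediction step.

```python
import numpy as np

def cubature_points(x, P):
    """Generate the 2n cubature points used by the CKF prediction step.

    x : (n,) state mean; P : (n, n) state covariance.
    Returns points of shape (2n, n), each carrying equal weight 1/(2n).
    """
    n = x.size
    S = np.linalg.cholesky(P)                                  # covariance square root
    xi = np.sqrt(n) * np.concatenate([np.eye(n), -np.eye(n)])  # (2n, n) unit directions
    return x + xi @ S.T

# Propagating these points through the dynamics and re-averaging yields the
# predicted mean and covariance.
x = np.array([1.0, 0.0])
P = np.eye(2) * 0.1
pts = cubature_points(x, P)
assert pts.shape == (4, 2)
assert np.allclose(pts.mean(axis=0), x)                  # points average to the mean
assert np.allclose((pts - x).T @ (pts - x) / 4, P)       # and reproduce the covariance
```

The two closing assertions verify the defining property of the cubature rule: the point set is an exact quadrature for the first two moments of the Gaussian.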
Tuning-Free Online Robust Principal Component Analysis Through Implicit Regularization
IF 3.9 · CAS Tier 2 · Engineering & Technology
IEEE Signal Processing Letters Pub Date : 2025-08-18 DOI: 10.1109/LSP.2025.3599784
Lakshmi Jayalal;Gokularam Muthukrishnan;Sheetal Kalyani
Abstract: The performance of the Online Robust Principal Component Analysis (OR-PCA) technique depends heavily on optimal tuning of its explicit regularizers. This tuning is dataset-sensitive and often impractical to optimize in real-world scenarios. We aim to remove the dependency on these tuning parameters by using implicit regularization. To this end, we develop an approach that integrates the implicit regularization properties of various gradient descent methods to estimate sparse outliers and low-dimensional representations in a streaming setting, a non-trivial extension of existing techniques. A key novelty lies in the design of a new parameterization for matrix estimation in OR-PCA. Our method incorporates three different versions of modified gradient descent that separately yet naturally encourage sparsity and low-rank structure in the data. Experimental results on synthetic and real-world video datasets demonstrate that the proposed method, named Tuning-Free OR-PCA (TF-ORPCA), outperforms existing OR-PCA methods and scales better to large datasets.
Volume 32, pp. 3360-3364. Citations: 0
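The implicit-regularization phenomenon the abstract relies on can be demonstrated in a few lines: over-parameterize a vector as u⊙u − v⊙v and run plain gradient descent from a small initialization, and the iterates are biased toward sparse solutions with no explicit penalty or tuning parameter. This is a generic illustration of the effect, not the paper's parameterization, which is designed for streaming matrix estimation in OR-PCA.

```python
import numpy as np

# Sparse ground truth observed with small noise.
rng = np.random.default_rng(0)
n = 100
s_true = np.zeros(n)
s_true[:5] = [5.0, -3.0, 4.0, -6.0, 2.0]
y = s_true + rng.normal(0.0, 0.01, n)

# Over-parameterize s = u*u - v*v; gradient descent on the plain
# least-squares loss 0.5*||u*u - v*v - y||^2 from a small init alpha
# implicitly drives s toward a sparse solution (no L1 term anywhere).
alpha, lr = 1e-3, 0.01
u = np.full(n, alpha)
v = np.full(n, alpha)
for _ in range(3000):
    r = u * u - v * v - y               # residual of the current estimate
    u, v = u - lr * 2 * r * u, v + lr * 2 * r * v
s_hat = u * u - v * v

# Large entries escape the small init quickly and are recovered;
# entries carrying only noise remain near alpha^2.
assert np.abs(s_hat[:5] - y[:5]).max() < 0.05
assert np.abs(s_hat[5:]).max() < 1e-2
```

The small initialization `alpha` plays the role the explicit regularization weight would otherwise play, which is exactly what makes the approach tuning-free in spirit.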
SCAN: Selective Contrastive Learning Against Noisy Data for Acoustic Anomaly Detection
IF 3.9 · CAS Tier 2 · Engineering & Technology
IEEE Signal Processing Letters Pub Date : 2025-08-18 DOI: 10.1109/LSP.2025.3599796
Zhaoyi Liu;Yuanbo Hou;Wenwu Wang;Sam Michiels;Danny Hughes
Abstract: Acoustic Anomaly Detection (AAD) has gained significant attention for the detection of suspicious activities or faults. Contrastive learning-based unsupervised AAD has outperformed traditional models on academic datasets; however, its training predominantly relies on datasets containing only normal samples. In real industrial settings, a dataset of normal samples can still be corrupted by abnormal samples. Handling such noisy data is a crucial challenge, yet it remains largely unsolved. To address this issue, this letter proposes a Selective Contrastive learning framework Against Noisy data (SCAN) to mitigate the adverse effects of training the AAD model with anomaly-corrupted data. Specifically, SCAN progressively constructs confident sample pairs based on the Mahalanobis distance derived from the geometric median. These selected pairs are then integrated into the contrastive learning framework to enhance representation learning and model robustness. Extensive experiments under varying levels of label noise (i.e., the proportion of mislabeled abnormal samples in the training data) demonstrate that SCAN outperforms state-of-the-art (SOTA) AAD methods on the real-world industrial datasets DCASE2022 and DCASE2024 Task2.
Volume 32, pp. 3355-3359. Citations: 0
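The pair-selection step named in the abstract, a Mahalanobis distance anchored at the geometric median, can be sketched directly. The geometric median is computed with the classical Weiszfeld iteration; the keep ratio and covariance regularization below are illustrative choices, not values from the paper.

```python
import numpy as np

def geometric_median(X, iters=100, eps=1e-8):
    """Weiszfeld iteration: a robust center of the embeddings X (m, d)."""
    mu = X.mean(axis=0)
    for _ in range(iters):
        d = np.maximum(np.linalg.norm(X - mu, axis=1), eps)
        w = 1.0 / d
        mu = (w[:, None] * X).sum(axis=0) / w.sum()
    return mu

def confident_mask(X, keep=0.9):
    """Flag the `keep` fraction of samples closest to the geometric median
    under the Mahalanobis distance; the rest are treated as likely
    anomaly-corrupted and excluded from contrastive pair construction."""
    mu = geometric_median(X)
    cov = np.cov(X.T) + 1e-6 * np.eye(X.shape[1])   # regularized covariance
    P = np.linalg.inv(cov)
    diff = X - mu
    d2 = np.einsum('ij,jk,ik->i', diff, P, diff)    # squared Mahalanobis distance
    return d2 <= np.quantile(d2, keep)

# 95 nominal embeddings plus 5 corrupted ones far from the cluster.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (95, 2)), 20.0 + rng.normal(0, 0.1, (5, 2))])
mask = confident_mask(X)
assert not mask[95:].any()       # all corrupted samples rejected
assert mask[:95].mean() > 0.9    # most nominal samples retained
```

The geometric median, unlike the mean, is barely moved by a small fraction of corrupted points, which is what makes the distance ranking reliable under label noise.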
Man-Made Target Scattering Characterization and Recognition via Null-Pol Modulation Learning
IF 3.9 · CAS Tier 2 · Engineering & Technology
IEEE Signal Processing Letters Pub Date : 2025-08-18 DOI: 10.1109/LSP.2025.3599791
Jie Deng;Wei Wang;Siwei Chen;Sinong Quan;Jun Zhang
Abstract: Man-made targets subjected to different polarized waves produce different depolarization effects, and these differences contain abundant information beneficial for recognition. However, traditional manually designed features struggle to fully exploit polarimetric information for scattering characterization. This letter proposes a target scattering characteristic learning network based on the Null-Pol response, which adaptively extracts the proportions of typical scattering mechanisms from mixed scattering mechanisms. First, leveraging polarimetric modulation, the Discrete Null-Pol Synthesis Pattern (DNSP) is designed to fully reveal the differences among target scattering mechanisms. On this basis, we propose an end-to-end scattering inversion network module that learns the DNSPs of different typical targets under scattering ambiguity conditions, obtaining the polarimetric scattering contributions of 10 typical structures. Finally, we conduct structure recognition experiments to demonstrate the effectiveness of the proposed module. The results show that the proposed method can effectively characterize scattering behavior and significantly improve the performance of target structure recognition.
Volume 32, pp. 3335-3339. Citations: 0
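The co-polarization null that the DNSP builds on is classical polarimetry: for a scattering matrix S, the co-polarized power received under a polarization state h is |hᵀSh|², and the null-pol states are the polarizations where it vanishes. A minimal sketch of standard polarization synthesis (not the paper's learned DNSP):

```python
import numpy as np

def jones_vector(psi, chi):
    """Unit polarization (Jones) vector from orientation angle psi and
    ellipticity angle chi, both in radians."""
    return np.array([
        np.cos(psi) * np.cos(chi) - 1j * np.sin(psi) * np.sin(chi),
        np.sin(psi) * np.cos(chi) + 1j * np.cos(psi) * np.sin(chi),
    ])

def copol_power(S, psi, chi):
    """Co-polarized received power for scattering matrix S under the
    transmit state (psi, chi): P = |h^T S h|^2. Null-pol states are the
    (psi, chi) where this power vanishes."""
    h = jones_vector(psi, chi)
    return np.abs(h @ S @ h) ** 2

# Classical checks: circular polarization is a co-pol null of a trihedral
# (S = I), while 45-degree linear is a co-pol null of a dihedral.
assert copol_power(np.eye(2), 0.0, np.pi / 4) < 1e-12
assert abs(copol_power(np.eye(2), 0.0, 0.0) - 1.0) < 1e-12
assert copol_power(np.diag([1.0, -1.0]), np.pi / 4, 0.0) < 1e-12
```

Sweeping (psi, chi) over the polarization sphere and recording where this power dips is, in essence, the modulation pattern the letter discretizes and learns from.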
L3FMamba: Low-Light Light Field Image Enhancement With Prior-Injected State Space Models
IF 3.9 · CAS Tier 2 · Engineering & Technology
IEEE Signal Processing Letters Pub Date : 2025-08-18 DOI: 10.1109/LSP.2025.3599733
Deyang Liu;Shizheng Li;Zeyu Xiao;Ping An;Caifeng Shan
Abstract: In this letter, we address the problem of low-light light field (LF) image enhancement, where spatial details and angular coherence are severely degraded by noise and insufficient illumination. Existing methods often rely on local aggregation or naive view stacking, which fail to capture global illumination and long-range spatial-angular correlations. To overcome these limitations, we propose L3FMamba, a lightweight enhancement method that integrates the Retinex and Atmospheric Scattering models with dark, bright, and average channel priors for robust illumination decomposition. Moreover, we incorporate a state space model to capture non-local spatial-angular dependencies, enabling effective propagation of global context across views. By combining physics-inspired priors with structured modeling, L3FMamba achieves accurate illumination correction and fine-detail preservation with minimal parameters. Experiments show that L3FMamba outperforms the state of the art in quality.
Volume 32, pp. 3270-3274. Citations: 0
Loss-Aware Data Augmentation With Dynamic Trimming and Weighting for Underwater Acoustic Target Classification
IF 3.9 · CAS Tier 2 · Engineering & Technology
IEEE Signal Processing Letters Pub Date : 2025-08-18 DOI: 10.1109/LSP.2025.3599789
Mingmin Zeng;Xiangyang Zeng;Qing Huang;Da Zhang
Abstract: The limited availability of labeled data presents a significant challenge for underwater acoustic target recognition (UATR), often resulting in model overfitting and poor generalization. Data augmentation (DA) has been a major strategy for increasing effective data diversity, yet prevailing methods often lack explicit mechanisms to discriminate the informational value of augmented samples. This letter presents two DA approaches, Loss-Aware Trimming Augmentation (LATA) and Learnable Weight-Based Augmentation (LWBA), to enhance the UATR task under restricted annotated-data scenarios. LATA adaptively prunes both excessively difficult and trivial augmented samples based on real-time loss evaluation, while LWBA introduces sample-wise learnable weights to balance the influence of each augmentation during model training. Experiments conducted on the public DeepShip dataset validate the superiority of the proposed framework, with an average improvement of 3.44% in accuracy and 3.49% in F1-score over the baselines.
Volume 32, pp. 3295-3299. Citations: 0
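The loss-aware trimming idea is easy to prototype: score each augmented sample by its current loss and drop both quantile tails before averaging for backpropagation. The quantile bounds below are illustrative fixed values; LATA's actual pruning is adaptive during training.

```python
import numpy as np

def loss_aware_trim(losses, low_q=0.1, high_q=0.9):
    """Drop augmented samples whose loss is extreme: trivially easy
    (bottom quantile) or excessively hard (top quantile), then average
    only the retained per-sample losses."""
    lo, hi = np.quantile(losses, [low_q, high_q])
    keep = (losses >= lo) & (losses <= hi)
    return losses[keep].mean(), keep

# One batch of per-sample losses: one very hard sample (5.0),
# one trivial sample (0.01), eight ordinary ones.
losses = np.array([5.0, 1.0, 1.1, 0.9, 1.2, 1.05, 0.95, 1.15, 0.85, 0.01])
trimmed_mean, keep = loss_aware_trim(losses)
assert keep.sum() == 8
assert not keep[0] and not keep[-1]          # both extremes pruned
assert abs(trimmed_mean - 1.025) < 1e-9      # mean of the 8 retained losses
```

In a training loop the boolean mask would gate which samples contribute gradients, so the pruning costs nothing beyond the per-sample loss the batch already computes.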
Time Series Anomaly Detection for Natural Gas Pipeline Leakage
IF 3.9 · CAS Tier 2 · Engineering & Technology
IEEE Signal Processing Letters Pub Date : 2025-08-14 DOI: 10.1109/LSP.2025.3599012
Xuguang Li;Zheng Dong;Haobin Zhang
Abstract: Natural gas pipelines play a crucial role in energy transportation, so accurate detection of leak anomalies is vital for safety. Supervisory Control and Data Acquisition (SCADA) systems are widely utilized in the pipeline industry and store extensive historical data with time-series characteristics. In this paper, we present a masked Transformer detection model to address the issue of sparse leak labels in SCADA systems and overcome the limitations of neural networks in modeling long time series. The model incorporates an encoder-only Transformer with a masking mechanism. We validated its effectiveness using real natural gas pipeline data, and the results showed that it can accurately identify pipeline leak anomalies. In particular, compared to other models, the masked Transformer improves accuracy, recall, precision, and F1-score by 1.4%, 2.5%, 0.3%, and 1.4%, respectively, in real pipeline scenarios. Overall, the masked Transformer model excels at detecting natural gas pipeline leak anomalies.
Volume 32, pp. 3330-3334. Citations: 0
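The masking mechanism the abstract pairs with the encoder-only Transformer hides random time steps that the model must reconstruct, which is how the model learns from unlabeled SCADA history despite sparse leak labels. The masking utility itself is a few lines; the ratio and zero-fill choice here are illustrative, not details from the paper.

```python
import numpy as np

def random_mask(x, ratio=0.25, rng=None):
    """Zero out a random fraction of time steps in a series x (T, d) and
    return the masked copy plus the boolean mask; a reconstruction loss
    would then be evaluated only on the masked positions."""
    rng = rng or np.random.default_rng()
    mask = rng.random(x.shape[0]) < ratio
    x_masked = x.copy()
    x_masked[mask] = 0.0
    return x_masked, mask

x = np.arange(20.0).reshape(10, 2)   # toy SCADA-like series: 10 steps, 2 sensors
xm, m = random_mask(x, ratio=0.3, rng=np.random.default_rng(42))
assert (xm[m] == 0.0).all()          # masked steps zeroed
assert (xm[~m] == x[~m]).all()       # unmasked steps untouched
```

At inference time, a window whose reconstruction error is unusually high is flagged as anomalous, since the model has only learned to reconstruct normal operating behavior.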
Enhancing Modulation Classification via Diffusion Transformers for Drone Video Signal Processing
IF 3.9 · CAS Tier 2 · Engineering & Technology
IEEE Signal Processing Letters Pub Date : 2025-08-14 DOI: 10.1109/LSP.2025.3599110
Insup Lee;Khalifa Alteneiji;Mohammed Alghfeli
Abstract: Reliable drone video signal processing depends on precise identification of the modulation type to ensure effective demodulation. Automatic modulation classification (AMC) plays a key role in this process by extracting meaningful features from complex I/Q data. Although deep learning-based approaches have advanced AMC, two challenges remain: (i) limited support for drone-relevant modulation types and (ii) the need for stable, high-quality generative models for robust data augmentation. This letter proposes the adoption of diffusion transformers (DiT), which capture intricate signal characteristics in diverse drone communication scenarios, including long-range communications, mobile drone networks, and high-data-rate video transmission. Experimental results demonstrate that DiT improves both the accuracy and robustness of AMC in drone video signal processing scenarios.
Volume 32, pp. 3325-3329. Citations: 0
Relearning a Downsampled Low-Resolution Representation for Image Super-Resolution
IF 3.9 · CAS Tier 2 · Engineering & Technology
IEEE Signal Processing Letters Pub Date : 2025-08-14 DOI: 10.1109/LSP.2025.3599109
Zhuang Zhou
Abstract: Super-resolution (SR)-based image transmission systems typically employ a framework in which the sender transmits a downsampled low-resolution (LR) image to save transmission bandwidth, and the receiver then runs an SR module to super-resolve it to its original resolution. While existing work primarily focuses on exploiting various SR modules to improve the super-resolved images, the performance of each SR module inevitably encounters an upper limit imposed by the module itself. In this letter, we propose a novel relearning method to overcome this limitation. Specifically, we investigate an adversarial relearning network that, upon receiving a downsampled LR image, generates a new adversarial LR representation that is better adapted to the super-resolving pipeline of the SR module. This adaptation enhances the suitability of the LR representation for the given SR module, creating the perceptual effect of surpassing the SR module's performance ceiling and ultimately leading to higher-quality super-resolved images. We further introduce a cycle-consistency loss to guide the adversarial relearning process based solely on the LR image itself, without requiring any ground-truth supervision, since the original-resolution image is unavailable at the receiver. Extensive experiments validate the performance of the proposed method in terms of quantitative PSNR/SSIM/LPIPS scores and the visual quality of the super-resolved images.
Volume 32, pp. 3385-3389. Citations: 0
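The cycle-consistency loss the letter relies on needs no ground truth: super-resolve the relearned LR image, downsample the result back with the system's known downsampler, and compare against the original LR input. A minimal numeric sketch, with average pooling standing in for the actual downsampler and placeholder callables for the relearning and SR networks:

```python
import numpy as np

def downsample(img, s=2):
    """Average-pool downsampling (a stand-in for the sender's downsampler)."""
    h, w = img.shape
    return img[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def cycle_consistency_loss(lr, relearn, sr, s=2):
    """MSE between the original LR image and the re-downsampled output of
    sr(relearn(lr)); no original-resolution reference is needed."""
    return np.mean((downsample(sr(relearn(lr)), s) - lr) ** 2)

# Sanity check with trivial stand-ins: identity relearning and
# nearest-neighbor 2x upsampling make the cycle exact, so the loss is zero.
lr = np.arange(16.0).reshape(4, 4)
sr = lambda x: np.kron(x, np.ones((2, 2)))   # repeat each pixel in a 2x2 block
loss = cycle_consistency_loss(lr, lambda x: x, sr)
assert loss < 1e-12
```

In training, the gradient of this loss flows through the SR module into the relearning network, steering the adversarial LR representation toward inputs the SR module handles well.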
MOS-GAN: Mean Opinion Score GAN for Unsupervised Speech Enhancement
IF 3.9 · CAS Tier 2 · Engineering & Technology
IEEE Signal Processing Letters Pub Date : 2025-08-14 DOI: 10.1109/LSP.2025.3599453
Wenbin Jiang;Fei Wen;Kai Yu
Abstract: Deep learning-based speech enhancement methods are predominantly trained in a supervised manner, relying on synthesized paired noisy-to-clean data. However, acquiring clean speech in real-world scenarios is often difficult or even impractical. To overcome this limitation, we propose MOS-GAN, a novel unsupervised learning framework for speech enhancement that relies solely on observed noisy speech. Specifically, we leverage generative adversarial networks (GANs), where the generator (the enhancement model) is optimized to maximize the mean opinion score (MOS) guided by a discriminator, while the discriminator (a non-intrusive speech quality metric model) is optimized to predict the MOS. However, without reference clean speech, directly training MOS-GAN is unstable and cannot achieve satisfactory performance. To address this issue, we further incorporate an unsupervised prior loss that substantially improves training performance. Experimental results on benchmarks demonstrate that the proposed method, which requires neither clean data nor teacher models, performs on par with leading self-supervised and unsupervised approaches.
Volume 32, pp. 3465-3469. Citations: 0