IEEE Signal Processing Letters: Latest Articles

Moving Average-Based Variable Projection for Separable Nonlinear Problems
IF 3.2, CAS Region 2 (Engineering & Technology)
IEEE Signal Processing Letters, Pub Date: 2025-04-21, DOI: 10.1109/LSP.2025.3563133
Peng Xue; Min Gan; Fang Yuan; Guang-Yong Chen; C. L. Philip Chen
Abstract: The identification of separable nonlinear models, prevalent in tasks such as signal analysis, image processing, time series analysis, and machine learning, presents a non-convex optimization challenge that necessitates the development of efficient identification algorithms. The Variable Projection (VP) algorithm has proven quite effective for addressing these problems; however, traditional VP methods that rely on the Hessian matrix and its inverse are highly time-consuming and unsuitable for complex, large-scale applications. This letter introduces a novel approach that employs exponential moving averages of the gradient and of the gradient estimation bias to indirectly estimate the curvature of the objective landscape, yielding a Moving Average-based Variable Projection (MAVP) method. The proposed algorithm uses only gradient information and properly handles the coupling between different parameters during optimization, thereby achieving faster convergence. Numerical results on nonlinear time series analysis and image reconstruction demonstrate that the MAVP algorithm is both efficient and effective.
Volume 32, pp. 1900-1904
Citations: 0
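The core idea of the abstract above, replacing Hessian-based curvature with moving averages of gradient information inside a variable-projection loop, can be sketched in a few lines. The sketch below is not the authors' MAVP algorithm: it eliminates the linear coefficients of a separable model y ≈ Φ(θ)c by least squares and then applies an Adam-style update with exponential moving averages to the nonlinear parameters θ; the finite-difference gradient, step sizes, and the names mavp_sketch and vp_loss are illustrative assumptions.

```python
import numpy as np

def mavp_sketch(phi, y, theta0, lr=0.05, beta1=0.9, beta2=0.999,
                eps=1e-8, iters=500, h=1e-6):
    """Illustrative moving-average (Adam-style) variable projection.

    phi(theta) -> design matrix Phi; the linear coefficients are eliminated
    by least squares, the nonlinear parameters follow EMA-based updates.
    """
    theta = np.asarray(theta0, dtype=float)
    m = np.zeros_like(theta)          # EMA of the gradient
    v = np.zeros_like(theta)          # EMA of the squared gradient (curvature proxy)

    def vp_loss(th):
        Phi = phi(th)
        c, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # eliminate the linear part
        r = y - Phi @ c
        return 0.5 * r @ r

    for t in range(1, iters + 1):
        # finite-difference gradient of the projected functional (illustration only)
        g = np.array([(vp_loss(theta + h * e) - vp_loss(theta - h * e)) / (2 * h)
                      for e in np.eye(theta.size)])
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)
        v_hat = v / (1 - beta2 ** t)
        theta -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta
```

For instance, for a sum-of-exponentials model one could pass phi = lambda th: np.exp(-np.outer(t_grid, th)) for a fixed sampling grid t_grid.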
Multilevel Representation Disentanglement Framework for Multimodal Sentiment Analysis
IF 3.2, CAS Region 2 (Engineering & Technology)
IEEE Signal Processing Letters, Pub Date: 2025-04-21, DOI: 10.1109/LSP.2025.3562827
Nan Jia; Zicong Bai; Tiancheng Xiong; Mingyang Guo
Abstract: Multimodal Sentiment Analysis (MSA) has gained wide attention in many fields in recent years. However, heterogeneity and redundant information among different signals seriously affect the extraction and fusion of sentiment features. To address this challenge, we propose a Multilevel Representation Disentanglement Framework (MRDF) to achieve effective modality fusion and produce refined joint multimodal representations. Specifically, we design a refined semantic decomposition module that learns task-shared representations and modality-exclusive representations through crossmodal translation and task semantic reconstruction. Furthermore, we propose a contrastive learning-based distribution alignment mechanism and an adversarial learning-based distribution alignment strategy, combining contrastive and adversarial learning paradigms to further align the disentangled task-shared representations. Experimental results show that the MRDF framework significantly outperforms existing state-of-the-art methods on the MOSI and MOSEI benchmarks.
Volume 32, pp. 1895-1899
Citations: 0
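The contrastive distribution-alignment ingredient mentioned in the abstract above can be illustrated with a generic InfoNCE-style loss that pulls together the task-shared representations of the same sample from two modalities and pushes apart those of different samples. This is a generic sketch, not the MRDF objective; the modality names, temperature, and batch layout are assumptions.

```python
import numpy as np

def contrastive_alignment_loss(z_text, z_audio, tau=0.1):
    """Generic InfoNCE-style alignment between task-shared representations
    of two modalities (rows are samples); not the exact MRDF loss."""
    # L2-normalize so that dot products are cosine similarities
    z_text = z_text / np.linalg.norm(z_text, axis=1, keepdims=True)
    z_audio = z_audio / np.linalg.norm(z_audio, axis=1, keepdims=True)
    sim = z_text @ z_audio.T / tau                  # (N, N) similarity matrix
    # positives sit on the diagonal (same sample, different modality)
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```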
Beyond $R$-Barycenters: An Effective Averaging Method on Stiefel and Grassmann Manifolds
IF 3.2, CAS Region 2 (Engineering & Technology)
IEEE Signal Processing Letters, Pub Date: 2025-04-21, DOI: 10.1109/LSP.2025.3562735
Florent Bouchard; Nils Laurent; Salem Said; Nicolas Le Bihan
Abstract: In this paper, the issue of averaging data on a manifold is addressed. While the Fréchet mean resulting from Riemannian geometry appears ideal, it is unfortunately not always available and is often computationally very expensive. To overcome this, $R$-barycenters have been proposed and successfully applied to Stiefel and Grassmann manifolds. However, $R$-barycenters still suffer severe limitations, as they rely on iterative algorithms and complicated operators. We propose simpler, yet efficient, barycenters that we call $RL$-barycenters. We show that, in the setting relevant to most applications, our framework yields astonishingly simple barycenters: arithmetic means projected onto the manifold. We apply this approach to the Stiefel and Grassmann manifolds. On simulated data, our approach is competitive with existing averaging methods while being computationally cheaper.
Volume 32, pp. 1950-1954
Citations: 0
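The headline result in the abstract above, an arithmetic mean projected onto the manifold, is easy to illustrate for the Stiefel manifold St(n, p): average the matrices entrywise, then map the mean to the nearest matrix with orthonormal columns via the polar factor of its SVD. The sketch below shows only that projection step and assumes the averaged matrix has full rank; it is not the full $RL$-barycenter construction from the letter.

```python
import numpy as np

def stiefel_projected_mean(mats):
    """Arithmetic mean of Stiefel matrices projected back onto St(n, p)
    via the polar factor (nearest orthonormal-column matrix in Frobenius norm)."""
    M = np.mean(mats, axis=0)                     # entrywise arithmetic mean
    U, _, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ Vt                                 # polar factor: orthonormal columns
```

For the Grassmann manifold the analogous step would act on averaged orthogonal projectors, e.g. by keeping the top-p eigenvectors of the averaged projector; that variant is not shown here.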
Enhancing Tumor Edge Consistency in Multimodal MRI Synthesis for Improved Glioma Segmentation
IF 3.2, CAS Region 2 (Engineering & Technology)
IEEE Signal Processing Letters, Pub Date: 2025-04-21, DOI: 10.1109/LSP.2025.3562824
Can Chang; Li Yao; Xiaojie Zhao
Abstract: Precise segmentation of glioma subregions using multimodal MRI is crucial for accurate diagnosis and effective treatment. However, the absence of certain MRI modalities in clinical settings often leads to incomplete information, necessitating cross-modality synthesis to fill the gaps. A significant challenge in this synthesis is the blurring of tumor subregion boundaries, which affects subsequent segmentation accuracy. Existing methods, while improving boundary clarity, fail to ensure consistent depiction across different modalities due to varying contrasts and sensitive areas. To address these issues, we propose CSEC-Net, a novel tumor-aware synthesis model that enhances tumor edge consistency through Specific Contrast extraction and Edge Consistency enhancement. Our model employs a Contrast-Specific Prototype Learning (CS-PL) method to extract contrast-specific prototype features and an Edge Consistency Contrast Learning (EC-CL) method to improve tumor edge pixel sampling and feature learning. This approach ensures consistent and clear tumor edge depiction across different modalities, significantly improving multimodal MRI synthesis and tumor segmentation accuracy.
Volume 32, pp. 2060-2064
Citations: 0
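The "tumor edge pixel sampling" ingredient from the abstract above can be illustrated with a simple mask-based edge extraction: given a binary tumor sub-region mask, take the foreground pixels that have at least one background 4-neighbor, and compare a synthesized and a reference modality only at those pixels. This is a generic illustration, not the EC-CL loss from the letter; the erosion-style edge definition and the L1 comparison are assumptions.

```python
import numpy as np

def tumor_edge_mask(mask):
    """One-pixel-wide boundary of a binary tumor mask: foreground pixels
    with at least one 4-neighbor outside the mask."""
    m = mask.astype(bool)
    padded = np.pad(m, 1, constant_values=False)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    return m & ~interior

def edge_consistency(synth, reference, mask):
    """Mean absolute difference restricted to the tumor edge pixels."""
    edge = tumor_edge_mask(mask)
    return float(np.abs(synth[edge] - reference[edge]).mean()) if edge.any() else 0.0
```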
TF-CorrNet: Leveraging Spatial Correlation for Continuous Speech Separation
IF 3.2, CAS Region 2 (Engineering & Technology)
IEEE Signal Processing Letters, Pub Date: 2025-04-21, DOI: 10.1109/LSP.2025.3562819
Ui-Hyeop Shin; Bon Hyeok Ku; Hyung-Min Park
Abstract: In general, multi-channel source separation has utilized inter-microphone phase differences (IPDs) concatenated with magnitude information in the time-frequency domain, or real and imaginary components stacked along the channel axis. However, the spatial information of a sound source is fundamentally contained in the "differences" between microphones, specifically in the correlation between them, while the power of each microphone also provides valuable information about the source spectrum, which is why the magnitude is also included. Therefore, we propose a network that directly leverages a correlation input with phase transform (PHAT)-$\beta$ to estimate the separation filter. In addition, the proposed TF-CorrNet processes the features alternately across the time and frequency axes as a dual-path strategy in terms of spatial information. Furthermore, we add a spectral module to model source-related direct time-frequency patterns for improved speech separation. Experimental results demonstrate that the proposed TF-CorrNet effectively separates speech sounds, showing high performance at low computational cost on the LibriCSS dataset.
Volume 32, pp. 1875-1879
Citations: 0
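A minimal sketch of the kind of input feature described above: per time-frequency bin, the pairwise cross-spectra between microphones carry the spatial "difference" information, and a phase-transform exponent β controls how much magnitude is retained (β = 1 keeps only phase, β = 0 keeps the raw cross-spectrum). The STFT layout and the choice of stacking real and imaginary parts are assumptions, not the letter's exact pipeline.

```python
import numpy as np
from itertools import combinations

def phat_beta_correlation(stft, beta=0.5, eps=1e-8):
    """Pairwise microphone correlation features with PHAT-beta weighting.

    stft: complex array of shape (M, F, T) -- M microphones, F bins, T frames.
    Returns a real array (P, 2, F, T) holding real/imag parts of the weighted
    cross-spectra for each of the P = M*(M-1)/2 microphone pairs.
    """
    M = stft.shape[0]
    feats = []
    for i, j in combinations(range(M), 2):
        cross = stft[i] * np.conj(stft[j])                 # cross-spectrum per TF bin
        weighted = cross / (np.abs(cross) + eps) ** beta   # PHAT-beta normalization
        feats.append(np.stack([weighted.real, weighted.imag]))
    return np.stack(feats)                                 # (P, 2, F, T)
```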
Efficient RGBT Tracking via Multi-Path Mamba Fusion Network
IF 3.2, CAS Region 2 (Engineering & Technology)
IEEE Signal Processing Letters, Pub Date: 2025-04-21, DOI: 10.1109/LSP.2025.3563123
Fanghua Hong; Wanyu Wang; Andong Lu; Lei Liu; Qunjing Wang
Abstract: RGBT tracking aims to fully exploit the complementary advantages of visible and infrared modalities to achieve robust tracking, so the design of the multimodal fusion network is crucial. However, existing methods typically adopt CNNs or Transformer networks to construct the fusion network, which makes it challenging to balance performance and efficiency. To overcome this issue, we introduce an innovative visual state space (VSS) model, represented by Mamba, for RGBT tracking. In particular, we design a novel multi-path Mamba fusion network that achieves robust multimodal fusion while maintaining linear overhead. First, we design a multi-path Mamba layer to sufficiently fuse the two modalities from both global and local perspectives. Second, to alleviate the issue of inadequate VSS modeling in the channel dimension, we introduce a simple yet effective channel swapping layer. Extensive experiments conducted on four public RGBT tracking datasets demonstrate that our method surpasses existing state-of-the-art trackers. Notably, our fusion method achieves higher tracking performance than the well-known Transformer-based fusion approach (TBSI), while reducing parameter count and computational cost by 92.8% and 80.5%, respectively.
Volume 32, pp. 1790-1794
Citations: 0
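The channel swapping layer mentioned in the abstract above can be illustrated very simply: exchange a fixed fraction of channels between the visible and thermal feature maps so that the following state-space blocks see mixed-modality information along the channel dimension. The half-and-half split, the (C, H, W) layout, and the function name are assumptions for illustration, not the exact layer from the letter.

```python
import numpy as np

def channel_swap(feat_rgb, feat_tir, ratio=0.5):
    """Swap the leading `ratio` fraction of channels between two modality
    feature maps of shape (C, H, W)."""
    c = int(feat_rgb.shape[0] * ratio)
    out_rgb, out_tir = feat_rgb.copy(), feat_tir.copy()
    out_rgb[:c], out_tir[:c] = feat_tir[:c].copy(), feat_rgb[:c].copy()
    return out_rgb, out_tir
```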
Performance Prediction of Hybrid Integration Detector for Radar Moderately Fluctuating Rayleigh Targets
IF 3.2, CAS Region 2 (Engineering & Technology)
IEEE Signal Processing Letters, Pub Date: 2025-04-21, DOI: 10.1109/LSP.2025.3562829
Hongying Zheng; Qilei Zhang; Yongsheng Zhang
Abstract: In this letter, we address the performance prediction of the hybrid integration detector for radar moderately fluctuating Rayleigh targets in thermal noise. First, the moderately fluctuating Rayleigh target model is defined as a generalization of the well-known Swerling I and Swerling II models using an exponential correlation function. Based on this, an exact closed-form expression of the detection probability for the hybrid integration detector is derived. In the extreme cases (correlation coefficient equal to 1 or 0), the derived expression reduces to the classical formulas, confirming its validity. Finally, numerical examples are presented to verify the effectiveness of the derived theoretical model and to analyze the choice of the optimal hybrid integration detector.
Volume 32, pp. 1920-1924
Citations: 0
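The target model in the abstract above is concrete enough to simulate: pulse-to-pulse complex amplitudes are zero-mean complex Gaussian (Rayleigh envelope) with correlation ρ^|i-j| between pulses, so ρ = 1 recovers Swerling I and ρ = 0 recovers Swerling II. The Monte Carlo sketch below estimates a detection probability for a hybrid scheme assumed to be coherent integration within sub-bursts followed by noncoherent summation across sub-bursts; the detector structure, SNR definition, and threshold are illustrative assumptions, not the letter's closed-form expression.

```python
import numpy as np

def simulate_pd(snr_db=10.0, rho=0.5, n_coh=4, n_noncoh=4,
                threshold=20.0, trials=20000, seed=None):
    """Monte Carlo detection probability for an exponentially correlated
    Rayleigh target under an assumed hybrid (coherent + noncoherent) scheme."""
    rng = np.random.default_rng(seed)
    n = n_coh * n_noncoh
    snr = 10 ** (snr_db / 10)
    # covariance of the complex target amplitude across pulses: snr * rho^|i-j|
    idx = np.arange(n)
    C = snr * rho ** np.abs(idx[:, None] - idx[None, :])
    L = np.linalg.cholesky(C + 1e-12 * np.eye(n))
    detections = 0
    for _ in range(trials):
        amp = L @ (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
        noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
        x = (amp + noise).reshape(n_noncoh, n_coh)
        stat = np.sum(np.abs(x.sum(axis=1)) ** 2)   # coherent within, noncoherent across
        detections += stat > threshold
    return detections / trials
```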
Decentralized Smoothing ADMM for Quantile Regression With Non-Convex Sparse Penalties
IF 3.2, CAS Region 2 (Engineering & Technology)
IEEE Signal Processing Letters, Pub Date: 2025-04-21, DOI: 10.1109/LSP.2025.3562828
Reza Mirzaeifard; Diyako Ghaderyan; Stefan Werner
Abstract: In the rapidly evolving internet-of-things (IoT) ecosystem, effective data analysis techniques are crucial for handling distributed data generated by sensors. Addressing the limitations of existing methods, such as the sub-gradient approach, which fails to distinguish effectively between active and non-active coefficients, this paper introduces the decentralized smoothing alternating direction method of multipliers (DSAD) for penalized quantile regression. Our method leverages non-convex sparse penalties such as the minimax concave penalty (MCP) and the smoothly clipped absolute deviation (SCAD), improving the identification and retention of significant predictors. DSAD incorporates a total variation norm within a smoothing ADMM framework, achieving consensus among distributed nodes and ensuring uniform model performance across disparate data sources. This approach overcomes traditional convergence challenges associated with non-convex penalties in decentralized settings. We present a convergence proof and extensive simulation results to validate the effectiveness of DSAD, demonstrating its superiority in achieving reliable convergence and enhancing estimation accuracy compared with prior methods.
Volume 32, pp. 1915-1919
Citations: 0
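Two ingredients named in the abstract above have standard forms that are easy to write down: the quantile (pinball) loss that defines quantile regression, and the minimax concave penalty (MCP) with its proximal operator (firm thresholding), which separates active from non-active coefficients more cleanly than a pure L1 penalty. These are the textbook definitions, shown as a reference sketch; the decentralized smoothing ADMM iterations themselves are not reproduced.

```python
import numpy as np

def pinball_loss(residual, tau):
    """Quantile (pinball) loss: tau-weighted positive residuals,
    (1 - tau)-weighted negative residuals."""
    return np.mean(np.maximum(tau * residual, (tau - 1) * residual))

def mcp_penalty(beta, lam, gamma=3.0):
    """Minimax concave penalty, evaluated elementwise and summed."""
    a = np.abs(beta)
    inner = lam * a - a ** 2 / (2 * gamma)          # |beta| <= gamma * lam
    outer = gamma * lam ** 2 / 2                    # |beta| >  gamma * lam
    return np.sum(np.where(a <= gamma * lam, inner, outer))

def mcp_prox(z, lam, gamma=3.0):
    """Proximal operator of the MCP (firm thresholding), requires gamma > 1."""
    a = np.abs(z)
    shrunk = np.sign(z) * (a - lam) / (1 - 1 / gamma)
    return np.where(a <= lam, 0.0, np.where(a <= gamma * lam, shrunk, z))
```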
LYT-NET: Lightweight YUV Transformer-Based Network for Low-Light Image Enhancement
IF 3.2, CAS Region 2 (Engineering & Technology)
IEEE Signal Processing Letters, Pub Date: 2025-04-21, DOI: 10.1109/LSP.2025.3563125
Alexandru Brateanu; Raul Balmez; Adrian Avram; Ciprian Orhei; Cosmin Ancuti
Abstract: This letter introduces LYT-Net, a novel lightweight transformer-based model for low-light image enhancement. LYT-Net consists of several layers and detachable blocks, including our novel blocks, the Channel-Wise Denoiser (CWD) and the Multi-Stage Squeeze & Excite Fusion (MSEF), along with the traditional Transformer block, Multi-Headed Self-Attention (MHSA). In our method we adopt a dual-path approach, treating the chrominance channels $U$ and $V$ and the luminance channel $Y$ as separate entities to help the model better handle illumination adjustment and corruption restoration. Our comprehensive evaluation on established LLIE datasets demonstrates that, despite its low complexity, our model outperforms recent LLIE methods.
Volume 32, pp. 2065-2069
Citations: 0
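The dual-path idea in the abstract above, treating luminance $Y$ separately from chrominance $U$ and $V$, starts from a plain colour-space conversion. The sketch below uses the BT.601 RGB-to-YUV matrix and returns the two channel groups that would feed the luminance and chrominance branches; the network blocks themselves (CWD, MSEF, MHSA) are not reproduced, and the choice of BT.601 coefficients is an assumption.

```python
import numpy as np

# BT.601 RGB -> YUV conversion matrix (rows: Y, U, V)
_RGB2YUV = np.array([[ 0.299,     0.587,     0.114   ],
                     [-0.14713,  -0.28886,   0.436   ],
                     [ 0.615,    -0.51499,  -0.10001 ]])

def split_luma_chroma(rgb):
    """Convert an (H, W, 3) RGB image in [0, 1] to YUV and return the
    luminance channel Y and the stacked chrominance channels (U, V),
    as consumed by the two branches of a dual-path model."""
    yuv = rgb @ _RGB2YUV.T
    return yuv[..., 0], yuv[..., 1:]      # Y: (H, W), UV: (H, W, 2)
```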
An Effective Yet Fast Early Stopping Metric for Deep Image Prior in Image Denoising
IF 3.2, CAS Region 2 (Engineering & Technology)
IEEE Signal Processing Letters, Pub Date: 2025-04-21, DOI: 10.1109/LSP.2025.3562948
Xiaohui Cheng; Shaoping Xu; Wuyong Tao
Abstract: The deep image prior (DIP) and its variants have demonstrated the ability to address image denoising in an unsupervised manner using only a noisy image as training data. However, practical limitations arise from overfitting in highly overparameterized models and from the lack of robustness of a fixed early-stopping iteration, which fails to adapt to varying noise levels and image contents and thereby degrades denoising effectiveness. In this work, we propose an effective yet fast early stopping metric (ESM) to overcome these limitations when applying DIP models to synthetic or real noisy images. Specifically, our ESM measures the image quality of the output images generated by the DIP network. We split the output image from each iteration into two sub-images and calculate their distance as an ESM to evaluate image quality. When the ESM stops decreasing over several iterations, we end the training, ensuring near-optimal performance without needing the ground-truth image, thus reducing computational costs and making ESM suitable for denoising real noisy images.
Volume 32, pp. 1925-1929
Citations: 0
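The metric described in the abstract above is simple to prototype. The sketch below splits the DIP output of each iteration into two sub-images, here assumed to be the even- and odd-parity pixels of a checkerboard split since the abstract does not spell out the split, measures their distance, and stops training once the distance has not decreased for a given number of iterations. The MSE distance, the patience value, and the step_fn interface are assumptions.

```python
import numpy as np

def esm(img):
    """Early-stopping metric: distance between two interleaved sub-images
    of the network output (checkerboard split assumed for illustration)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2   # crop to even size
    img = img[:h, :w]
    mask = (np.indices((h, w)).sum(axis=0) % 2).astype(bool)
    return np.mean((img[mask] - img[~mask]) ** 2)

def train_with_esm(step_fn, iters=5000, patience=50):
    """Run DIP iterations (step_fn returns the current output image) and
    stop when the ESM has not decreased for `patience` iterations."""
    best, since_best, best_img = np.inf, 0, None
    for _ in range(iters):
        out = step_fn()
        m = esm(out)
        if m < best:
            best, since_best, best_img = m, 0, out
        else:
            since_best += 1
            if since_best >= patience:
                break
    return best_img
```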