IEEE Signal Processing Letters: Latest Articles

A Convex Combination-Based Distributed Momentum Methods Over Directed Graphs
IF 3.2 · Q2 (Engineering & Technology)
IEEE Signal Processing Letters, vol. 32, pp. 1835-1839 · Pub Date: 2025-04-23 · DOI: 10.1109/LSP.2025.3563722
Siyuan Huang; Juan Gao; Qiao-Li Dong; Cuijie Zhang
Abstract: In this article, we introduce a convex combination-based distributed momentum method (CDM) for solving distributed optimization problems that minimize a sum of smooth and strongly convex local objective functions over directed graphs. The proposed method integrates the convex combination, row- and column-stochastic weights, and the adapt-then-combine rule. By selecting different parameters, it can be reduced to other distributed momentum methods, such as the parametric distributed momentum. CDM converges to the optimal solution at a global R-linear rate for any smooth and strongly convex function when the step size and momentum coefficient satisfy certain boundedness conditions. Numerical results for several distributed optimization problems demonstrate that CDM outperforms state-of-the-art methods.
Citations: 0
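The abstract does not reproduce the CDM update rule, but the ingredients it names (row- and column-stochastic weights, gradient tracking via the adapt-then-combine pattern, and a momentum term) can be illustrated with a minimal push-pull-style sketch on a directed ring. The topology, the step size `alpha`, the momentum coefficient `beta`, and the exact combination below are illustrative assumptions, not the letter's method.

```python
import numpy as np

# Directed ring of n agents: node i hears from node i-1 (plus a self-loop).
n = 4
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.5
R = W  # row-stochastic mixing for the decision variable x
C = W  # for this ring, W is also column-stochastic (gradient tracker y)

a = np.array([1.0, 2.0, 4.0, 5.0])  # local objectives f_i(x) = 0.5 * (x - a_i)^2
grad = lambda x: x - a              # stacked local gradients, elementwise

alpha, beta = 0.05, 0.2             # step size and momentum (assumed values)
x = np.zeros(n)
x_prev = x.copy()
y = grad(x)                         # tracker initialized at the local gradients

for _ in range(3000):
    g_old = grad(x)
    x_new = R @ (x - alpha * y) + beta * (x - x_prev)  # adapt, combine, momentum
    y = C @ y + grad(x_new) - g_old                    # gradient tracking update
    x_prev, x = x, x_new

print(x)  # every agent approaches the global minimizer mean(a) = 3.0
```

The column-stochastic tracker preserves the gradient sum, so the only consensus fixed point is the minimizer of the aggregate objective, here `mean(a)`.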
QELDBA: Query-Efficient and Low Distortion Black-Box Attack for Brainprint Recognition
IF 3.2 · Q2 (Engineering & Technology)
IEEE Signal Processing Letters, vol. 32, pp. 2020-2024 · Pub Date: 2025-04-22 · DOI: 10.1109/LSP.2025.3563446
Jingsheng Qian; Hangjie Yi; Honggang Liu; Xuanyu Jin; Wanzeng Kong
Abstract: While various deep learning techniques for electroencephalogram (EEG)-based brainprint recognition have achieved considerable success, these models remain vulnerable to adversarial attacks. Existing black-box attack methods, however, suffer from an inherent trade-off between query efficiency and distortion level. To address this challenge and further investigate the security risks of brainprint recognition systems in real-world black-box scenarios, we propose a query-efficient, low-distortion black-box attack method that targets the high-frequency components of EEG signals. Our approach selects sparse sampling points to estimate more accurate gradient information and leverages historical gradients to prioritize important points, thereby accelerating the attack. Perturbations are applied in the high-frequency domain of the EEG signal to enhance stealth and effectiveness. Extensive experiments under black-box settings demonstrate that our method achieves state-of-the-art performance across two datasets and four models. Compared to existing methods, it significantly improves attack success rates while reducing the number of queries and keeping distortion imperceptible, achieving a superior balance between query efficiency and perturbation stealth.
Citations: 0
Asynchronous Voice Anonymization by Learning From Speaker-Adversarial Speech
IF 3.2 · Q2 (Engineering & Technology)
IEEE Signal Processing Letters, vol. 32, pp. 1905-1909 · Pub Date: 2025-04-22 · DOI: 10.1109/LSP.2025.3563306
Rui Wang; Liping Chen; Kong Aik Lee; Zhen-Hua Ling
Abstract: This letter focuses on asynchronous voice anonymization, wherein machine-discernible speaker attributes in a speech utterance are obscured while human perception is preserved. We propose to transfer the voice-protection capability of speaker-adversarial speech to speaker embeddings, thereby facilitating the modification of speaker embeddings extracted from original speech to generate anonymized speech. Experiments on the LibriSpeech dataset show that, compared to speaker-adversarial utterances, the generated anonymized speech achieves improved transferability and voice-protection capability. Furthermore, the proposed method enhances the preservation of human perception in anonymized speech within the generative asynchronous voice anonymization framework.
Citations: 0
Vision Mamba Distillation for Low-Resolution Fine-Grained Image Classification
IF 3.2 · Q2 (Engineering & Technology)
IEEE Signal Processing Letters, vol. 32, pp. 1965-1969 · Pub Date: 2025-04-22 · DOI: 10.1109/LSP.2025.3563441
Yao Chen; Jiabao Wang; Peichao Wang; Rui Zhang; Yang Li
Abstract: Low-resolution fine-grained image classification has recently made significant progress, largely thanks to super-resolution techniques and knowledge distillation methods. However, these approaches cause an exponential increase in model parameters and computational complexity. To solve this problem, in this letter we propose a Vision Mamba Distillation (ViMD) approach to enhance the effectiveness and efficiency of low-resolution fine-grained image classification. Concretely, a lightweight super-resolution vision Mamba classification network (SRVM-Net) is proposed to improve visual feature extraction by redesigning the classification sub-network with Mamba modeling. Moreover, we design a novel multi-level Mamba knowledge distillation loss that boosts performance by transferring prior knowledge from a high-resolution vision Mamba classification network (HRVM-Net) teacher to the proposed SRVM-Net student. Extensive experiments on seven public fine-grained classification benchmark datasets confirm that ViMD achieves new state-of-the-art performance. Alongside its higher accuracy, ViMD uses fewer parameters and FLOPs than comparable methods, making it better suited to embedded device applications.
Citations: 0
Complex Singular Spectrum Analysis Leveraging Adaptive Taper Windows for Enhancing Mode Reconstruction From Multivariate Signals
IF 3.2 · Q2 (Engineering & Technology)
IEEE Signal Processing Letters, vol. 32, pp. 1820-1824 · Pub Date: 2025-04-21 · DOI: 10.1109/LSP.2025.3562823
Jialiang Gu; Kevin Hung; Bingo Wing-Kuen Ling; Daniel Hung-Kay Chow; Yang Zhou
Abstract: In this letter, a generic extension of complex singular spectrum analysis (CSSA), referred to as GC-SSA, is proposed to enhance mode reconstruction from multivariate signals. This is achieved by introducing adaptive taper windows into CSSA. Specifically, we formulate an optimization problem for window design on specific multivariate signals and then employ an iterative algorithm to optimize the taper-window coefficients. Using the optimized taper windows, GC-SSA can decompose multivariate signals and perfectly reconstruct time-varying modes with maximally concentrated energy. Numerical simulations validate the effectiveness of the proposed method in mode reconstruction compared to other multivariate signal processing methods.
Citations: 0
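GC-SSA extends the classical SSA pipeline: embed the signal in a Hankel trajectory matrix, compute an SVD, keep a group of components, and map back to a series by anti-diagonal averaging. A minimal univariate sketch of that baseline pipeline follows; the window length and component choices are illustrative, and nothing here reproduces the letter's adaptive taper-window design.

```python
import numpy as np

def ssa_reconstruct(x, L, components):
    """Classical SSA: Hankel embedding -> SVD -> grouping -> diagonal averaging."""
    N = len(x)
    K = N - L + 1
    # L x K trajectory (Hankel) matrix: column j is the window x[j : j+L]
    X = np.column_stack([x[j:j + L] for j in range(K)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = sum(s[r] * np.outer(U[:, r], Vt[r]) for r in components)
    # Anti-diagonal averaging maps the rank-reduced matrix back to a series
    rec = np.zeros(N)
    cnt = np.zeros(N)
    for j in range(K):
        rec[j:j + L] += Xr[:, j]
        cnt[j:j + L] += 1
    return rec / cnt

rng = np.random.default_rng(0)
t = np.arange(500)
clean = np.sin(2 * np.pi * t / 50)                     # one oscillatory mode
noisy = clean + 0.3 * rng.standard_normal(500)
rec = ssa_reconstruct(noisy, L=60, components=(0, 1))  # a sinusoid spans 2 components
```

Correlating `rec` with `clean` confirms that the leading pair of singular components captures the oscillatory mode; GC-SSA's contribution is to replace the implicit rectangular window of this embedding with optimized taper windows.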
ASSMark: Dual Defense Against Speech Synthesis Attack via Adversarial Robust Watermarking
IF 3.2 · Q2 (Engineering & Technology)
IEEE Signal Processing Letters, vol. 32, pp. 1870-1874 · Pub Date: 2025-04-21 · DOI: 10.1109/LSP.2025.3562817
Yulin He; Hongxia Wang; Yiqin Qiu; Hao Cao
Abstract: Given the widespread dissemination of digital audio and the advancements in speech synthesis technologies, protecting audio copyright has become a critical issue. Although watermarks play an important role in copyright verification and forensic analysis, they are insufficient to proactively defend against malicious speech synthesis. To address this issue, we introduce a novel adversarial speech synthesis watermarking mechanism (ASSMark), which simultaneously traces the audio copyright and disrupts speech synthesis models by embedding robust adversarial watermarks in a single pass. Specifically, we design a unified training framework that models the embedding of watermarks and adversarial perturbations as collaborative tasks. This approach allows any robust watermark to be fine-tuned into an adversarial watermark, resulting in watermarked audio that can effectively defend against unauthorized speech synthesis attacks. Experimental results demonstrate that ASSMark achieves over a 90% protection rate even against unknown black-box models. Compared to simple two-step protection methods, it not only effectively resists synthesis attacks but also achieves superior watermark extraction accuracy and speech quality, offering an outstanding solution for protecting audio copyright.
Citations: 0
Moving Average-Based Variable Projection for Separable Nonlinear Problems
IF 3.2 · Q2 (Engineering & Technology)
IEEE Signal Processing Letters, vol. 32, pp. 1900-1904 · Pub Date: 2025-04-21 · DOI: 10.1109/LSP.2025.3563133
Peng Xue; Min Gan; Fang Yuan; Guang-Yong Chen; C. L. Philip Chen
Abstract: The identification of separable nonlinear models, prevalent in tasks such as signal analysis, image processing, time series analysis, and machine learning, presents a non-convex optimization challenge that necessitates efficient identification algorithms. The Variable Projection (VP) algorithm has proven quite effective for these problems; however, traditional VP methods that rely on the Hessian matrix and its inverse are highly time-consuming and unsuitable for complex, large-scale applications. This letter introduces a novel approach that employs exponential moving averages of the gradient and of the gradient estimation bias to indirectly estimate the curvature of the objective landscape, yielding a Moving Average-based Variable Projection method (MAVP). The proposed algorithm uses only gradient information and properly handles the coupling between parameters during optimization, thereby achieving faster convergence. Numerical results on nonlinear time series analysis and image reconstruction demonstrate that MAVP is both efficient and effective.
Citations: 0
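The two ingredients the abstract names can be sketched on a toy separable model y ≈ c·exp(−θt): variable projection eliminates the linear coefficient c in closed form, and exponential moving averages of the gradient and squared gradient serve as a curvature proxy for the nonlinear parameter update. The Adam-style recursion below is a stand-in for illustration, not the letter's actual MAVP update, and all constants are assumed values.

```python
import numpy as np

# Toy separable model: y ~= c * exp(-theta * t), linear in c, nonlinear in theta.
t = np.linspace(0.0, 1.0, 50)
y = 3.0 * np.exp(-2.0 * t)  # noiseless ground truth: c = 3, theta = 2

def reduced_grad(theta):
    """Variable projection: solve for c in closed form, return dg/dtheta and c."""
    phi = np.exp(-theta * t)
    c = (phi @ y) / (phi @ phi)   # least-squares optimal linear coefficient
    r = y - c * phi               # residual of the reduced problem
    dphi = -t * phi
    return -(r @ (c * dphi)), c   # gradient of 0.5*||r||^2 w.r.t. theta

theta, m, v = 0.5, 0.0, 0.0       # start far from the true theta
b1, b2, lr, eps = 0.9, 0.999, 0.01, 1e-8
for k in range(1, 5001):
    g, c = reduced_grad(theta)
    m = b1 * m + (1 - b1) * g          # EMA of the gradient
    v = b2 * v + (1 - b2) * g * g      # EMA of the squared gradient (curvature proxy)
    mhat, vhat = m / (1 - b1**k), v / (1 - b2**k)
    theta -= lr * mhat / (np.sqrt(vhat) + eps)

print(theta, c)  # drifts toward theta ~= 2.0, c ~= 3.0
```

Because c is re-solved exactly at every step, the moving-average machinery only has to navigate the one-dimensional reduced landscape, which is the efficiency argument behind VP-style methods.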
Multilevel Representation Disentanglement Framework for Multimodal Sentiment Analysis
IF 3.2 · Q2 (Engineering & Technology)
IEEE Signal Processing Letters, vol. 32, pp. 1895-1899 · Pub Date: 2025-04-21 · DOI: 10.1109/LSP.2025.3562827
Nan Jia; Zicong Bai; Tiancheng Xiong; Mingyang Guo
Abstract: Multimodal Sentiment Analysis (MSA) has gained wide attention across many fields in recent years. However, heterogeneity and redundant information among different signals seriously hinder the extraction and fusion of sentiment features. To address this challenge, we propose a Multilevel Representation Disentanglement Framework (MRDF) to achieve effective modality fusion and produce refined joint multimodal representations. Specifically, we design a refined semantic decomposition module that learns task-shared and modality-exclusive representations through cross-modal translation and task semantic reconstruction. Furthermore, we propose a contrastive learning-based distribution alignment mechanism and an adversarial learning-based distribution alignment strategy to further align the disentangled task-shared representations. Experimental results show that MRDF significantly outperforms existing state-of-the-art methods on the MOSI and MOSEI benchmarks.
Citations: 0
Beyond $R$-Barycenters: An Effective Averaging Method on Stiefel and Grassmann Manifolds
IF 3.2 · Q2 (Engineering & Technology)
IEEE Signal Processing Letters, vol. 32, pp. 1950-1954 · Pub Date: 2025-04-21 · DOI: 10.1109/LSP.2025.3562735
Florent Bouchard; Nils Laurent; Salem Said; Nicolas Le Bihan
Abstract: In this paper, the issue of averaging data on a manifold is addressed. While the Fréchet mean resulting from Riemannian geometry appears ideal, it is unfortunately not always available and is often computationally very expensive. To overcome this, $R$-barycenters have been proposed and successfully applied to Stiefel and Grassmann manifolds. However, $R$-barycenters still suffer severe limitations, as they rely on iterative algorithms and complicated operators. We propose simpler, yet efficient, barycenters that we call $RL$-barycenters. We show that, in the setting relevant to most applications, our framework yields astonishingly simple barycenters: arithmetic means projected onto the manifold. We apply this approach to the Stiefel and Grassmann manifolds. On simulated data, our approach is competitive with existing averaging methods while being computationally cheaper.
Citations: 0
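The "arithmetic mean projected onto the manifold" recipe is concrete enough to sketch. For the Stiefel manifold St(n, p), the Frobenius-closest orthonormal-column matrix to a full-rank matrix is its polar factor, obtained from an SVD. The dimensions, noise level, and sample generation below are illustrative assumptions; the derivation of the $RL$-barycenter itself is in the paper.

```python
import numpy as np

def proj_stiefel(M):
    """Project a full-rank n x p matrix onto St(n, p): the polar factor
    U @ Vt is the closest orthonormal-column matrix in Frobenius norm."""
    U, _, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ Vt

# Noisy samples scattered around a common frame Q on St(6, 2)
rng = np.random.default_rng(0)
Q = proj_stiefel(rng.standard_normal((6, 2)))
samples = [proj_stiefel(Q + 0.1 * rng.standard_normal((6, 2))) for _ in range(20)]

# Projected arithmetic mean: average in the ambient space, project back
Xbar = proj_stiefel(np.mean(samples, axis=0))

print(np.allclose(Xbar.T @ Xbar, np.eye(2)))  # True: the mean lies on St(6, 2)
```

One SVD of an n x p mean matrix replaces the iterative fixed-point schemes that $R$-barycenters require, which is the computational advantage the abstract claims.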
Enhancing Tumor Edge Consistency in Multimodal MRI Synthesis for Improved Glioma Segmentation
IF 3.2 · Q2 (Engineering & Technology)
IEEE Signal Processing Letters, vol. 32, pp. 2060-2064 · Pub Date: 2025-04-21 · DOI: 10.1109/LSP.2025.3562824
Can Chang; Li Yao; Xiaojie Zhao
Abstract: Precise segmentation of glioma subregions using multimodal MRI is crucial for accurate diagnosis and effective treatment. However, the absence of certain MRI modalities in clinical settings often leads to incomplete information, necessitating cross-modality synthesis to fill the gaps. A significant challenge in this synthesis is the blurring of tumor subregion boundaries, which degrades subsequent segmentation accuracy. Existing methods, while improving boundary clarity, fail to ensure consistent depiction across modalities due to varying contrasts and sensitive areas. To address these issues, we propose CSEC-Net, a novel tumor-aware synthesis model that enhances tumor edge consistency through specific-contrast extraction and edge-consistency enhancement. Our model employs a Contrast-Specific Prototype Learning (CS-PL) method to extract contrast-specific prototype features and an Edge Consistency Contrast Learning (EC-CL) method to improve tumor edge pixel sampling and feature learning. This approach ensures consistent and clear tumor edge depiction across modalities, significantly improving multimodal MRI synthesis and tumor segmentation accuracy.
Citations: 0