{"title":"GAWNet: A Gated Attention Wavelet Network for Respiratory Monitoring via Millimeter-Wave Radar","authors":"Yong Wang;Dongyu Liu;Chendong Xu;Bao Zhang;Yi Lu;Kuiying Yin;Shuai Yao;Qisong Wu","doi":"10.1109/LSP.2025.3611688","DOIUrl":"https://doi.org/10.1109/LSP.2025.3611688","url":null,"abstract":"Millimeter-wave radar has attracted increasing attention for respiratory monitoring due to its non-contact operation and privacy-preserving characteristics. Nevertheless, extracting fine-grained respiratory waveforms from non-stationary radar signals remains highly challenging, as these signals are frequently contaminated by various interferences, most notably aperiodic body micromotion. The spectral components of such interference often overlap with the respiratory frequency band and typically exhibit power levels that significantly exceed the target signal. This letter introduces the Gated Attention Wavelet Network (GAWNet), an interpretable framework that integrates deep learning with physical priors by operating on radar phase information in the wavelet domain. GAWNet leverages a two-stage suppression strategy: first, a Temporal Gated Attention (TGA) encoder combines convolutional gating and self-attention to achieve initial interference reduction; second, a Frequency Gated Attention (FGA) decoder provides further refinement by transforming wavelet coefficients to the frequency domain for precise filtering. The clean respiratory waveform is then reconstructed using an Inverse Discrete Wavelet Transform (IDWT). 
Extensive experiments with data from 12 subjects demonstrate that GAWNet consistently outperforms state-of-the-art models and exhibits robust generalization capability.","PeriodicalId":13154,"journal":{"name":"IEEE Signal Processing Letters","volume":"32 ","pages":"3695-3699"},"PeriodicalIF":3.9,"publicationDate":"2025-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145210057","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dataset Distillation for Super-Resolution Without Class Labels and Pre-Trained Models","authors":"Sunwoo Cho;Yejin Jung;Nam Ik Cho;Jae Woong Soh","doi":"10.1109/LSP.2025.3611694","DOIUrl":"https://doi.org/10.1109/LSP.2025.3611694","url":null,"abstract":"Training deep neural networks has become increasingly demanding, requiring large datasets and significant computational resources, especially as model complexity advances. Data distillation methods, which aim to improve data efficiency, have emerged as promising solutions to this challenge. In the field of single image super-resolution (SISR), the reliance on large training datasets highlights the importance of these techniques. Recently, a generative adversarial network (GAN) inversion-based data distillation framework for SR was proposed, showing potential for better data utilization. However, the current method depends heavily on pre-trained SR networks and class-specific information, limiting its generalizability and applicability. To address these issues, we introduce a new data distillation approach for image SR that does not need class labels or pre-trained SR models. In particular, we first extract high-gradient patches and categorize images based on CLIP features, then fine-tune a diffusion model on the selected patches to learn their distribution and synthesize distilled training images. Experimental results show that our method achieves state-of-the-art performance while using significantly less training data and requiring less computational time. Specifically, when we train a baseline Transformer model for SR with only 0.68% of the original dataset, the performance drop is just 0.3 dB. 
In this case, diffusion model fine-tuning takes 4 hours, and SR model training completes within 1 hour, much shorter than the 11-hour training time with the full dataset.","PeriodicalId":13154,"journal":{"name":"IEEE Signal Processing Letters","volume":"32 ","pages":"3700-3704"},"PeriodicalIF":3.9,"publicationDate":"2025-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145210055","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Online Simplex-Structured Matrix Factorization","authors":"Hugues Kouakou;José Henrique de Morais Goulart;Raffaele Vitale;Thomas Oberlin;David Rousseau;Cyril Ruckebusch;Nicolas Dobigeon","doi":"10.1109/LSP.2025.3611695","DOIUrl":"https://doi.org/10.1109/LSP.2025.3611695","url":null,"abstract":"Simplex-structured matrix factorization (SSMF) is a common task encountered in signal processing and machine learning. Minimum-volume constrained unmixing (MVCU) algorithms are among the most widely used methods to perform this task. While MVCU algorithms generally perform well in an offline setting, their direct application to online scenarios suffers from scalability limitations due to memory and computational demands. To overcome these limitations, this letter proposes an approach which can build upon any off-the-shelf MVCU algorithm to operate sequentially, i.e., to handle one observation at a time. The key idea of the proposed method consists in updating the solution of MVCU only when necessary, guided by an online check of the corresponding optimization problem constraints. It only stores and processes observations identified as informative with respect to the geometrical constraints underlying SSMF. 
We demonstrate the effectiveness of the approach when analyzing synthetic and real datasets, showing that it achieves estimation accuracy comparable to the offline MVCU method upon which it relies, while significantly reducing the computational cost.","PeriodicalId":13154,"journal":{"name":"IEEE Signal Processing Letters","volume":"32 ","pages":"3705-3709"},"PeriodicalIF":3.9,"publicationDate":"2025-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145210029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Multimodal Contrastive and Transfer Learning-Based Image Restoration Model for Multiple Adverse Weather Driving Scenes","authors":"Shi Yin;Hui Liu","doi":"10.1109/LSP.2025.3611705","DOIUrl":"https://doi.org/10.1109/LSP.2025.3611705","url":null,"abstract":"Adverse weather conditions like rain, fog, and snow significantly hinder perception in autonomous driving systems. This paper proposes a multimodal contrastive learning and transfer learning-based adaptive image restoration method for multiple adverse weather conditions. By integrating image and textual information, our method enhances robustness to diverse weather scenarios. Specifically, we first fine-tune a contrastive language-image pre-trained model to develop a multimodal image classifier capable of recognizing adverse weather conditions. Subsequently, an encoder-decoder-based restoration network is employed, where cross-attention layers incorporate textual conditional information, enabling the network to perceive weather variations. An adaptive restoration strategy is then applied to target specific noise characteristics associated with different weather conditions. 
Experiments on Rain Cityscapes, Foggy Cityscapes, and Snow Cityscapes show our model outperforms task-specific and All-in-One methods in visual and real-time performance, providing an efficient and robust solution for autonomous driving in complex environments.","PeriodicalId":13154,"journal":{"name":"IEEE Signal Processing Letters","volume":"32 ","pages":"3745-3749"},"PeriodicalIF":3.9,"publicationDate":"2025-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145210056","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Asymptotic Classification Error for Heavy-Tailed Renewal Processes","authors":"Xinhui Rong;Victor Solo","doi":"10.1109/LSP.2025.3611709","DOIUrl":"https://doi.org/10.1109/LSP.2025.3611709","url":null,"abstract":"Despite the widespread occurrence of classification problems and the increasing collection of point process data across many disciplines, study of error probability for point process classification only emerged very recently. Here, we consider classification of renewal processes. We obtain asymptotic expressions for the Bhattacharyya bound on misclassification error probabilities for heavy-tailed renewal processes.","PeriodicalId":13154,"journal":{"name":"IEEE Signal Processing Letters","volume":"32 ","pages":"3769-3773"},"PeriodicalIF":3.9,"publicationDate":"2025-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145255864","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Outlier Resistant Fuzzy Clustering via Row Sparse Discriminative Embedding Projection","authors":"Xinru Zhang;Zhenyu Ma;Jingyu Wang;Feiping Nie;Xuelong Li","doi":"10.1109/LSP.2025.3611314","DOIUrl":"https://doi.org/10.1109/LSP.2025.3611314","url":null,"abstract":"Fuzzy clustering and its derivatives have been widely applied for handling overlapping clusters through probabilistic membership assignment, yet their performance degrades under cumulative outlier interference. To cope with this limitation, we propose the Outlier Resistant Fuzzy Clustering via Row Sparse Discriminative Embedding Projection (RFCDE), which introduces an adaptive sample contribution vector to resist outliers, a row-sparse membership refinement strategy to enhance attention to normal samples, and a projection-guided prototype learning module to mitigate representation bias. Furthermore, a discriminative embedding objective is designed to effectively mitigate extraneous feature effects. These modules form a unified iterative architecture that improves clustering reliability in a low-dimensional framework. Comparative experiments on real-world datasets validate its broad applicability.","PeriodicalId":13154,"journal":{"name":"IEEE Signal Processing Letters","volume":"32 ","pages":"3735-3739"},"PeriodicalIF":3.9,"publicationDate":"2025-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145210065","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Sufficient Condition of Non-Convex $\ell_{p}-\beta\ell_{q}$ Minimization for Sparse Recovery","authors":"Tao Pang;Geng-Hua Li","doi":"10.1109/LSP.2025.3611325","DOIUrl":"https://doi.org/10.1109/LSP.2025.3611325","url":null,"abstract":"To recover sparse signals via <inline-formula><tex-math>$\ell_{p}-\beta\ell_{q}$</tex-math></inline-formula> minimization with parameters <inline-formula><tex-math>$0 < p \leq 1$</tex-math></inline-formula>, <inline-formula><tex-math>$1 \leq q \leq 2$</tex-math></inline-formula> (<inline-formula><tex-math>$p \ne q$</tex-math></inline-formula>) and <inline-formula><tex-math>$0 \leq \beta \leq 1$</tex-math></inline-formula>, this letter employs the Restricted Isometry Property (RIP) and Restricted Orthogonality Property (ROP) to investigate sparse signal recovery in the noise setting. Based on these analyses, a sufficient condition is proposed, which generalizes and improves state-of-the-art results. In Section <xref>III</xref>, several key remarks are presented. Our results also demonstrate that the derived condition outperforms the existing ones.","PeriodicalId":13154,"journal":{"name":"IEEE Signal Processing Letters","volume":"32 ","pages":"3690-3694"},"PeriodicalIF":3.9,"publicationDate":"2025-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145210045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Beyond Static Features: A Novel Dynamic Palmprint Verification Framework Empowered by Generative Models","authors":"Ziyuan Yang;Lu Leng;Andrew Beng Jin Teoh;Bob Zhang;Yi Zhang","doi":"10.1109/LSP.2025.3611328","DOIUrl":"https://doi.org/10.1109/LSP.2025.3611328","url":null,"abstract":"Palmprint recognition has received considerable attention due to its inherent discriminative characteristics. However, conventional methods largely rely on static features extracted from individual images, which limits their representational richness. To address this, we propose a dynamic palmprint verification framework that harnesses generative models to enhance feature representations through dynamic construction and matching strategies. During training, a classifier-guided generative model synthesizes class-aware pairs, and a regularization term is introduced to expand the feature space, while mitigating overfitting. For matching, we reformulate the process as a subspace projection within a locally adaptive feature space, where the original and class-conditioned generated features form the basis of the subspace. This enables the model to capture latent inter-individual relationships and achieve stronger discriminability. Extensive experiments across multiple backbones and public benchmarks validate the effectiveness and robustness of the proposed framework.","PeriodicalId":13154,"journal":{"name":"IEEE Signal Processing Letters","volume":"32 ","pages":"3740-3744"},"PeriodicalIF":3.9,"publicationDate":"2025-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145210047","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"EDRC-NeRF: Enhanced Detail Recovery in Complex Lighting","authors":"Yuan Xie;Kai Lv;Jianping Cui;Liang Yuan","doi":"10.1109/LSP.2025.3611317","DOIUrl":"https://doi.org/10.1109/LSP.2025.3611317","url":null,"abstract":"Neural Radiance Field (NeRF) is a state-of-the-art 3D reconstruction paradigm that seamlessly combines neural networks with efficient volumetric rendering. However, it has limitations in accurately modeling light transmission variations and capturing fine geometric details under complex lighting conditions, which poses significant challenges for detail restoration. To address these issues, we propose EDRC-NeRF, a novel extension of Aleth-NeRF that inherits its volumetric rendering framework and network architecture. EDRC-NeRF further enhances detail recovery and model generalization in complex lighting scenarios. It utilizes a truncated cone sampling technique to efficiently mitigate excessive blurring and aliasing artifacts. In addition, it dynamically captures multi-view features to improve viewpoint synthesis quality and employs a pruning strategy to enhance model generalization under different lighting conditions. Experimental evaluations on the LOM and ROF datasets show that EDRC-NeRF provides a significant improvement in the quality of detail reproduction, verifying its robustness and excellent performance under complex lighting conditions.","PeriodicalId":13154,"journal":{"name":"IEEE Signal Processing Letters","volume":"32 ","pages":"3789-3793"},"PeriodicalIF":3.9,"publicationDate":"2025-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145255867","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards Simultaneous Sign Language Production: A Future-Context-Aware Approach","authors":"Biao Fu;Tong Sun;Xiaodong Shi;Yidong Chen","doi":"10.1109/LSP.2025.3610359","DOIUrl":"https://doi.org/10.1109/LSP.2025.3610359","url":null,"abstract":"Sign Language Production (SLP) has achieved promising progress in offline settings, where full input text is available before generation. However, such methods are unsuitable for real-time applications requiring low latency. In this work, we introduce Simultaneous Sign Language Production (SimulSLP), a new task that generates sign pose sequences incrementally from streaming text input. We first formalize the SimulSLP task and adapt the Average Token Delay metric to quantify latency. Then, we benchmark this task using three strong baselines from offline SLP—an end-to-end system and two cascaded pipelines with neural and dictionary-based Gloss-to-Pose modules—under a wait-<inline-formula><tex-math>$k$</tex-math></inline-formula> policy. However, all baselines suffer from a mismatch between full-sequence training and partial-input inference. To mitigate this, we propose a Future-Context-Aware Inference (FCAI) strategy. FCAI enhances partial input representations by predicting a small number of future tokens using a large language model. Before decoding, speculative features from the predicted tokens are discarded to ensure alignment with the observed input. 
Experiments on PHOENIX2014T show that FCAI significantly improves the quality-latency trade-off, especially in low-latency settings, offering a promising step toward SimulSLP.","PeriodicalId":13154,"journal":{"name":"IEEE Signal Processing Letters","volume":"32 ","pages":"3764-3768"},"PeriodicalIF":3.9,"publicationDate":"2025-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145255829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}