{"title":"Quantized feature alignment for unsupervised domain adaptation in abdominal and prostate segmentation","authors":"Yang Wang, Xu Chen, Xiyu Zhang, Dongliang Liu","doi":"10.1016/j.dsp.2025.105580","DOIUrl":"10.1016/j.dsp.2025.105580","url":null,"abstract":"<div><div>Unsupervised Domain Adaptation (UDA) plays a crucial role in medical image segmentation, especially when annotated target domain data is unavailable. Traditional continuous feature alignment methods face challenges due to mini-batch limitations and often fail to capture the full domain distribution. To address these issues, we propose QFASeg-Net, a novel UDA strategy that utilizes quantized feature alignment instead of conventional continuous approaches. Our grouped quantization technique transforms continuous features into discrete representations, constructing a codebook that progressively learns to capture the entire feature distribution of the domain, rather than adapting to the limited distributions within mini-batches. Moreover, we incorporate multi-scale high-order statistical alignment to refine the alignment of quantized features across different domain spaces, enhancing cross-domain feature consistency. Experimental results on abdominal and prostate segmentation tasks demonstrate that QFASeg-Net outperforms existing methods, validating the effectiveness of quantized feature alignment for cross-modality medical image segmentation.</div></div>","PeriodicalId":51011,"journal":{"name":"Digital Signal Processing","volume":"168 ","pages":"Article 105580"},"PeriodicalIF":3.0,"publicationDate":"2025-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145046127","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FRFusion: Flare removal for nighttime infrared and visible image fusion","authors":"Hongli Wang, Wenhua Qian, Xue Wang, Chunlan Zhan, Cong Bi, Shuang Luo","doi":"10.1016/j.dsp.2025.105586","DOIUrl":"10.1016/j.dsp.2025.105586","url":null,"abstract":"<div><div>Infrared and visible image fusion (IVIF) aims to integrate the critical information captured by two different sensors effectively. However, most existing methods are designed for well-illuminated environments and often result in the loss of visible details under low-light conditions. Although some low-light enhancement methods for nighttime IVIF have been proposed, they generally incorporate illumination adjustment modules in a simplistic manner, focusing on enhancing intensity information while neglecting the influence of flare artifacts during the enhancement process. To address this limitation, we propose a fusion network called flare removal for nighttime infrared and visible image fusion (FRFusion), which generates flare masks to prevent the loss of complementary information. Specifically, we first design a lightweight multi-scale fusion block (MSFB). In this block, a depthwise separable convolution module (DSCM) combined with a dynamic feature modulation mechanism is employed for efficient local feature extraction. Subsequently, global feature refinement is achieved through an adaptive Fourier filter (AFF) based on the Fourier transform. Moreover, a pretrained auxiliary flare detector (AFD) is used to generate flare masks for constructing a flare-aware fusion loss, which guides the network to suppress flare interference in the fused results. Extensive experiments demonstrate that FRFusion outperforms state-of-the-art (SOTA) methods in both visual quality and quantitative evaluations. In particular, it shows remarkable effectiveness in flare suppression, delivering higher-quality information representation in the fused images.</div></div>","PeriodicalId":51011,"journal":{"name":"Digital Signal Processing","volume":"168 ","pages":"Article 105586"},"PeriodicalIF":3.0,"publicationDate":"2025-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145010318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A transformer-based hybrid network for Alzheimer's disease diagnosis via MRI","authors":"Zhentao Hu , Yihan Wang , Yanyang Li","doi":"10.1016/j.dsp.2025.105562","DOIUrl":"10.1016/j.dsp.2025.105562","url":null,"abstract":"<div><div>Treating Alzheimer's disease (AD) is currently considered highly challenging among various neurodegenerative diseases. The precision of AD diagnosis can be confounded by multiple factors. Magnetic resonance imaging (MRI) is a critical tool for diagnosing AD. To assist physicians in clinical diagnosis, a new hybrid model, CTM-Net, is proposed based on MRI. CTM-Net incorporates a CNN enhanced by a channel attention mechanism to extract local fine-grained features from MRI slices, which are then mapped into high-level representations. Subsequently, the model integrates a Transformer's multi-head attention mechanism to capture long-range dependencies across MRI slices. The local continuity of features between two adjacent MRI slices is enhanced using a one-dimensional convolution operation, which gradually fuse spatially adjacent features to ultimately obtain global MRI information. CTM-Net was validated on the ADNI dataset. It achieved 92.70%, 83.00%, and 79.07% accuracy on the three classification tasks of AD vs. CN, AD vs. MCI, and MCI vs. CN, respectively. Compared to other models applied to AD classification tasks, the proposed model yielded superior results in terms of accuracy. CTM-Net is a Convolution-Transformer hybrid model for AD classification and diagnosis tasks, which can combine the advantages of the CNN and attention mechanism to make the most of interactive information between local lesion features and global context features for improving diagnosis efficiency.</div></div>","PeriodicalId":51011,"journal":{"name":"Digital Signal Processing","volume":"168 ","pages":"Article 105562"},"PeriodicalIF":3.0,"publicationDate":"2025-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144925677","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Data-driven end-to-end state estimation algorithm based on subspace identification","authors":"Yajing Cheng , Gang Hao","doi":"10.1016/j.dsp.2025.105554","DOIUrl":"10.1016/j.dsp.2025.105554","url":null,"abstract":"<div><div>A data-driven end-to-end state estimation algorithm for multi-input multi-output (MI-MO) high-dimensional linear systems is proposed in this paper. The proposed algorithm does not rely on any prior knowledge and instead utilizes measured input/output (I/O) data for state estimation. This algorithm is based on subspace identification technology and can handle state estimation of black box systems. The proposed algorithm consists of batch state estimation algorithm based on subspace (SI_BSE) and recursive state estimation algorithm based on subspace identification (SI_RSE). The efficacy of the proposed algorithms are verified through simulation.</div></div>","PeriodicalId":51011,"journal":{"name":"Digital Signal Processing","volume":"168 ","pages":"Article 105554"},"PeriodicalIF":3.0,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144996497","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"RMRDN: Recurrent multi-receptive residual dense network for image super-resolution","authors":"Inderjeet, Jyotindra Singh Sahambi","doi":"10.1016/j.dsp.2025.105556","DOIUrl":"10.1016/j.dsp.2025.105556","url":null,"abstract":"<div><div>Reconstructing fine textures and structures from low-resolution images remains a central challenge in super-resolution (SR). Existing CNN-based SR models often suffer from limited receptive fields, weak long-range dependency modeling, and insufficient use of hierarchical features. To address these limitations, we propose a Recurrent Multi-Receptive Residual Dense Network (RMRDN) comprising three novel modules: (1) a Recurrent Multi-Receptive Residual Dense Block (RMRDB) for capturing rich contextual information; (2) a Residual Dense LSTM (RDLSTM) for long-range dependency modeling; and (3) a Relevant Feature Booster Block (RFBB) for effective hierarchical feature utilization. Extensive experiments on five benchmark datasets demonstrate that RMRDN outperforms existing methods by producing sharper textures and more accurate structural details. For ×4 upscaling, our proposed model outperforms the second-best SR method, achieving gains of +0.10 dB on Set5, +0.11 dB on Set14, +0.13 dB on BSD100, +0.06 dB on Urban100, and +0.11 dB on Manga109, respectively.</div></div>","PeriodicalId":51011,"journal":{"name":"Digital Signal Processing","volume":"168 ","pages":"Article 105556"},"PeriodicalIF":3.0,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144925676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A novel unambiguous acquisition algorithm based on decomposition and reconstruction of sub-correlation functions for semi-integer CPM signals","authors":"Rui Xue, Mingming Xie","doi":"10.1016/j.dsp.2025.105564","DOIUrl":"10.1016/j.dsp.2025.105564","url":null,"abstract":"<div><div>Continuous phase modulation (CPM) with a semi-integer modulation index greater than 1 exhibits spectral splitting, superior tracking performance, and compatibility. However, the multiple side peaks in the autocorrelation function (ACF) of the semi-integer CPM signals introduce ambiguity threats in signal acquisition. Therefore, a novel unambiguous acquisition algorithm based on decomposition and reconstruction of sub-correlation functions (DRSCF) is proposed for semi-integer CPM signals. The algorithm further decomposes the first pulse amplitude modulation waveform after Laurent decomposition to obtain sub-signal waveforms suitable for CPM signals and reconstructs the unambiguous correlation function by a nonlinear combination of sub-correlation functions. Subsequently, energy loss compensation is performed using ACF. Theoretical analysis and simulation results show that the proposed DRSCF algorithm effectively eliminates the ambiguity threat in the of semi-integer CPM signals at the expense of some detection performance loss, and maintaining the narrow correlation peak.</div></div>","PeriodicalId":51011,"journal":{"name":"Digital Signal Processing","volume":"168 ","pages":"Article 105564"},"PeriodicalIF":3.0,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145048940","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"WEHD-DETR: A real-time defect detection algorithm for sewer pipelines based on improved RT-DETR","authors":"Guangchao Wei, Zhenzhong Yu, Dongjie Li","doi":"10.1016/j.dsp.2025.105585","DOIUrl":"10.1016/j.dsp.2025.105585","url":null,"abstract":"<div><div>Drainage pipe defects impose numerous negative impacts on society, the environment, and public safety. Excessive accumulation of sediment and obstructions in pipelines significantly reduces their water flow capacity, rendering urban areas highly susceptible to flooding during heavy rainfall and posing serious safety hazards. Additionally, structural defects such as misaligned joints and cracks can lead to groundwater leakage, potentially triggering geological disasters, including road collapses. Therefore, regular inspection of drainage pipelines is essential to ensure their proper functioning and to support urban safety and sustainable development. However, the accuracy and efficiency of current pipeline defect detection methods remain limited due to factors such as poor-quality early-stage images, insufficient data samples, complex internal pipeline backgrounds, and suboptimal lighting conditions. To address these issues, this study proposes a real-time pipeline defect detection method based on an improved RT-DETR algorithm. The method incorporates a lightweight backbone network and integrates an enhanced adaptive feature fusion module, dilated convolution, and structural reparameterization techniques, thereby improving the model's ability to extract and fuse pipeline defect information. Experimental results demonstrate that this method achieves efficient and accurate identification of five common types of pipeline defects. Compared to the original RT-DETR, the mean average precision (mAP) increases by 3.1%, the detection speed reaches 75.2 frames per second, and the model parameters are reduced by 34.6%. While maintaining high detection accuracy, the method significantly enhances detection efficiency and reduces computational resource consumption, making it suitable for real-time pipeline defect detection in complex environments.</div></div>","PeriodicalId":51011,"journal":{"name":"Digital Signal Processing","volume":"168 ","pages":"Article 105585"},"PeriodicalIF":3.0,"publicationDate":"2025-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144931911","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interference mitigation methods for vehicular ISAC systems in dynamic environments","authors":"Zhenpeng Sun, Chen Miao, Yue Ma, Ruoyu Zhang, Wen Wu","doi":"10.1016/j.dsp.2025.105563","DOIUrl":"10.1016/j.dsp.2025.105563","url":null,"abstract":"<div><div>In the rapidly advancing domain of Advanced Driver Assistance Systems, Integrated Sensing and Communication (ISAC) technology stands out for its high-integration and cost-efficiency. Nonetheless, traditional ISAC interference avoidance methods require coordination of central nodes and a lot of information exchange, leading to reduced real-time decision-making and increased system complexity and maintenance costs. To address these challenges, we propose a no-regret learning algorithm featuring selectable utility functions. By integrating interference measurements into the utility function, each vehicle dynamically selects frequency bands in real time based on the measured interference level. The algorithm also balances frequency band allocation between communication and detection tasks by employing task-specific reward mechanisms. The proposed algorithm enables single-node frequency band selection, offering greater generalizability and lower complexity than conventional interference-avoidance methods. Moreover, we implement frequency-hopping signals to enhance interference mitigation and a time-domain wideband synthesis algorithm to improve detection accuracy and stability. Theoretical analysis and simulation indicate that, in high-density vehicular ISAC environments, our method enables vehicles to achieve both superior sensing and communication performance. When the SNR exceeds -39 dB, the bit error rate drops below <span><math><msup><mrow><mn>10</mn></mrow><mrow><mo>−</mo><mn>6</mn></mrow></msup></math></span>. We further analyze the allocation process and interference mitigation capability of different task vehicles to demonstrate the convergence and effectiveness of the algorithm. Finally, by varying the number of segments in the linear frequency-modulated signals, we show that appropriate segmentation not only enhances communication throughput but also improves radar detection accuracy and interference-mitigation capability.</div></div>","PeriodicalId":51011,"journal":{"name":"Digital Signal Processing","volume":"168 ","pages":"Article 105563"},"PeriodicalIF":3.0,"publicationDate":"2025-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144925680","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"LFAH-Net: Laplacian frequency aware hierarchical network for hyperspectral image classification","authors":"Xiaoqing Wan , Hui Liu , Feng Chen , Kun Hu , Zhize Li","doi":"10.1016/j.dsp.2025.105561","DOIUrl":"10.1016/j.dsp.2025.105561","url":null,"abstract":"<div><div>In recent years, the combination of convolutional neural networks (CNNs) with transformers for spectral-spatial feature extraction and robust semantic modeling has greatly improved the performance in hyperspectral image (HSI) classification tasks. However, these methods often overlook frequency information; CNNs struggle to capture global dependencies due to limited receptive fields, and transformers tend to lose fine-grained local structures and high-frequency variations. To address these challenges, this paper proposes a Laplacian frequency aware hierarchical network (LFAH-Net). We first design the method employing a diversity frequency-aware transformer (DFAT) module alongside a multi-level frequency fusion block (MFFB) stack to explicitly separate and integrate high-frequency signals such as edges and textures, as well as low-frequency signals like spectral contours, thereby achieving cross-level frequency feature complementarity. Besides, we propose a spectral-spatial adaptive recalibration fusion (SSARF) module, specifically designed to correct misalignments and suppress noise in hyperspectral features. Finally, the multi-scale dilation convolution (MSDC) module utilizes dilated convolutions to capture both local and global contextual information, while the adaptive feature fusion (AFF) module adaptively recalibrates and fuses these features with the spectral representations from DFAT. Experimental results on four popular hyperspectral datasets demonstrate that our framework significantly outperforms several state-of-the-art methods.</div></div>","PeriodicalId":51011,"journal":{"name":"Digital Signal Processing","volume":"168 ","pages":"Article 105561"},"PeriodicalIF":3.0,"publicationDate":"2025-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144925675","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Unsupervised saliency detection via multi-focus image reconstruction and prior-guided mask based on light field imaging","authors":"Pengfei Wang, Fan Shi, Xinbo Geng, Xu Cheng, Xinpeng Zhang","doi":"10.1016/j.dsp.2025.105541","DOIUrl":"10.1016/j.dsp.2025.105541","url":null,"abstract":"<div><div>Recently, light field data has garnered significant attention due to its immense potential in Unsupervised Salient Object Detection (USOD). However, these methods neglect the ability of the light field information itself to generate pseudo-labels. In this paper, we design a two-stage pseudo-label generation framework, based on the data structure of light field. In the first stage, we propose a proxy task called multi-focus image reconstruction (MFIR). It leverages light field information to generate a shallow depth-of-field image with the focus on the salient object, approximating the learning of saliency features. In the second stage, we introduce repair network and prior-guided mask (PGM) to guide pseudo-label updating by leveraging the stability of salient features in pre-trained weights, thereby addressing the depth ambiguity issue arising from MFIR. We name our framework light field refocus for saliency (LFR4S). Additionally, we use the generated pseudo-labels for supervised training and conduct comparative analysis on the results. Experimental results demonstrate that our method surpasses most existing USOD methods across multiple datasets. Finally, we design corresponding ablation studies to verify the necessity of certain modules.</div></div>","PeriodicalId":51011,"journal":{"name":"Digital Signal Processing","volume":"168 ","pages":"Article 105541"},"PeriodicalIF":3.0,"publicationDate":"2025-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144925674","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}