{"title":"Joint pilot optimization and channel estimation using deep learning in massive MIMO systems","authors":"Nasser Sadeghi , Masoumeh Azghani , Seyed Amir Mortazavi","doi":"10.1016/j.dsp.2025.105287","DOIUrl":"10.1016/j.dsp.2025.105287","url":null,"abstract":"<div><div>In order to leverage the potential benefits of the massive Multiple-input multiple-output (MIMO) systems, it is crucial to have the accurate channel state information at the transmitter side (CSIT). This paper focuses on the joint pilot optimization and time varying channel estimation in multiuser massive MIMO systems using Deep Learning. The proposed method consists of two off line and on line stages. In the offline mode, a channel estimation network is trained and an offline pilot matrix is optimized. In the online mode, the joint pilot design and channel estimation is conducted using the deep learning scheme. A deep learning layer has been designed inspired by the sparse recovery schemes. The designed layer is used both in the pilot optimization network and in the online channel estimation and pilot optimization network. In the offline pilot optimization network, the inherent sparsity property of the channel has been exploited with the application of several designed layers. The proposed method is capable of tracking the channel variations over time with a reduced number of pilots. The performance of the proposed method has been evaluated in various simulation scenarios using two different channel models. The results confirm the superiority of the suggested scheme in offering a high precision channel estimation with much lower pilot overhead compared to its counterparts.</div></div>","PeriodicalId":51011,"journal":{"name":"Digital Signal Processing","volume":"165 ","pages":"Article 105287"},"PeriodicalIF":2.9,"publicationDate":"2025-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143942637","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ViDroneNet: An efficient detector specialized for target detection in aerial images","authors":"Haiyu Liao, Yaorui Tang, Yu Liu, Xiaohui Luo","doi":"10.1016/j.dsp.2025.105270","DOIUrl":"10.1016/j.dsp.2025.105270","url":null,"abstract":"<div><div>In recent years, the widespread use of Unmanned Aerial Vehicles (UAV) has made UAV target recognition particularly critical. However, images captured by UAV are characterized by non-uniform object distribution, multi-scale changes, complex backgrounds, and flexible viewpoints, which is a great challenge for general object detectors based on common convolutional networks. To address these issues, we propose ViDroneNet (Vison Drone Network), an efficient framework specifically designed for target detection by UAV. Firstly, to overcome the challenges posed by multi-scale target, we design the Multi-Head Self-Attention darknet (MHSA-darknet) module and applied it to the backbone network. Then, for the problem of small target aggregation, we add a specialized probe head to deepen the understanding of the detailed information of dense small targets. Finally, we designed a Channel-space deformable convolution module (CSDC) and a new approach to feature fusion, both improved sensitivity to spatially distributed inhomogeneous targets and enhanced model robustness. Experimental results show that ViDroneNet outperforms state-of-the-art methods on the VisDrone and UAVDT datases, which were compared to achieve the highest mAP.</div></div>","PeriodicalId":51011,"journal":{"name":"Digital Signal Processing","volume":"164 ","pages":"Article 105270"},"PeriodicalIF":2.9,"publicationDate":"2025-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143916597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An optimal energy switching-based eavesdropping attacks for anti-user detection in networked multi-sensor fusion systems","authors":"Yue Li , Jiajia Li , Guoliang Wei","doi":"10.1016/j.dsp.2025.105285","DOIUrl":"10.1016/j.dsp.2025.105285","url":null,"abstract":"<div><div>This article discusses the optimal design of eavesdropping schemes in networked multi-sensor fusion systems (NMFSs) with energy constraints. Multiple sensors observe the state of the process and transmit the processed data to the remote user fusion center equipped with a detector via wireless channels, under an intelligent eavesdropper with dual attack capabilities of the passive monitoring and active jamming. To tackle the energy supply issue related to eavesdropping attacks in certain scenarios, the eavesdropper adopts energy switching scheduling. Therefore, this article aims at designing an attack strategy to improve the eavesdropping performance while reducing the user estimation performance under energy constraints. The eavesdropper firstly selects the jamming signal power based on the given threshold. Then, the initial problem is transformed into an unconstrained Markov decision process (MDP) by introducing Lagrange multipliers. Finally, sufficient conditions are provided to evade the user detection. The results indicate that the optimal eavesdropping strategy exhibits threshold-type structures. The above conclusion is supported by numerical examples.</div></div>","PeriodicalId":51011,"journal":{"name":"Digital Signal Processing","volume":"164 ","pages":"Article 105285"},"PeriodicalIF":2.9,"publicationDate":"2025-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143906219","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mainlobe multiple false targets deceptive jamming suppression via joint transmit-receive design","authors":"Yipin Liu , Lei Yu , Yinsheng Wei","doi":"10.1016/j.dsp.2025.105289","DOIUrl":"10.1016/j.dsp.2025.105289","url":null,"abstract":"<div><div>A method for suppressing mainlobe multiple false targets deceptive jamming using joint transmit-receive design is proposed in this paper. We utilize multiple-input multiple-output (MIMO) radar waveform design at the transmitting end and employ element-pulse coding (EPC) to modulate the received signal mixing matrix, thereby enhancing the blind separability between the target and jamming. The receiving end adopts blind source extraction (BSE) technology improved by blind source separation (BSS) for processing. By formulating an appropriate optimization model and employing the dynamic superior-inferior subgroup particle swarm optimizer algorithm, the optimal extraction vector for the target echo is obtained to achieve jamming suppression. This method combines the blind signal processing theory at the receiving end with the waveform design theory at the transmitting end, thereby utilizing complementary advantages. The numerical results demonstrate that the proposed method offers a spectrum of performance enhancements over conventional approaches, which are dependent exclusively on either the receiving or transmitting terminal.</div></div>","PeriodicalId":51011,"journal":{"name":"Digital Signal Processing","volume":"164 ","pages":"Article 105289"},"PeriodicalIF":2.9,"publicationDate":"2025-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143912555","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Conditional denoising diffusion-based channel estimation for fast time-varying MIMO-OFDM systems","authors":"Heng Fu , Weijian Si , Ruizhi Liu","doi":"10.1016/j.dsp.2025.105283","DOIUrl":"10.1016/j.dsp.2025.105283","url":null,"abstract":"<div><div>We propose an innovative conditional denoising diffusion-based channel estimation (CDDCE) scheme for fast time-varying multiple-input multiple-output orthogonal frequency-division multiplexing (MIMO-OFDM) systems. This intelligent CDDCE model delicately adapts the denoising diffusion probabilistic model (DDPM) to conditional channel state information (CSI) generation and performs efficient channel estimation with a stochastic iterative denoising process. Specifically, the CDDCE model utilizes a Markov chain that gradually adds Gaussian noise to the customized preprocessed genuine CSI according to the cosine variance schedule for the forward Gaussian diffusion process. Then, the channel estimation begins with pure Gaussian noise and repeatedly refines the conditional roughly estimated CSI by a specialized U-Net trained on denoising at different noise levels for the reverse iterative refinement process. Numerical results show that our CDDCE scheme significantly outperforms classical approaches and three cutting-edge deep learning (DL)-based ones, indicating its eminent capability to learn the statistical characteristics of wireless channels. Besides, we demonstrate that the CDDCE scheme exhibits excellent robustness against various channel distortions and interference: when (i) there are a restricted number of pilot symbols, (ii) the cyclic prefix (CP) is omitted, (iii) the clipping noise is introduced, and (iv) the offline and online channel conditions are mismatched.</div></div>","PeriodicalId":51011,"journal":{"name":"Digital Signal Processing","volume":"164 ","pages":"Article 105283"},"PeriodicalIF":2.9,"publicationDate":"2025-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143895504","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reconstructing signals from their blind compressed measurements through consistent extension of autocorrelation sequence","authors":"Veena Narayanan , G Abhilash","doi":"10.1016/j.dsp.2025.105262","DOIUrl":"10.1016/j.dsp.2025.105262","url":null,"abstract":"<div><div>The main challenge in blind compressive sensing is to uniquely reconstruct a sparse signal from its undersampled measurements without prior knowledge of the representing basis. This paper proposes a reconstruction algorithm that estimates a signal from its blind compressed measurements using a linear prediction method of autocorrelation sequence extension. The method extends the lower dimensional autocorrelation sequence of the blind compressed measurement vector to a higher dimensional autocorrelation sequence. The autocorrelation matrix associated with the extended autocorrelation sequence is symmetric and diagonalisable. The matrix that diagonalises the extended autocorrelation matrix exhibits performance close to the Karhunen-Loeve transform. Hence, it is identified as the matrix of sparsifying basis with respect to which the underlying signal exhibits sparsity. This matrix of sparsifying basis is utilised to retrieve the sparse set of representing coefficients using the orthogonal matching pursuit algorithm. The sparse signal is estimated maintaining consistency with the available measurements. The algorithm is formulated as a cascade of three lifting steps, namely, the autocorrelation extension, identification of the sparsifying transform, and the recovery and reconstruction of signals. The signals are reconstructed uniquely with the reconstruction error lower bounded to the order of <span><math><msup><mrow><mn>10</mn></mrow><mrow><mo>−</mo><mn>3</mn></mrow></msup></math></span>.</div></div>","PeriodicalId":51011,"journal":{"name":"Digital Signal Processing","volume":"164 ","pages":"Article 105262"},"PeriodicalIF":2.9,"publicationDate":"2025-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143892298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dual flow reverse distillation for unsupervised anomaly detection","authors":"Xue-Qin Jiang , Kai Huang , Shubo Zhou , Weiyu Hu , Huanchun Peng , Jiangliang Jin , Zhijun Fang","doi":"10.1016/j.dsp.2025.105258","DOIUrl":"10.1016/j.dsp.2025.105258","url":null,"abstract":"<div><div>Unsupervised image anomaly detection is crucial in industrial manufacturing due to the difficulty of collecting a diverse set of anomaly samples. Recently, reverse distillation-based methods, with a teacher encoder guiding a student decoder, have shown promising performance. However, existing methods generally focus on identifying only one type of anomaly, either structural anomalies or logical anomalies, and struggle to address both simultaneously. In this paper, we propose a novel dual flow reverse distillation model for anomaly detection, which separates the information flow into global context and local detail sub-flows. The global context sub-flow implemented by the Convolution and Self-Attention Integrated Bottleneck Embedding (ACBE) and the Global Context Embedding Block (GCEB), targets logical anomalies, while the local detail sub-flow implemented by the Multiscale Channel Autoencoder (MCAE), focuses on structural anomalies. Different decoding layers in the student network are then specifically designed to process these information flows, enabling the model to effectively address both types of anomalies. Extensive experiments validate the effectiveness of our method, demonstrating competitive performance on the MVTec and MVTec LOCO datasets, and achieving state-of-the-art results on the more challenging BTAD dataset.</div></div>","PeriodicalId":51011,"journal":{"name":"Digital Signal Processing","volume":"164 ","pages":"Article 105258"},"PeriodicalIF":2.9,"publicationDate":"2025-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143899029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Gradient-guided low-light image enhancement with spatial and frequency gradient restoration","authors":"Chunlei Wu, Fengjiang Wu, Jie Wu, Leiquan Wang, Qinfu Xu","doi":"10.1016/j.dsp.2025.105272","DOIUrl":"10.1016/j.dsp.2025.105272","url":null,"abstract":"<div><div>Low-light image enhancement aims to improve the quality of images captured in low-light scene by restoring lost details and color information. Current enhancement methods primarily rely on prior knowledge, such as illumination models and texture information. However, due to the degradation of prior information in low-light conditions, these methods often fail to effectively guide the restoration process, resulting in suboptimal detail reconstruction. To address these challenges, we propose a gradient prior restoration-based image enhancement (GPRIE) network that enhances low-light image through the optimization of gradient priors. The GPRIE comprises two key modules: the Gradient Restoration Block (GRB) and the Gradient-guided Calibration Block (GCB). The GRB recovers degraded gradient prior information by combining the spatial and frequency domains, while the GCB utilizes the gradient information to accurately correct image details, enhancing brightness while eliminating redundant information. We conducted extensive experiments on several public datasets, including LOL, LSRW, and MIT-Adobe FiveK. Our method outperforms previous state-of-the-art models by 0.15 dB in PSNR and 0.014 in SSIM in LSRW-Nikon dataset.</div></div>","PeriodicalId":51011,"journal":{"name":"Digital Signal Processing","volume":"164 ","pages":"Article 105272"},"PeriodicalIF":2.9,"publicationDate":"2025-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143895505","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dual-branch residual sparse network with serial-parallel structure for image denoising","authors":"Zhen-Liang Yin , Xiang-Gui Guo , Li-Ying Hao","doi":"10.1016/j.dsp.2025.105267","DOIUrl":"10.1016/j.dsp.2025.105267","url":null,"abstract":"<div><div>This paper proposes a novel dual-branch residual sparse network (DRSNet) with serial-parallel structure for image denoising. In contrast to with the deep convolutional neural networks (CNNs) that only utilize the hierarchical features of noisy images, the proposed DRSNet has the advantages of depth and breadth search and attention-guided feature learning to obtain more comprehensive image feature information such as structural texture information and thus improve the model's denoising performance. The proposed DRSNet consists of two different branch sub-networks, i.e., residual sparse blocks (RSBs) and attention-guided residual sparse blocks (ARSBs), which enhance the denoising ability of the model by capturing complementary image feature information. Each of the sub-networks contains five sparse blocks and is connected by down-sampling and up-sampling operations to capture multi-scale information from local details to global context. It is worth mentioning that the proposed RSBs and ARSBs, which employ hybrid dilated convolution and residual connections can not only avoid the shortcomings of limited receptive field, large number of parameters and easy overfitting of standard convolution, but also solves the problem of low computational efficiency of dilated convolution, and realizes the balance of depth and breadth of the network. Experiments demonstrate that our proposed network model achieves excellent denoising performance.</div></div>","PeriodicalId":51011,"journal":{"name":"Digital Signal Processing","volume":"164 ","pages":"Article 105267"},"PeriodicalIF":2.9,"publicationDate":"2025-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143891286","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DSOD-YOLO: A lightweight dual feature extraction method for small target detection","authors":"Yuan Nie , Huicheng Lai , Guxue Gao","doi":"10.1016/j.dsp.2025.105268","DOIUrl":"10.1016/j.dsp.2025.105268","url":null,"abstract":"<div><div>As object detection techniques advance, large-object detection has become less challenging. However, small-object detection remains a significant hurdle. DSOD-YOLO is a lightweight small-object detection network based on YOLOv8, designed to balance detection accuracy with model efficiency. To accurately detect small objects, the network employs a dual-backbone feature extraction architecture, which enhances the extraction of small-object details. This addresses the issue of detail loss in deep models. Additionally, a Channel-Scale Adaptive Module (FASD) is introduced to adaptively select feature channels and image sizes based on the required feature information. This helps mitigate the problem of sparse feature information and information loss during feature propagation for small objects. To strengthen contextual information and further improve small-object detection, a lightweight Context and Spatial Feature Calibration Network (CSFCN) is integrated. CSFCN performs context correction and spatial feature calibration through its two core modules, Context Feature Calibration (CFC) and Spatial Feature Calibration (SFC), based on pixel context similarity and channel dimensions, respectively. To reduce model complexity, the network undergoes a pruning process, achieving lightweight small-object detection. Furthermore, knowledge distillation is employed, with a large model acting as a teacher network to guide DSOD-YOLO, leading to further accuracy improvements. Experimental results demonstrate that DSOD-YOLO outperforms state-of-the-art algorithms like YOLOv9 and YOLOv10 on multiple small-object datasets. Additionally, a new small-object dataset (SmallDark) is created for low-light conditions, and the proposed method surpasses existing algorithms on this custom dataset.</div></div>","PeriodicalId":51011,"journal":{"name":"Digital Signal Processing","volume":"164 ","pages":"Article 105268"},"PeriodicalIF":2.9,"publicationDate":"2025-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143895503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}