Digital Signal Processing — Latest Articles

MFNet: Multi-fusion network for medical image segmentation
IF 2.9 | CAS Zone 3 | Engineering Technology
Digital Signal Processing Pub Date : 2025-04-04 DOI: 10.1016/j.dsp.2025.105219
Yugen Yi, Yi He, Hong Li, Xuan Wu, Jiangyan Dai, Siwei Luo, Quancai Li, Wei Zhou
Medical image segmentation distinguishes and delineates structures, tissues, and lesions, providing crucial information for clinical diagnosis, but it faces two main challenges. On the one hand, medical images have complex structures, diverse morphologies, uneven contrast, and blurred borders between target tissues and the background, all of which complicate segmentation. On the other hand, semantic gaps exist between low-level and high-level features, as well as between the encoder and decoder, which greatly impacts segmentation effectiveness. To overcome these drawbacks, a Multi-Fusion Network (MFNet) is presented that integrates semantic and feature fusion. Two novel modules, a Multi-Level Semantic Fusion (MLSF) module and a Multi-Scale Progressive Fusion (MSPF) module, are designed to strengthen the network's ability to capture diverse semantic and scale information. Moreover, a Multi-Stage Progressive Fusion Decoder (MSPFD) replaces the traditional bottom-up aggregation decoder with a hierarchical fusion decoder that integrates features from different levels step by step. Meanwhile, an Interaction and Fusion of Adjacent Levels (IFAL) module merges higher-level and lower-level features, effectively learning semantic consistency and narrowing the semantic gap. The network is evaluated against several state-of-the-art methods on four benchmark datasets: ISIC2018, GlaS, ACDC, and Synapse. Comparative results indicate that MFNet achieves remarkable performance on medical image segmentation.
Citations: 0
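The adjacent-level fusion idea (merging a coarse high-level feature map with a finer low-level one) can be illustrated with a toy sketch. This is not the paper's IFAL module — the function name, the 2x nearest-neighbour upsampling, and the simple averaging are all illustrative assumptions:

```python
import numpy as np

def fuse_adjacent_levels(low, high):
    """Toy adjacent-level feature fusion: upsample the coarser
    (high-level) map by nearest-neighbour repetition, then average
    it with the finer (low-level) map."""
    # high: (C, H/2, W/2) -> (C, H, W) via 2x nearest-neighbour upsampling
    up = high.repeat(2, axis=1).repeat(2, axis=2)
    return 0.5 * (low + up)

low = np.ones((8, 4, 4))        # fine, low-level features
high = np.full((8, 2, 2), 3.0)  # coarse, high-level features
fused = fuse_adjacent_levels(low, high)
print(fused.shape)  # (8, 4, 4); every value is 0.5 * (1 + 3) = 2.0
```

Real decoders would replace the fixed averaging with learned convolutions and attention, but the shape bookkeeping is the same.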
Cognitive radar recognition with Kolmogorov-Smirnov test and momentum gradient descent
IF 2.9 | CAS Zone 3 | Engineering Technology
Digital Signal Processing Pub Date : 2025-04-03 DOI: 10.1016/j.dsp.2025.105212
Xiaoyuan Zhang, Shaohang Jing, Jingshu Li, Yechao Bai, Feng Yan
The emission parameters of cognitive radars adapt to the environment, which poses a challenge to radar electronic countermeasures (ECM). To counter cognitive radars, it is essential to identify their cognitive characteristics. This paper proposes a method to recognize cognitive radars with a power-allocation function. The signal-to-interference-plus-noise ratio (SINR) distribution of cognitive radars is derived through characteristic functions, and a hypothesis test, implemented as a Kolmogorov-Smirnov (K-S) detector, identifies whether the target radar performs adaptive optimized power allocation and hence has a cognitive function. Subsequently, a momentum gradient descent algorithm optimizes the jammer signal to reduce the type II error probability of radar recognition. The K-S detector is simulated and compared with the Afriat, SVM, and MLP detectors. Results demonstrate that the K-S detector outperforms both the Afriat and MLP detectors in identifying cognitive radars with dynamic power-allocation functionality. At the same detection probability, the K-S detector achieves a 2 dB improvement over the MLP detector and a 4 dB improvement over the Afriat detector.
Citations: 0
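The core idea of a K-S detector — deciding whether observed SINR samples depart from a fixed-power baseline distribution — can be sketched with SciPy's two-sample test. The Gamma-distributed SINR model and the 5% significance level are illustrative assumptions, not the paper's derivation:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Baseline: SINR samples from a radar with fixed transmit power.
baseline = rng.gamma(shape=4.0, scale=1.0, size=2000)
# Adaptive: a cognitive radar reallocating power shifts the distribution.
adaptive = rng.gamma(shape=4.0, scale=1.5, size=2000)

# Two-sample K-S test: reject "same distribution" at the 5% level.
stat, p = stats.ks_2samp(baseline, adaptive)
print(p < 0.05)  # True -> flag the emitter as cognitive
```

The paper's detector works with a derived SINR distribution rather than a second empirical sample, but the decision logic — compare empirical CDFs, threshold the maximum gap — is the same.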
Fractional lower-order covariance-based measures for cyclostationary time series with heavy-tailed distributions: Application to dependence testing and model order identification
IF 2.9 | CAS Zone 3 | Engineering Technology
Digital Signal Processing Pub Date : 2025-04-03 DOI: 10.1016/j.dsp.2025.105214
Wojciech Żuławiński, Agnieszka Wyłomańska
This article introduces new methods for the analysis of cyclostationary time series with infinite variance. Traditional cyclostationary analysis, based on periodically correlated (PC) processes, relies on the autocovariance function (ACVF). However, the ACVF is not suitable for data exhibiting a heavy-tailed distribution, particularly with infinite variance. Thus, a novel framework is proposed for the analysis of cyclostationary time series with heavy-tailed distributions, utilizing the fractional lower-order covariance (FLOC) as an alternative to covariance. This leads to the introduction of two new autodependence measures: the periodic fractional lower-order autocorrelation function (peFLOACF) and the periodic fractional lower-order partial autocorrelation function (peFLOPACF). These measures generalize the classical periodic autocorrelation function (peACF) and periodic partial autocorrelation function (pePACF), offering robust tools for analyzing infinite-variance processes. Two practical applications of the proposed measures are explored: a portmanteau test for dependence in cyclostationary series and a method for order identification in periodic autoregressive (PAR) and periodic moving average (PMA) models with infinite variance. Both applications demonstrate the potential of the new tools, with simulations validating their efficiency. The methodology is further illustrated through the analysis of real-world air pollution data, which showcases its practical utility. The results indicate that the proposed FLOC-based measures provide reliable and efficient techniques for analyzing cyclostationary processes with heavy-tailed distributions.
Citations: 0
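A sample estimator of the fractional lower-order covariance can be sketched in a few lines. The signed fractional power and the estimator form below follow the standard FLOC definition E[X^<a> Y^<b>] with x^<a> = |x|^a sign(x); the orders a = b = 0.6 are an illustrative choice (for heavy-tailed data one picks a + b below the tail index so the moment exists), not the paper's settings:

```python
import numpy as np

def signed_power(x, p):
    """x^<p> = |x|^p * sign(x), the signed fractional power."""
    return np.sign(x) * np.abs(x) ** p

def floc(x, y, a=0.6, b=0.6):
    """Sample fractional lower-order covariance E[x^<a> y^<b>].
    For heavy-tailed data choose a + b below the tail index so the
    moment exists (here a + b = 1.2 < 1.5)."""
    return np.mean(signed_power(x, a) * signed_power(y, b))

rng = np.random.default_rng(1)
# Student-t with 1.5 degrees of freedom: heavy tails, infinite variance.
x = rng.standard_t(df=1.5, size=100000)
print(np.isfinite(floc(x, x)))  # True: FLOC stays well-behaved
```

The ordinary sample covariance of such data never stabilizes as the sample grows; the FLOC with low enough orders does, which is what makes the periodic autodependence measures built on it usable.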
Slow FAMA under Nakagami-m fading channels
IF 2.9 | CAS Zone 3 | Engineering Technology
Digital Signal Processing Pub Date : 2025-04-02 DOI: 10.1016/j.dsp.2025.105208
Paulo R. de Moura, Hugerles S. Silva, Ugo S. Dias, Higo T.P. Silva, Osamah S. Badarneh, Rausley A.A. de Souza
This article investigates slow fluid antenna multiple access (FAMA) under the effect of Nakagami-m fading. Exact expressions for the outage probability (OP) based on the signal-to-interference ratio (SIR) and the signal-to-interference-plus-noise ratio (SINR) are presented. An upper bound for the SIR-based OP and an approximate expression for the SINR-based OP are derived using the Gauss-Laguerre quadrature approach. Bounds for the multiplexing gain are also deduced. In addition to showing that lower values of the fading parameter have a beneficial effect on the overall mean performance, this work illustrates several important conclusions concerning system performance as a function of the system parameters. Monte Carlo simulations validate the exact and approximate expressions.
Citations: 0
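The Monte Carlo validation step can be illustrated with a minimal sketch: under Nakagami-m fading, the squared envelope (channel power) is Gamma(m, Ω/m) distributed, so an SIR-based outage probability is easy to simulate. The single-interferer setup and parameter values are illustrative assumptions, not the paper's FAMA system model:

```python
import numpy as np

rng = np.random.default_rng(2)

def nakagami_power(m, omega, size):
    """Squared Nakagami-m envelope is Gamma(m, omega/m) distributed."""
    return rng.gamma(shape=m, scale=omega / m, size=size)

m, omega, n = 2.0, 1.0, 200000
gain_sig = nakagami_power(m, omega, n)   # desired-link channel power
gain_int = nakagami_power(m, omega, n)   # single interferer's power
sir = gain_sig / gain_int
gamma_th = 1.0                            # 0 dB SIR threshold
op = np.mean(sir < gamma_th)
print(round(op, 2))  # ~0.5: by symmetry when both links are i.i.d.
```

A closed-form OP expression, like those derived in the paper, would be checked by overlaying it on such simulated curves across thresholds and fading parameters.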
Infrared and visible image fusion based on text-image core-semantic alignment and interaction
IF 2.9 | CAS Zone 3 | Engineering Technology
Digital Signal Processing Pub Date : 2025-04-02 DOI: 10.1016/j.dsp.2025.105203
Xuan Li, Jie Wang, Weiwei Chen, Rongfu Chen, Guomin Zhang, Li Cheng
A text prior can effectively compensate for the limitations of the image modality in capturing semantic information, making the fusion process more semantic and context-aware. However, current fusion methods do not adapt well to flexible text inputs and lack precise alignment between textual semantics and local image regions. To address these issues, an image fusion method based on text-image core-semantic alignment and interaction is proposed to bridge the gap between cross-modal information. The text-image core-semantic alignment module refines the adherence between text and object regions through a pixel-wise coarse-to-fine segmentation mechanism. Meanwhile, a synergistic fusion pipeline links a contextual feature extraction unit with a cross-modal affine fusion module. The pipeline directs local attention to text-adherent image regions, while coupling global text features to supply the contextual details of whole images. In this way, the fused images adhere more closely to flexible text and capture richer contextual details for a more comprehensive visual representation. Extensive experiments on several datasets demonstrate that the proposed text-guided fusion method has clear advantages over state-of-the-art methods in fusion performance.
Citations: 0
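The alignment step — matching a text embedding to the image regions it describes — can be sketched as cosine similarity between a text vector and patch embeddings, softened into an attention map. The function name, the embeddings, and the temperature are illustrative assumptions; the paper's module operates at pixel level with a coarse-to-fine mechanism:

```python
import numpy as np

def align_text_to_patches(text_vec, patches, temperature=0.1):
    """Cosine similarity between one text embedding and N patch
    embeddings, turned into soft alignment weights over patches."""
    t = text_vec / np.linalg.norm(text_vec)
    p = patches / np.linalg.norm(patches, axis=1, keepdims=True)
    sims = p @ t                          # (N,) cosine similarities
    e = np.exp((sims - sims.max()) / temperature)
    return e / e.sum()                    # weights summing to 1

rng = np.random.default_rng(3)
patches = rng.normal(size=(16, 32))              # 16 patches, 32-dim
text = patches[5] + 0.01 * rng.normal(size=32)   # text "describes" patch 5
w = align_text_to_patches(text, patches)
print(int(np.argmax(w)))  # 5: attention concentrates on the matching patch
```

The resulting weights are exactly the kind of signal a fusion pipeline can use to direct local attention to text-adherent regions.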
Phase transitions with structured sparsity
IF 2.9 | CAS Zone 3 | Engineering Technology
Digital Signal Processing Pub Date : 2025-04-02 DOI: 10.1016/j.dsp.2025.105213
Huiguang Zhang, Baoguo Liu
While phase transition phenomena in compressed sensing have been rigorously established for simple sparse signals by Donoho and others, the behavior of structured sparse signals — such as the block or tree patterns common in real-world applications — remains theoretically underexplored. This paper addresses this gap by extending phase transition analysis to structured sparsity models through the geometric lens of high-dimensional convex polytope projections.

The investigation reveals that weak thresholds, representing the proportion of faces lost after random projection, remain invariant across both simple and structured sparsity frameworks. In contrast, strong thresholds, which determine exact recovery guarantees, vary significantly with structure type. Explicit mathematical expressions for these thresholds are derived for both block-structured and tree-structured signals, demonstrating how additional structural constraints modify recovery boundaries. For block-sparse signals, the strong threshold is proven to rise as the number of blocks increases. Similarly, tree-sparse signals exhibit distinct threshold behaviors depending on whether the sparsity falls below or exceeds the thresholds.

These findings provide theoretical justification for the empirical success of structured sparsity models in applications ranging from medical imaging to radar systems, where they consistently outperform traditional compressed sensing approaches.
Citations: 0
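Phase transitions are typically probed empirically by running a sparse recovery algorithm over random instances and checking where exact recovery breaks down. A minimal Orthogonal Matching Pursuit experiment, well inside the recovery region, is sketched below; the dimensions, seed, and use of OMP (rather than the l1/polytope machinery the paper analyzes) are illustrative assumptions:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily pick k columns of A,
    re-fitting the coefficients by least squares at each step."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(4)
n, m, k = 50, 40, 3                      # well inside the recovery region
A = rng.normal(size=(m, n)) / np.sqrt(m)
x = np.zeros(n); x[[3, 17, 42]] = [1.0, -2.0, 0.5]
x_hat = omp(A, A @ x, k)
print(np.linalg.norm(x - x_hat))  # tiny: the support is found exactly
```

Sweeping (m/n, k/m) over a grid and averaging the success indicator over many trials traces out exactly the empirical phase-transition curves that the paper's thresholds predict.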
An enhanced UNet3+ model for accurate identification of COVID-19 in CT images
IF 2.9 | CAS Zone 3 | Engineering Technology
Digital Signal Processing Pub Date : 2025-04-01 DOI: 10.1016/j.dsp.2025.105205
Hai Thanh Nguyen, Nhat Minh Nguyen, Thinh Quoc Huynh, Anh Kim Su
The COVID-19 pandemic resulted in an enormous number of infections worldwide, causing severe and irreparable consequences. Computed tomography scans show lung damage in patients. This study proposes using deep learning techniques, particularly image segmentation on medical data, to facilitate the identification of affected areas, aiding medical professionals in detecting and screening this disease. The work applies the UNet3+ architecture to image segmentation on CT lung images. Additionally, integrating the UNet3+ architecture with SE-ResNeXt50 and ResNet50 demonstrates the effectiveness of leveraging the strengths of these architectures together. The proposed methods are evaluated on a dataset that includes 373 of a total of 829 slices from 9-axis computed tomography images assessed by experienced radiologists. Experimental results show that the combination of UNet3+ and SE-ResNeXt50 is the most effective for identifying COVID-19 infection, with a mean Intersection over Union of 0.9290 and a mean Dice coefficient of 0.9619. The segmentation of COVID-19-infected regions also achieved good results, with a Dice index of 0.9111 and an IoU of 0.8367, which is promising for medical data segmentation and a strong support for healthcare.
Citations: 0
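The two metrics reported above are standard overlap measures between binary masks. A minimal implementation (not taken from the paper's code) makes their relationship concrete:

```python
import numpy as np

def dice_iou(pred, target):
    """Dice coefficient and IoU for binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = 2 * inter / (pred.sum() + target.sum())
    iou = inter / union
    return dice, iou

pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
d, i = dice_iou(pred, target)
print(round(d, 3), round(i, 3))  # 0.667 0.5
```

Note that Dice = 2·IoU/(1 + IoU), so Dice is always at least as large as IoU — consistent with the reported pairs (0.9619 vs. 0.9290 and 0.9111 vs. 0.8367).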
Improvement of underwater small target sonar images using fast two-dimensional deconvolution
IF 2.9 | CAS Zone 3 | Engineering Technology
Digital Signal Processing Pub Date : 2025-04-01 DOI: 10.1016/j.dsp.2025.105210
Bingru Li, Xudong Xu, Runze Zhang, Ouming Ye, Zhanhong Wan
The effective detection of underwater small targets largely depends on high-resolution sonar images. Traditional beamformers and matched filters are often used because of their simplicity. However, they have limited angular and range resolution and high sidelobes, resulting in insufficient image resolution and blurring, which makes small targets harder to detect. To improve the sonar system's imaging performance, this paper proposes an underwater small-target sonar image enhancement method based on spatiotemporal two-dimensional deconvolution. First, a hyperbolic frequency-modulated signal is used as the transmitted signal, and the far-field sonar echoes are received by a uniform linear array. A two-dimensional raw sonar image with azimuth and range information is obtained through traditional sonar imaging. Then, spatiotemporal two-dimensional point spread functions are designed, and the modified Richardson-Lucy algorithm is applied along both the angular and range dimensions to deconvolve the raw image, yielding a sonar image with a narrow main lobe and low sidelobes. Finally, simulations and experiments verify the feasibility of this method. The results demonstrate that it can significantly improve target imaging performance, reducing the main-lobe width by more than half and suppressing sidelobes to -60 dB. It also performs well in low signal-to-noise ratio (-10 dB) and multi-target scenarios.
Citations: 0
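The plain (unmodified) Richardson-Lucy iteration at the heart of this approach is compact enough to sketch directly. The Gaussian-like PSF, the point-target scene, and the iteration count below are illustrative assumptions, not the paper's designed spatiotemporal PSFs:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, n_iter=30):
    """Plain Richardson-Lucy deconvolution (2-D, nonnegative data)."""
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full_like(blurred, 0.5)
    for _ in range(n_iter):
        conv = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / np.maximum(conv, 1e-12)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# Sharpen a point target blurred by a small separable PSF.
psf = np.outer([0.25, 0.5, 0.25], [0.25, 0.5, 0.25])
scene = np.zeros((15, 15)); scene[7, 7] = 1.0
blurred = fftconvolve(scene, psf, mode="same")
restored = richardson_lucy(blurred, psf)
print(restored.max() > blurred.max())  # True: main lobe re-concentrated
```

Each iteration re-weights the estimate by how well its reblurred version explains the data, which is what narrows the main lobe and pushes sidelobe energy down.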
UCT: Uncertainty-Based Consistency Training for Domain Adaptive Human Activity Recognition
IF 2.9 | CAS Zone 3 | Engineering Technology
Digital Signal Processing Pub Date : 2025-04-01 DOI: 10.1016/j.dsp.2025.105209
Haiqi Hu, Chunyan She, Lidan Wang, Shukai Duan
In the field of mobile sensing and applied computing, sensor-based human activity recognition (HAR) is a critical element. Labeling large datasets is typically resource-intensive and costly. Moreover, HAR models trained on one user generalize poorly to other users due to data heterogeneity, which arises from variations in daily routines and environmental conditions among data-gathering subjects. To achieve precise HAR at lower labeling cost, a new domain adaptation approach using uncertainty-based consistency training (UCT) is proposed. Aligning source and target domains is a significant challenge for traditional domain adaptation methods: domain-specific features hinder the extraction of domain-invariant features and lead to sub-optimal adaptation performance. In the proposed method, source and target domains are first mapped to their respective intermediate domains through consistency training. Since the intermediate domain shares feature representations and contains minimal domain-specific features, this mapping reduces the influence of domain-specific features while narrowing the discrepancy between the two domains. The remaining discrepancies are then mitigated by aligning both domains with the intermediate domain, achieving more precise feature alignment. Additionally, to enable the model to learn multi-modal features from multiple sensors, a Multi-Scale Feature Attention Network (MSFAN) is proposed. Extensive experiments on three public HAR datasets demonstrate that the domain adaptation performance of UCT significantly outperforms other methods.
Citations: 0
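A common recipe behind uncertainty-based consistency training is to enforce agreement between predictions on two views of the same input, but only for samples the model is already confident about. The sketch below is a generic toy version of that recipe — the entropy threshold, the MSE form, and all names are illustrative assumptions, not the UCT objective from the paper:

```python
import numpy as np

def entropy(p, axis=-1):
    """Shannon entropy of each row of class probabilities."""
    return -(p * np.log(p + 1e-12)).sum(axis=axis)

def uncertainty_consistency_loss(p_weak, p_strong, tau=0.5):
    """MSE consistency between predictions on weakly and strongly
    augmented inputs, keeping only low-uncertainty samples."""
    mask = entropy(p_weak) < tau
    if not mask.any():
        return 0.0
    return float(np.mean((p_weak[mask] - p_strong[mask]) ** 2))

p_weak = np.array([[0.9, 0.1], [0.5, 0.5]])   # confident / uncertain
p_strong = np.array([[0.7, 0.3], [0.1, 0.9]])
loss = uncertainty_consistency_loss(p_weak, p_strong)
print(loss)  # only the confident first sample contributes
```

Filtering by uncertainty keeps noisy, ambiguous target-domain samples from dragging the alignment in the wrong direction — the same motivation behind UCT's intermediate-domain construction.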
SDATFuse: Sparse dual aggregation transformer-based network for infrared and visible image fusion
IF 2.9 | CAS Zone 3 | Engineering Technology
Digital Signal Processing Pub Date : 2025-03-31 DOI: 10.1016/j.dsp.2025.105200
Jinshi Guo, Yang Li, Yutong Chen, Yu Ling
Infrared and visible image fusion aims to integrate complementary thermal radiation and detail information to enhance scene understanding. Transformer architectures have shown promising performance in this field, but their feed-forward networks struggle to model multi-scale features, and self-attention often aggregates features using the similarities of all tokens in the queries and keys, so irrelevant tokens introduce noise. To address these issues, this paper proposes a Sparse Dual Aggregation Transformer-based network for infrared and visible image fusion (SDATFuse). First, a hybrid multi-scale feed-forward network (HMSF) is introduced to effectively model multi-scale information and extract cross-modal features. Next, a sparse spatial self-attention mechanism is developed, using a dynamic top-k selection operator to filter key self-attention values. By applying sparse spatial self-attention and channel self-attention in consecutive Transformer blocks, SDATFuse constructs a dual aggregation structure that efficiently integrates inter-block features. Additionally, a Dynamic Interaction Module (DIM) aggregates intra-block features across different self-attention dimensions. Finally, in the fusion stage, a Dual Selective Attention Module (DSAM) dynamically selects weights for global and local features from both modalities, utilizing spatial and channel self-attention maps. SDATFuse demonstrates superior performance on multiple infrared and visible image datasets. Experiments show that its fused results outperform state-of-the-art models in both qualitative and quantitative evaluations, effectively reducing noise and preserving detail.
Citations: 0
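The top-k sparsification idea — letting each query attend only to its k most similar keys so that irrelevant tokens contribute nothing — can be sketched in plain NumPy. The single-head layout and dimensions are illustrative assumptions, not SDATFuse's actual blocks:

```python
import numpy as np

def sparse_topk_attention(q, k, v, topk=2):
    """Self-attention where each query keeps only its top-k scores,
    discarding low-similarity tokens before the softmax."""
    scores = q @ k.T / np.sqrt(q.shape[-1])        # (Nq, Nk)
    # Mask everything below each row's k-th largest score.
    kth = np.sort(scores, axis=-1)[:, -topk][:, None]
    masked = np.where(scores >= kth, scores, -np.inf)
    w = np.exp(masked - masked.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v, w

rng = np.random.default_rng(5)
q, k, v = (rng.normal(size=(4, 8)) for _ in range(3))
out, w = sparse_topk_attention(q, k, v, topk=2)
print((w > 0).sum(axis=-1))  # each query attends to exactly 2 tokens
```

Because exp(-inf) is exactly zero, the masked tokens receive zero attention weight, which is how the dynamic top-k operator keeps noisy tokens out of the aggregation.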