SAR-CDCFRN: A novel SAR despeckling approach utilizing correlated dual channel feature-based residual network

IF 2.7 | Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC
Anirban Saha, Arihant K.R., Suman Kumar Maji
{"title":"SAR- cdcfrn:一种基于相关双通道残差网络的SAR去噪方法","authors":"Anirban Saha,&nbsp;Arihant K.R.,&nbsp;Suman Kumar Maji","doi":"10.1016/j.image.2025.117267","DOIUrl":null,"url":null,"abstract":"<div><div>As a result of the increasing need for capturing and processing visual data of the Earth’s surface, Synthetic Aperture Radar (SAR) technology has been widely embraced by all space research organisations. The primary drawback in the acquired SAR visuals (images) is the presence of unwanted granular noise, called “speckle”, which poses a limitation to their processing and analysis. Therefore removing this unwanted speckle noise from the captured SAR visuals, a process known as despeckling, becomes an important task. This article introduces a new despeckling residual network named SAR-CDCFRN. This network simultaneously extracts speckle components from both the spatial and inverse spatial channels. The extracted features are then correlated by a dual-layer attention block and further processed to predict the distribution of speckle in the input noisy image. The predicted distribution, which is the residual noise, is then mapped with the input noisy SAR data to generate a despeckled output image. Experimental results confirm the superiority of the proposed despeckling model over other existing technologies in the literature.</div></div>","PeriodicalId":49521,"journal":{"name":"Signal Processing-Image Communication","volume":"133 ","pages":"Article 117267"},"PeriodicalIF":2.7000,"publicationDate":"2025-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"SAR-CDCFRN: A novel SAR despeckling approach utilizing correlated dual channel feature-based residual network\",\"authors\":\"Anirban Saha,&nbsp;Arihant K.R.,&nbsp;Suman Kumar Maji\",\"doi\":\"10.1016/j.image.2025.117267\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>As a result of the increasing need for capturing and processing visual data of the Earth’s surface, Synthetic Aperture Radar (SAR) technology has been widely embraced by all space research organisations. The primary drawback in the acquired SAR visuals (images) is the presence of unwanted granular noise, called “speckle”, which poses a limitation to their processing and analysis. Therefore removing this unwanted speckle noise from the captured SAR visuals, a process known as despeckling, becomes an important task. This article introduces a new despeckling residual network named SAR-CDCFRN. This network simultaneously extracts speckle components from both the spatial and inverse spatial channels. The extracted features are then correlated by a dual-layer attention block and further processed to predict the distribution of speckle in the input noisy image. The predicted distribution, which is the residual noise, is then mapped with the input noisy SAR data to generate a despeckled output image. 
Experimental results confirm the superiority of the proposed despeckling model over other existing technologies in the literature.</div></div>\",\"PeriodicalId\":49521,\"journal\":{\"name\":\"Signal Processing-Image Communication\",\"volume\":\"133 \",\"pages\":\"Article 117267\"},\"PeriodicalIF\":2.7000,\"publicationDate\":\"2025-01-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Signal Processing-Image Communication\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0923596525000141\",\"RegionNum\":3,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Signal Processing-Image Communication","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0923596525000141","RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

As a result of the increasing need for capturing and processing visual data of the Earth’s surface, Synthetic Aperture Radar (SAR) technology has been widely embraced by all space research organisations. The primary drawback in the acquired SAR visuals (images) is the presence of unwanted granular noise, called “speckle”, which poses a limitation to their processing and analysis. Therefore removing this unwanted speckle noise from the captured SAR visuals, a process known as despeckling, becomes an important task. This article introduces a new despeckling residual network named SAR-CDCFRN. This network simultaneously extracts speckle components from both the spatial and inverse spatial channels. The extracted features are then correlated by a dual-layer attention block and further processed to predict the distribution of speckle in the input noisy image. The predicted distribution, which is the residual noise, is then mapped with the input noisy SAR data to generate a despeckled output image. Experimental results confirm the superiority of the proposed despeckling model over other existing technologies in the literature.
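The abstract describes the pipeline only at a high level. As a rough illustration of that flow (dual-branch feature extraction, attention-based fusion, residual prediction, and mapping the residual back onto the noisy input), a minimal PyTorch-style sketch is given below. It is not the authors' SAR-CDCFRN implementation: the branch depths, the reading of the "inverse spatial channel" as the intensity complement 1 - x, the structure of the dual-layer attention, and the subtraction-based residual mapping are all illustrative assumptions.

```python
# Illustrative sketch of a dual-branch residual despeckler, loosely following
# the pipeline described in the abstract. NOT the authors' SAR-CDCFRN code.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """3x3 convolution + ReLU shared by both feature-extraction branches."""
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))


class DualLayerAttention(nn.Module):
    """Placeholder for the paper's dual-layer attention: a channel gate
    followed by a spatial gate, applied to the concatenated branch features."""

    def __init__(self, ch):
        super().__init__()
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(ch, ch, 1), nn.Sigmoid())
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(ch, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, f):
        f = f * self.channel_gate(f)      # layer 1: re-weight feature channels
        return f * self.spatial_gate(f)   # layer 2: re-weight spatial positions


class DualChannelResidualDespeckler(nn.Module):
    def __init__(self, feats=32):
        super().__init__()
        self.spatial_branch = nn.Sequential(conv_block(1, feats), conv_block(feats, feats))
        self.inverse_branch = nn.Sequential(conv_block(1, feats), conv_block(feats, feats))
        self.attention = DualLayerAttention(2 * feats)
        self.residual_head = nn.Conv2d(2 * feats, 1, 3, padding=1)

    def forward(self, noisy):
        # noisy: (B, 1, H, W) intensity image normalised to [0, 1]
        f_spatial = self.spatial_branch(noisy)
        f_inverse = self.inverse_branch(1.0 - noisy)   # assumed "inverse spatial channel"
        fused = self.attention(torch.cat([f_spatial, f_inverse], dim=1))
        residual = self.residual_head(fused)           # predicted speckle component
        return noisy - residual                        # map residual back onto the input


if __name__ == "__main__":
    model = DualChannelResidualDespeckler()
    x = torch.rand(1, 1, 64, 64)        # stand-in for a noisy SAR patch
    print(model(x).shape)               # torch.Size([1, 1, 64, 64])
```

The actual layer configuration, attention design, loss function, and training data are given in the full paper.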
Source journal
Signal Processing-Image Communication
Signal Processing-Image Communication (Engineering & Technology: Electrical & Electronic Engineering)
CiteScore: 8.40
Self-citation rate: 2.90%
Articles published: 138
Review time: 5.2 months
About the journal: Signal Processing: Image Communication is an international journal for the development of the theory and practice of image communication. Its primary objectives are the following:
To present a forum for the advancement of theory and practice of image communication.
To stimulate cross-fertilization between areas similar in nature which have traditionally been separated, for example, various aspects of visual communications and information systems.
To contribute to a rapid information exchange between the industrial and academic environments.
The editorial policy and the technical content of the journal are the responsibility of the Editor-in-Chief, the Area Editors and the Advisory Editors. The Journal is self-supporting from subscription income and contains a minimum amount of advertisements. Advertisements are subject to the prior approval of the Editor-in-Chief. The journal welcomes contributions from every country in the world.
Signal Processing: Image Communication publishes articles relating to aspects of the design, implementation and use of image communication systems. The journal features original research work, tutorial and review articles, and accounts of practical developments. Subjects of interest include image/video coding, 3D video representations and compression, 3D graphics and animation compression, HDTV and 3DTV systems, video adaptation, video over IP, peer-to-peer video networking, interactive visual communication, multi-user video conferencing, wireless video broadcasting and communication, visual surveillance, 2D and 3D image/video quality measures, pre/post processing, video restoration and super-resolution, multi-camera video analysis, motion analysis, content-based image/video indexing and retrieval, face and gesture processing, video synthesis, 2D and 3D image/video acquisition and display technologies, architectures for image/video processing and communication.