RRCGAN: Unsupervised Compression of Radiometric Resolution of Remote Sensing Images Using Contrastive Learning

IF 8.6 | Region 1 (Earth Science) | Q1 ENGINEERING, ELECTRICAL & ELECTRONIC
Tengda Zhang;Jiguang Dai;Jinsong Cheng;Hongzhou Li;Ruishan Zhao;Bing Zhang
{"title":"RRCGAN:基于对比学习的遥感图像辐射分辨率无监督压缩","authors":"Tengda Zhang;Jiguang Dai;Jinsong Cheng;Hongzhou Li;Ruishan Zhao;Bing Zhang","doi":"10.1109/TGRS.2025.3528052","DOIUrl":null,"url":null,"abstract":"The majority of current remote sensing images possess high-radiometric resolution exceeding 10 bits. Precisely compressing this radiometric resolution to 8 bits is crucial for visualization and subsequent deep learning tasks. Previously, radiometric resolution compression required extensive parameter adjustments of traditional tone-mapping operators. Deep learning is gradually replacing this high manual dependency method. However, existing deep learning tone-mapping techniques are primarily designed for natural scene images captured by digital cameras, making direct application to remote sensing images challenging. This limitation stems from disparities in data formats and the complexity of semantic representation in remote sensing images. Moreover, the block prediction inherent in deep learning models often results in tiling artifacts postsplicing, failing to satisfy the scale dependency of remote sensing images. To tackle these challenges, we propose leveraging contrastive learning methods to compress the radiometric resolution of remote sensing images. Given the rich detail information and complex spatial distribution of objects in remote sensing images, we develop a CNN-Transformer hybrid generator capable of capturing both local details and long-range dependencies. Building upon this, we introduce nonlocal self-similarity contrastive loss and histogram similarity loss to enhance feature expression and regulate image color distribution. Additionally, we present a postprocessing technique based on hybrid histogram matching (HHM) to enhance image quality and seamlessly generate whole-scene images. Through experiments and comparisons on our dataset, our method demonstrates superior performance. The dataset and code can be obtained online at <uri>https://github.com/ZzzTD/RRCGAN</uri>.","PeriodicalId":13213,"journal":{"name":"IEEE Transactions on Geoscience and Remote Sensing","volume":"63 ","pages":"1-20"},"PeriodicalIF":8.6000,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"RRCGAN: Unsupervised Compression of Radiometric Resolution of Remote Sensing Images Using Contrastive Learning\",\"authors\":\"Tengda Zhang;Jiguang Dai;Jinsong Cheng;Hongzhou Li;Ruishan Zhao;Bing Zhang\",\"doi\":\"10.1109/TGRS.2025.3528052\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The majority of current remote sensing images possess high-radiometric resolution exceeding 10 bits. Precisely compressing this radiometric resolution to 8 bits is crucial for visualization and subsequent deep learning tasks. Previously, radiometric resolution compression required extensive parameter adjustments of traditional tone-mapping operators. Deep learning is gradually replacing this high manual dependency method. However, existing deep learning tone-mapping techniques are primarily designed for natural scene images captured by digital cameras, making direct application to remote sensing images challenging. This limitation stems from disparities in data formats and the complexity of semantic representation in remote sensing images. Moreover, the block prediction inherent in deep learning models often results in tiling artifacts postsplicing, failing to satisfy the scale dependency of remote sensing images. 
To tackle these challenges, we propose leveraging contrastive learning methods to compress the radiometric resolution of remote sensing images. Given the rich detail information and complex spatial distribution of objects in remote sensing images, we develop a CNN-Transformer hybrid generator capable of capturing both local details and long-range dependencies. Building upon this, we introduce nonlocal self-similarity contrastive loss and histogram similarity loss to enhance feature expression and regulate image color distribution. Additionally, we present a postprocessing technique based on hybrid histogram matching (HHM) to enhance image quality and seamlessly generate whole-scene images. Through experiments and comparisons on our dataset, our method demonstrates superior performance. The dataset and code can be obtained online at <uri>https://github.com/ZzzTD/RRCGAN</uri>.\",\"PeriodicalId\":13213,\"journal\":{\"name\":\"IEEE Transactions on Geoscience and Remote Sensing\",\"volume\":\"63 \",\"pages\":\"1-20\"},\"PeriodicalIF\":8.6000,\"publicationDate\":\"2025-01-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Geoscience and Remote Sensing\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10836865/\",\"RegionNum\":1,\"RegionCategory\":\"地球科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Geoscience and Remote Sensing","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10836865/","RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

The majority of current remote sensing images possess high-radiometric resolution exceeding 10 bits. Precisely compressing this radiometric resolution to 8 bits is crucial for visualization and subsequent deep learning tasks. Previously, radiometric resolution compression required extensive parameter adjustments of traditional tone-mapping operators. Deep learning is gradually replacing this high manual dependency method. However, existing deep learning tone-mapping techniques are primarily designed for natural scene images captured by digital cameras, making direct application to remote sensing images challenging. This limitation stems from disparities in data formats and the complexity of semantic representation in remote sensing images. Moreover, the block prediction inherent in deep learning models often results in tiling artifacts postsplicing, failing to satisfy the scale dependency of remote sensing images. To tackle these challenges, we propose leveraging contrastive learning methods to compress the radiometric resolution of remote sensing images. Given the rich detail information and complex spatial distribution of objects in remote sensing images, we develop a CNN-Transformer hybrid generator capable of capturing both local details and long-range dependencies. Building upon this, we introduce nonlocal self-similarity contrastive loss and histogram similarity loss to enhance feature expression and regulate image color distribution. Additionally, we present a postprocessing technique based on hybrid histogram matching (HHM) to enhance image quality and seamlessly generate whole-scene images. Through experiments and comparisons on our dataset, our method demonstrates superior performance. The dataset and code can be obtained online at https://github.com/ZzzTD/RRCGAN.
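
The abstract describes the task at a high level. The sketch below is a minimal illustration of that task under stated assumptions, not the paper's RRCGAN pipeline: it compresses a synthetic high-bit-depth band to 8 bits with a hand-tuned percentile stretch (the kind of traditional tone-mapping operator the authors say demands extensive manual parameter adjustment), then applies plain histogram matching as a rough stand-in for the idea behind the hybrid histogram matching (HHM) postprocessing of harmonizing neighboring tiles before mosaicking. The function names, the 2%/98% cut-offs, and the synthetic 12-bit data are illustrative assumptions, not values from the paper or its repository.

```python
# Minimal sketch (not the paper's RRCGAN model): manual tone mapping plus
# simple histogram matching, the baseline workflow the paper aims to automate.
import numpy as np

def percentile_stretch_to_8bit(band, low_pct=2.0, high_pct=98.0):
    """Compress a high-radiometric-resolution band (e.g., 10-16 bit) to 8 bits
    with a linear percentile stretch; the cut-off percentiles are the manually
    tuned parameters of this traditional tone-mapping operator."""
    lo, hi = np.percentile(band, [low_pct, high_pct])
    stretched = np.clip((band.astype(np.float64) - lo) / max(hi - lo, 1e-6), 0.0, 1.0)
    return (stretched * 255.0).round().astype(np.uint8)

def match_histogram(source, reference):
    """Monotonically remap `source` so its histogram follows `reference`.
    Plain histogram matching is shown only as a conceptual stand-in for
    HHM-style harmonization of adjacent blocks before mosaicking."""
    src_vals, src_counts = np.unique(source.ravel(), return_counts=True)
    ref_vals, ref_counts = np.unique(reference.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size
    mapped = np.interp(src_cdf, ref_cdf, ref_vals)
    return np.interp(source.ravel(), src_vals, mapped).reshape(source.shape).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    tile_a = rng.integers(0, 2**12, size=(256, 256))    # synthetic 12-bit tiles
    tile_b = rng.integers(200, 2**12, size=(256, 256))
    a8 = percentile_stretch_to_8bit(tile_a)
    b8 = match_histogram(percentile_stretch_to_8bit(tile_b), a8)  # harmonize tile_b to tile_a
    print(a8.dtype, a8.min(), a8.max(), b8.min(), b8.max())
```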
Source journal
IEEE Transactions on Geoscience and Remote Sensing (Engineering & Technology - Geochemistry & Geophysics)
CiteScore: 11.50
Self-citation rate: 28.00%
Articles published: 1912
Review time: 4.0 months
Journal description: IEEE Transactions on Geoscience and Remote Sensing (TGRS) is a monthly publication that focuses on the theory, concepts, and techniques of science and engineering as applied to sensing the land, oceans, atmosphere, and space; and the processing, interpretation, and dissemination of this information.