{"title":"RRCGAN:基于对比学习的遥感图像辐射分辨率无监督压缩","authors":"Tengda Zhang;Jiguang Dai;Jinsong Cheng;Hongzhou Li;Ruishan Zhao;Bing Zhang","doi":"10.1109/TGRS.2025.3528052","DOIUrl":null,"url":null,"abstract":"The majority of current remote sensing images possess high-radiometric resolution exceeding 10 bits. Precisely compressing this radiometric resolution to 8 bits is crucial for visualization and subsequent deep learning tasks. Previously, radiometric resolution compression required extensive parameter adjustments of traditional tone-mapping operators. Deep learning is gradually replacing this high manual dependency method. However, existing deep learning tone-mapping techniques are primarily designed for natural scene images captured by digital cameras, making direct application to remote sensing images challenging. This limitation stems from disparities in data formats and the complexity of semantic representation in remote sensing images. Moreover, the block prediction inherent in deep learning models often results in tiling artifacts postsplicing, failing to satisfy the scale dependency of remote sensing images. To tackle these challenges, we propose leveraging contrastive learning methods to compress the radiometric resolution of remote sensing images. Given the rich detail information and complex spatial distribution of objects in remote sensing images, we develop a CNN-Transformer hybrid generator capable of capturing both local details and long-range dependencies. Building upon this, we introduce nonlocal self-similarity contrastive loss and histogram similarity loss to enhance feature expression and regulate image color distribution. Additionally, we present a postprocessing technique based on hybrid histogram matching (HHM) to enhance image quality and seamlessly generate whole-scene images. Through experiments and comparisons on our dataset, our method demonstrates superior performance. 
The dataset and code can be obtained online at <uri>https://github.com/ZzzTD/RRCGAN</uri>.","PeriodicalId":13213,"journal":{"name":"IEEE Transactions on Geoscience and Remote Sensing","volume":"63 ","pages":"1-20"},"PeriodicalIF":8.6000,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"RRCGAN: Unsupervised Compression of Radiometric Resolution of Remote Sensing Images Using Contrastive Learning\",\"authors\":\"Tengda Zhang;Jiguang Dai;Jinsong Cheng;Hongzhou Li;Ruishan Zhao;Bing Zhang\",\"doi\":\"10.1109/TGRS.2025.3528052\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The majority of current remote sensing images possess high-radiometric resolution exceeding 10 bits. Precisely compressing this radiometric resolution to 8 bits is crucial for visualization and subsequent deep learning tasks. Previously, radiometric resolution compression required extensive parameter adjustments of traditional tone-mapping operators. Deep learning is gradually replacing this high manual dependency method. However, existing deep learning tone-mapping techniques are primarily designed for natural scene images captured by digital cameras, making direct application to remote sensing images challenging. This limitation stems from disparities in data formats and the complexity of semantic representation in remote sensing images. Moreover, the block prediction inherent in deep learning models often results in tiling artifacts postsplicing, failing to satisfy the scale dependency of remote sensing images. To tackle these challenges, we propose leveraging contrastive learning methods to compress the radiometric resolution of remote sensing images. Given the rich detail information and complex spatial distribution of objects in remote sensing images, we develop a CNN-Transformer hybrid generator capable of capturing both local details and long-range dependencies. 
Building upon this, we introduce nonlocal self-similarity contrastive loss and histogram similarity loss to enhance feature expression and regulate image color distribution. Additionally, we present a postprocessing technique based on hybrid histogram matching (HHM) to enhance image quality and seamlessly generate whole-scene images. Through experiments and comparisons on our dataset, our method demonstrates superior performance. The dataset and code can be obtained online at <uri>https://github.com/ZzzTD/RRCGAN</uri>.\",\"PeriodicalId\":13213,\"journal\":{\"name\":\"IEEE Transactions on Geoscience and Remote Sensing\",\"volume\":\"63 \",\"pages\":\"1-20\"},\"PeriodicalIF\":8.6000,\"publicationDate\":\"2025-01-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Geoscience and Remote Sensing\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10836865/\",\"RegionNum\":1,\"RegionCategory\":\"地球科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Geoscience and Remote Sensing","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10836865/","RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
RRCGAN: Unsupervised Compression of Radiometric Resolution of Remote Sensing Images Using Contrastive Learning
Most current remote sensing images possess high radiometric resolution, exceeding 10 bits. Precisely compressing this radiometric resolution to 8 bits is crucial for visualization and subsequent deep learning tasks. Previously, radiometric resolution compression required extensive parameter tuning of traditional tone-mapping operators; deep learning is gradually replacing this heavily manual approach. However, existing deep learning tone-mapping techniques are designed primarily for natural-scene images captured by digital cameras, making their direct application to remote sensing images challenging. This limitation stems from disparities in data formats and the complexity of semantic representation in remote sensing images. Moreover, the block-wise prediction inherent in deep learning models often produces tiling artifacts after splicing, failing to satisfy the scale dependency of remote sensing images. To tackle these challenges, we propose leveraging contrastive learning to compress the radiometric resolution of remote sensing images. Given the rich detail and complex spatial distribution of objects in remote sensing images, we develop a CNN-Transformer hybrid generator capable of capturing both local details and long-range dependencies. Building on this, we introduce a nonlocal self-similarity contrastive loss and a histogram similarity loss to enhance feature expression and regulate image color distribution. Additionally, we present a postprocessing technique based on hybrid histogram matching (HHM) to enhance image quality and seamlessly generate whole-scene images. In experiments and comparisons on our dataset, our method demonstrates superior performance. The dataset and code are available online at https://github.com/ZzzTD/RRCGAN.
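For context on the task the abstract describes, the conventional baseline it contrasts with is a simple tone-mapping operator: a percentile-based linear stretch that maps a 10- or 16-bit pixel range down to 8 bits. The sketch below is illustrative only; it is not the RRCGAN method, and the function name and percentile defaults are assumptions chosen for the example.

```python
def compress_to_8bit(pixels, low_pct=2.0, high_pct=98.0):
    """Percentile linear stretch: a common traditional baseline for
    compressing high-radiometric-resolution pixels to 8 bits.
    `pixels` is a flat sequence of integer pixel values (e.g. 10- or 16-bit).
    """
    s = sorted(pixels)
    n = len(s)
    # Clip the darkest/brightest tails before stretching, so a few
    # extreme values do not dominate the 0-255 output range.
    lo = s[min(n - 1, int(n * low_pct / 100.0))]
    hi = s[min(n - 1, int(n * high_pct / 100.0))]
    if hi <= lo:  # degenerate (near-constant) image
        return [0 for _ in pixels]
    scale = 255.0 / (hi - lo)
    return [max(0, min(255, int((p - lo) * scale))) for p in pixels]
```

The "extensive parameter adjustments" the abstract mentions correspond to hand-tuning values like `low_pct`/`high_pct` per scene, which is exactly the manual dependency that learned tone mapping aims to remove.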
Journal introduction:
IEEE Transactions on Geoscience and Remote Sensing (TGRS) is a monthly publication that focuses on the theory, concepts, and techniques of science and engineering as applied to sensing the land, oceans, atmosphere, and space; and the processing, interpretation, and dissemination of this information.