{"title":"Multisensor Remote Sensing Imagery Super-Resolution with Conditional GAN","authors":"Junwei Wang, Kun Gao, Zhenzhou Zhang, Chong Ni, Zibo Hu, Dayu Chen, Qiong Wu","doi":"10.34133/2021/9829706","DOIUrl":null,"url":null,"abstract":"Despite the promising performance on benchmark datasets that deep convolutional neural networks have exhibited in single image super-resolution (SISR), there are two underlying limitations to existing methods. First, current supervised learning-based SISR methods for remote sensing satellite imagery do not use paired real sensor data, instead operating on simulated high-resolution (HR) and low-resolution (LR) image-pairs (typically HR images with their bicubic-degraded LR counterparts), which often yield poor performance on real-world LR images. Second, SISR is an ill-posed problem, and the super-resolved image from discriminatively trained networks with lp norm loss is an average of the infinite possible HR images, thus, always has low perceptual quality. Though this issue can be mitigated by generative adversarial network (GAN), it is still hard to search in the whole solution-space and find the best solution. In this paper, we focus on real-world application and introduce a new multisensor dataset for real-world remote sensing satellite imagery super-resolution. In addition, we propose a novel conditional GAN scheme for SISR task which can further reduce the solution-space. Therefore, the super-resolved images have not only high fidelity, but high perceptual quality as well. Extensive experiments demonstrate that networks trained on the introduced dataset can obtain better performances than those trained on simulated data. Additionally, the proposed conditional GAN scheme can achieve better perceptual quality while obtaining comparable fidelity over the state-of-the-art methods.","PeriodicalId":38304,"journal":{"name":"遥感学报","volume":" ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2021-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"遥感学报","FirstCategoryId":"1089","ListUrlMain":"https://doi.org/10.34133/2021/9829706","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 8
Abstract
Despite the promising performance that deep convolutional neural networks have exhibited on benchmark datasets for single image super-resolution (SISR), existing methods have two underlying limitations. First, current supervised SISR methods for remote sensing satellite imagery do not use paired real sensor data; instead, they operate on simulated high-resolution (HR) and low-resolution (LR) image pairs (typically HR images paired with their bicubic-degraded LR counterparts), which often leads to poor performance on real-world LR images. Second, SISR is an ill-posed problem: the super-resolved image produced by a discriminatively trained network with an ℓp norm loss is an average of the infinitely many plausible HR images and therefore tends to have low perceptual quality. Although this issue can be mitigated by a generative adversarial network (GAN), it remains difficult to search the entire solution space and find the best solution. In this paper, we focus on real-world applications and introduce a new multisensor dataset for real-world remote sensing satellite imagery super-resolution. In addition, we propose a novel conditional GAN scheme for the SISR task that further reduces the solution space, so that the super-resolved images have not only high fidelity but also high perceptual quality. Extensive experiments demonstrate that networks trained on the introduced dataset perform better than those trained on simulated data, and that the proposed conditional GAN scheme achieves better perceptual quality while maintaining fidelity comparable to state-of-the-art methods.
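The abstract contrasts two ideas: training on simulated pairs built by bicubic degradation of HR images, and combining a pixel-wise (ℓp) fidelity term with an adversarial term in GAN-based SISR. The short PyTorch sketch below illustrates both in generic form. It is a minimal sketch, not the authors' implementation or the paper's conditional GAN scheme; all names (make_simulated_pair, generator_loss, lambda_adv) and the placeholder generator/discriminator outputs are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of two ideas discussed in the abstract:
# (1) simulated LR/HR training pairs built by bicubic degradation, and
# (2) a generator objective mixing an l_p fidelity term with an adversarial term.
import torch
import torch.nn.functional as F

def make_simulated_pair(hr: torch.Tensor, scale: int = 4):
    """Build an (LR, HR) pair by bicubic downsampling of an HR image.

    This is the simulated-degradation setup the paper argues generalizes poorly
    to real sensor imagery; the proposed dataset instead uses paired
    acquisitions from different sensors.
    """
    lr = F.interpolate(hr, scale_factor=1.0 / scale, mode="bicubic",
                       align_corners=False)
    return lr, hr

def generator_loss(sr: torch.Tensor, hr: torch.Tensor,
                   fake_logits: torch.Tensor, lambda_adv: float = 1e-3):
    """Pixel-wise fidelity term plus an adversarial term (hypothetical weighting).

    The pixel-wise term alone drives the network toward an average of the many
    plausible HR images (low perceptual quality); the adversarial term pushes
    the output toward more realistic textures.
    """
    pixel = F.l1_loss(sr, hr)                      # l_p-type fidelity loss (p = 1 here)
    adv = F.binary_cross_entropy_with_logits(      # generator wants D to say "real"
        fake_logits, torch.ones_like(fake_logits))
    return pixel + lambda_adv * adv

if __name__ == "__main__":
    hr = torch.rand(1, 3, 128, 128)                          # stand-in HR patch
    lr, hr = make_simulated_pair(hr, scale=4)
    sr = F.interpolate(lr, scale_factor=4, mode="bicubic",   # placeholder "generator"
                       align_corners=False)
    fake_logits = torch.randn(1, 1)                          # placeholder discriminator output
    print(lr.shape, generator_loss(sr, hr, fake_logits).item())
```

In the setup the paper proposes, the bicubic step would be replaced by real LR images from a second sensor, and the plain adversarial term would be replaced by the conditional GAN formulation described in the paper.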