Preliminary Investigation on Single Remote Sensing Image Inpainting Through a Modified GAN

S. Lou, Q. Fan, Feng Chen, Cheng Wang, Jonathan Li
{"title":"Preliminary Investigation on Single Remote Sensing Image Inpainting Through a Modified GAN","authors":"S. Lou, Q. Fan, Feng Chen, Cheng Wang, Jonathan Li","doi":"10.1109/PRRS.2018.8486163","DOIUrl":null,"url":null,"abstract":"Because of impacts resulted from sensor malfunction and clouds, there is usually a great deal of missing regions (pixels) in remotely sensed imagery. To make full use of the remotely sensed imagery affected, different algorithms for remote sensing images inpainting have been proposed. In this paper, an unsupervised convolutional neural network (CNN) context generate model was modified to recover the affected (or un-recorded) pixels in a single image without auxiliary information. Unlike existing nonparametric algorithms in which pixels located in surrounding region are used to estimate the unrecorded pixel, the proposed method directly generates content based on a neural network. To ensure recovered results with high quality, a modified reconstruction loss was used in training the model, which included structural similarity index (SSIM) loss and Ll loss. Comparison of the proposed model with bilinear interpolation was indicated through relative error. The performances of two methods in scenes with different complexity were discussed further. Results show that the proposed model performed better in simple scenes (i.e., with relative homogeneity), compared to the traditional method. Meanwhile, the corrupted images of channel blue were recovered more accurately, compared to the corrupted images of other channels (i.e., channel green and channel red). The relationship between scene complexities and channels shows that same scene has different complexities in different channels. The scene complexity presents significant correlation with recovered results, high complexity images are always accompanied by poor recovered results. It suggests that the recovering accuracy depends on scene complexity.","PeriodicalId":197319,"journal":{"name":"2018 10th IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2018-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"11","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 10th IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/PRRS.2018.8486163","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 11

Abstract

Because of sensor malfunctions and cloud cover, remotely sensed imagery usually contains a large number of missing regions (pixels). To make full use of the affected imagery, various algorithms for remote sensing image inpainting have been proposed. In this paper, an unsupervised convolutional neural network (CNN) context-generation model is modified to recover the affected (or unrecorded) pixels in a single image without auxiliary information. Unlike existing nonparametric algorithms, which estimate an unrecorded pixel from the pixels in its surrounding region, the proposed method generates the missing content directly with a neural network. To ensure high-quality recovery, a modified reconstruction loss combining a structural similarity index (SSIM) loss and an L1 loss is used to train the model. The proposed model is compared with bilinear interpolation in terms of relative error, and the performance of the two methods in scenes of different complexity is discussed further. Results show that the proposed model outperforms the traditional method in simple (i.e., relatively homogeneous) scenes. Moreover, corrupted images in the blue channel are recovered more accurately than those in the green and red channels. The relationship between scene complexity and channel shows that the same scene has different complexities in different channels. Scene complexity is significantly correlated with the recovered results: images of high complexity are consistently accompanied by poor recovery. This suggests that recovery accuracy depends on scene complexity.
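The abstract describes a reconstruction loss that combines an SSIM term with an L1 term and an evaluation based on relative error. The sketch below is not the authors' implementation; it is a minimal PyTorch illustration of how such a combined loss and metric could look. The window size, the uniform (non-Gaussian) SSIM window, the weighting factor `alpha`, and the relative-error normalization are all assumptions made for illustration.

```python
# Minimal sketch (not the paper's code) of an SSIM + L1 reconstruction loss
# and a relative-error metric, assuming inputs scaled to [0, 1].
import torch
import torch.nn.functional as F


def ssim(pred, target, window_size=11, C1=0.01 ** 2, C2=0.03 ** 2):
    """Simplified SSIM using a uniform averaging window (an assumption)."""
    mu_p = F.avg_pool2d(pred, window_size, stride=1)
    mu_t = F.avg_pool2d(target, window_size, stride=1)
    # Local (co)variances via E[x*y] - E[x]E[y] over the same window.
    sigma_p = F.avg_pool2d(pred * pred, window_size, stride=1) - mu_p ** 2
    sigma_t = F.avg_pool2d(target * target, window_size, stride=1) - mu_t ** 2
    sigma_pt = F.avg_pool2d(pred * target, window_size, stride=1) - mu_p * mu_t
    ssim_map = ((2 * mu_p * mu_t + C1) * (2 * sigma_pt + C2)) / (
        (mu_p ** 2 + mu_t ** 2 + C1) * (sigma_p + sigma_t + C2)
    )
    return ssim_map.mean()


def reconstruction_loss(pred, target, alpha=0.85):
    """Weighted sum of (1 - SSIM) and L1; `alpha` is a hypothetical weight."""
    return alpha * (1.0 - ssim(pred, target)) + (1.0 - alpha) * F.l1_loss(pred, target)


def relative_error(pred, target, eps=1e-8):
    """Mean relative error, used here as an illustrative evaluation metric."""
    return (torch.abs(pred - target) / (torch.abs(target) + eps)).mean()


# Usage on a dummy single-channel patch:
# pred, target = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
# loss = reconstruction_loss(pred, target)
# err = relative_error(pred, target)
```

The SSIM term favors locally consistent structure while the L1 term penalizes per-pixel deviations; weighting the two is a common design choice, and the exact balance used in the paper is not specified in the abstract.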