Fusion of SAR and optical images using pixel-based CNN

IF 0.7 · Region 4 (Computer Science) · Q4 · COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
S. Bandi, M. Anbarasan, D. Sheela
{"title":"基于像素的CNN融合SAR与光学图像","authors":"S. Bandi, M. Anbarasan, D. Sheela","doi":"10.14311/nnw.2022.27.012","DOIUrl":null,"url":null,"abstract":"Sensors of different wavelengths in remote sensing field capture data. Each and every sensor has its own capabilities and limitations. Synthetic aperture radar (SAR) collects data that has a high spatial and radiometric resolution. The optical remote sensors capture images with good spectral information. Fused images from these sensors will have high information when implemented with a better algorithm resulting in the proper collection of data to predict weather forecasting, soil exploration, and crop classification. This work encompasses a fusion of optical and radar data of Sentinel series satellites using a deep learning-based convolutional neural network (CNN). The three-fold work of the image fusion approach is performed in CNN as layered architecture covering the image transform in the convolutional layer, followed by the activity level measurement in the max pooling layer. Finally, the decision-making is performed in the fully connected layer. The objective of the work is to show that the proposed deep learning-based CNN fusion approach overcomes some of the difficulties in the traditional image fusion approaches. To show the performance of the CNN-based image fusion, a good number of image quality assessment metrics are analyzed. The consequences demonstrate that the integration of spatial and spectral information is numerically evident in the output image and has high robustness. Finally, the objective assessment results outperform the state-of-the-art fusion methodologies.","PeriodicalId":49765,"journal":{"name":"Neural Network World","volume":"1 1","pages":""},"PeriodicalIF":0.7000,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Fusion of SAR and optical images using pixel-based CNN\",\"authors\":\"S. Bandi, M. Anbarasan, D. Sheela\",\"doi\":\"10.14311/nnw.2022.27.012\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Sensors of different wavelengths in remote sensing field capture data. Each and every sensor has its own capabilities and limitations. Synthetic aperture radar (SAR) collects data that has a high spatial and radiometric resolution. The optical remote sensors capture images with good spectral information. Fused images from these sensors will have high information when implemented with a better algorithm resulting in the proper collection of data to predict weather forecasting, soil exploration, and crop classification. This work encompasses a fusion of optical and radar data of Sentinel series satellites using a deep learning-based convolutional neural network (CNN). The three-fold work of the image fusion approach is performed in CNN as layered architecture covering the image transform in the convolutional layer, followed by the activity level measurement in the max pooling layer. Finally, the decision-making is performed in the fully connected layer. The objective of the work is to show that the proposed deep learning-based CNN fusion approach overcomes some of the difficulties in the traditional image fusion approaches. To show the performance of the CNN-based image fusion, a good number of image quality assessment metrics are analyzed. The consequences demonstrate that the integration of spatial and spectral information is numerically evident in the output image and has high robustness. 
Finally, the objective assessment results outperform the state-of-the-art fusion methodologies.\",\"PeriodicalId\":49765,\"journal\":{\"name\":\"Neural Network World\",\"volume\":\"1 1\",\"pages\":\"\"},\"PeriodicalIF\":0.7000,\"publicationDate\":\"2022-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Neural Network World\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.14311/nnw.2022.27.012\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Network World","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.14311/nnw.2022.27.012","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Sensors operating at different wavelengths capture data in the remote sensing field, and each sensor has its own capabilities and limitations. Synthetic aperture radar (SAR) collects data with high spatial and radiometric resolution, while optical remote sensors capture images with rich spectral information. When fused with a well-designed algorithm, images from these sensors carry richer information, enabling proper data collection for weather forecasting, soil exploration, and crop classification. This work fuses optical and radar data from the Sentinel series of satellites using a deep learning-based convolutional neural network (CNN). The three stages of the image fusion approach are carried out in the layered CNN architecture: the image transform in the convolutional layer, the activity level measurement in the max pooling layer, and finally the decision-making in the fully connected layer. The objective of the work is to show that the proposed deep learning-based CNN fusion approach overcomes some of the difficulties of traditional image fusion approaches. To evaluate the performance of the CNN-based image fusion, a number of image quality assessment metrics are analyzed. The results demonstrate that the integration of spatial and spectral information is numerically evident in the output image and is highly robust. Finally, the objective assessment results outperform state-of-the-art fusion methodologies.
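For readers who want a concrete picture of the three-stage layered architecture the abstract describes, the following is a minimal, hypothetical PyTorch sketch of a pixel-based fusion CNN: convolutional layers act as the image transform, max pooling serves as the activity level measurement, and a fully connected head makes the fusion decision. The class name PixelFusionCNN, the layer sizes, the 16x16 patch size, the shared feature extractor, and the pixel-wise weighted-average fusion rule are all illustrative assumptions, not the authors' published implementation.

```python
# Minimal sketch of a pixel-based fusion CNN (illustrative assumptions only).
import torch
import torch.nn as nn

class PixelFusionCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # Stage 1: "image transform" - a shared convolutional feature
        # extractor applied to single-channel SAR and optical patches.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # Stage 2: "activity level measurement" - max pooling summarises
        # the strongest local responses of each source.
        self.pool = nn.MaxPool2d(kernel_size=2)
        # Stage 3: "decision making" - a fully connected head maps the
        # pooled features of both sources to a scalar weight for the SAR pixel.
        self.decision = nn.Sequential(
            nn.Flatten(),
            nn.Linear(2 * 32 * 8 * 8, 64),  # assumes 16x16 input patches
            nn.ReLU(),
            nn.Linear(64, 1),
            nn.Sigmoid(),
        )

    def forward(self, sar_patch, opt_patch):
        # sar_patch, opt_patch: (N, 1, 16, 16) co-registered patches.
        f_sar = self.pool(self.features(sar_patch))
        f_opt = self.pool(self.features(opt_patch))
        w = self.decision(torch.cat([f_sar, f_opt], dim=1))  # (N, 1)
        w = w.view(-1, 1, 1, 1)
        # Pixel-wise weighted average as an illustrative fusion rule.
        return w * sar_patch + (1 - w) * opt_patch

if __name__ == "__main__":
    model = PixelFusionCNN()
    sar = torch.rand(4, 1, 16, 16)
    opt = torch.rand(4, 1, 16, 16)
    fused = model(sar, opt)
    print(fused.shape)  # torch.Size([4, 1, 16, 16])
```

Sharing the feature extractor between the two modalities and fusing with a single learned weight are design shortcuts chosen to keep the sketch short; the paper's actual network and fusion rule may differ.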
Source journal
Neural Network World
Engineering & Technology - Computer Science: Artificial Intelligence
CiteScore: 1.80
Self-citation rate: 0.00%
Articles published: 0
Review time: 12 months
Journal description: Neural Network World is a bimonthly journal providing the latest developments in the field of informatics, with attention devoted mainly to the problems of: brain science, theory and applications of neural networks (both artificial and natural), fuzzy-neural systems, methods and applications of evolutionary algorithms, methods of parallel and mass-parallel computing, problems of soft-computing, and methods of artificial intelligence.