Fused LISS IV Image Classification using Deep Convolution Neural Networks

K. Maheswari, S. Rajesh
{"title":"Fused LISS IV Image Classification using Deep Convolution Neural Networks","authors":"K. Maheswari, S. Rajesh","doi":"10.15837/ijccc.2022.5.4521","DOIUrl":null,"url":null,"abstract":"These days, earth observation frameworks give a large number of heterogeneous remote sensing information. The most effective method to oversee such fulsomeness in utilizing its reciprocity is a vital test in current remote sensing investigation. Considering optical Very High Spatial Resolution (VHSR) images, satellites acquire both Multi Spectral (MS) and panchromatic (PAN) images at various spatial goals. Information fusion procedures manage this by proposing a technique to consolidate reciprocity among the various information sensors. Classification of remote sensing image by Deep learning techniques using Convolutional Neural Networks (CNN) is increasing a solid decent footing because of promising outcomes. The most significant attribute of CNN-based strategies is that earlier element extraction is not required which prompts great speculation capacities. In this article, we are proposing a novel Deep learning based SMDTR-CNN (Same Model with Different Training Round with Convolution Neural Network) approach for classifying fused (LISS IV + PAN) image next to image fusion. The fusion of remote sensing images from CARTOSAT-1 (PAN image) and IRS P6 (LISS IV image) sensor is obtained by Quantization Index Modulation with Discrete Contourlet Transform (QIM-DCT). For enhancing the image fusion execution, we remove specific commotions utilizing Bayesian channel by Adaptive Type-2 Fuzzy System. The outcomes of the proposed procedures are evaluated with respect to precision, classification accuracy and kappa coefficient. The results revealed that SMDTR-CNN with Deep Learning got the best all-around precision and kappa coefficient. Likewise, the accuracy of each class of fused images in LISS IV + PAN dataset is improved by 2% and 5%, respectively.","PeriodicalId":179619,"journal":{"name":"Int. J. Comput. Commun. Control","volume":"76 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Int. J. Comput. Commun. Control","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.15837/ijccc.2022.5.4521","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Modern Earth observation systems deliver large volumes of heterogeneous remote sensing data. How to manage this abundance while exploiting its complementarity is a key challenge in current remote sensing research. For optical Very High Spatial Resolution (VHSR) imagery, satellites acquire both multispectral (MS) and panchromatic (PAN) images at different spatial resolutions. Data fusion techniques address this by combining the complementary information from the different sensors. Classification of remote sensing images with deep learning techniques based on Convolutional Neural Networks (CNNs) is gaining a solid footing because of its promising results. The most significant attribute of CNN-based approaches is that no prior feature extraction is required, which leads to good generalization capabilities. In this article, we propose a novel deep-learning-based SMDTR-CNN (Same Model with Different Training Rounds with Convolutional Neural Network) approach for classifying the fused (LISS IV + PAN) image after image fusion. The fusion of the remote sensing images from the CARTOSAT-1 (PAN image) and IRS P6 (LISS IV image) sensors is obtained by Quantization Index Modulation with the Discrete Contourlet Transform (QIM-DCT). To improve the image fusion performance, we remove specific noise using a Bayesian filter based on an Adaptive Type-2 Fuzzy System. The results of the proposed methods are evaluated in terms of precision, classification accuracy, and kappa coefficient. The results show that SMDTR-CNN with deep learning achieved the best overall precision and kappa coefficient. In addition, the per-class accuracy of the fused images in the LISS IV + PAN dataset is improved by 2% and 5%, respectively.
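To make two ingredients of the abstract concrete, the sketch below illustrates the idea suggested by the SMDTR-CNN acronym: the same CNN architecture is trained in several independent training rounds (here, different random seeds) and the per-patch class predictions of the rounds are combined. This is not the authors' implementation; the tiny network, the synthetic 4-band patches standing in for fused LISS IV + PAN data, and the majority-vote combination rule are all assumptions made for illustration.

```python
# Minimal sketch (not the paper's code) of a "same model, different training
# rounds" ensemble: train one small CNN several times from different seeds,
# then majority-vote the predicted classes.
import torch
import torch.nn as nn

NUM_CLASSES, BANDS, PATCH, ROUNDS = 5, 4, 16, 3

class SmallCNN(nn.Module):
    """A deliberately small CNN for BANDS-channel image patches (illustrative)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(BANDS, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * (PATCH // 4) ** 2, NUM_CLASSES)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def train_one_round(x, y, seed, epochs=5):
    """Train the same architecture again, changing only the random seed."""
    torch.manual_seed(seed)
    model = SmallCNN()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    return model

# Synthetic patches stand in for real fused LISS IV + PAN data.
x_train = torch.randn(256, BANDS, PATCH, PATCH)
y_train = torch.randint(0, NUM_CLASSES, (256,))
x_test = torch.randn(64, BANDS, PATCH, PATCH)

models = [train_one_round(x_train, y_train, seed) for seed in range(ROUNDS)]

with torch.no_grad():
    votes = torch.stack([m(x_test).argmax(1) for m in models])  # (ROUNDS, N)
final_pred = torch.mode(votes, dim=0).values                    # majority vote
print(final_pred[:10])
```

The evaluation metrics named in the abstract, overall accuracy and the kappa coefficient, can be computed from a confusion matrix with the standard formulas; the matrix below is made up for illustration and is not data from the paper.

```python
# Overall accuracy and Cohen's kappa from a confusion matrix (rows = reference,
# columns = predicted).  Kappa corrects the observed agreement for the agreement
# expected by chance.
import numpy as np

def overall_accuracy(cm):
    return np.trace(cm) / cm.sum()

def kappa_coefficient(cm):
    n = cm.sum()
    observed = np.trace(cm) / n
    expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2
    return (observed - expected) / (1.0 - expected)

cm = np.array([[50, 2, 1],   # hypothetical counts, three classes
               [3, 45, 4],
               [2, 3, 40]])
print(f"OA = {overall_accuracy(cm):.3f}, kappa = {kappa_coefficient(cm):.3f}")
```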