Authors: Achala Shakya, M. Biswas, M. Pal
DOI: 10.1080/19479832.2021.2019133
Journal: International Journal of Image and Data Fusion (JCR Q3, Remote Sensing; Impact Factor 1.8)
Publication date: 2021-12-22 (Journal Article)
Citation count: 9
Fusion and classification of multi-temporal SAR and optical imagery using convolutional neural network
ABSTRACT Remote sensing image classification is difficult, especially for agricultural crops with identical phenological growth periods. In this context, multi-sensor image fusion allows a comprehensive representation of biophysical and structural information. Recently, Convolutional Neural Network (CNN)-based methods have been applied to several tasks because of their spatial-spectral interpretability. This study therefore explores the potential of fused multi-temporal Sentinel-1 (S1) and Sentinel-2 (S2) images for Land Use/Land Cover classification over an agricultural area in India. For classification, a Bayesian-optimised 2D CNN-based deep learning (DL) classifier and a pixel-based SVM classifier were used. For fusion, a CNN-based siamese network with the Ratio-of-Laplacian pyramid method was applied to images acquired over the entire winter cropping period. This fusion strategy leads to better interpretability of results, and the 2D CNN-based DL classifier outperformed the SVM in classification accuracy for both single-month fusion (95.14% and 96.11% versus 80.02% and 81.36%) and multi-temporal fusion (99.87% and 99.91% versus 95.69% and 95.84%). Results also indicate better performance by Vertical-Vertical polarised fused images than Vertical-Horizontal polarised fused images, implying that classified images obtained by DL classifiers should be analysed alongside the classification accuracy.
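The fusion step described above rests on a Laplacian pyramid decomposition, with a CNN-based siamese network supplying the fusion weights. The learned component is beyond a short sketch, but the pyramid decompose–fuse–reconstruct cycle it operates on can be illustrated with a minimal NumPy/SciPy version. Note this is a sketch under assumptions: the function names and the simple max-absolute coefficient selection rule are illustrative stand-ins for the authors' learned ratio-based weighting, not their implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def gaussian_pyramid(img, levels):
    """Repeatedly blur and downsample by 2 (sizes assumed divisible by 2**levels)."""
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(gaussian_filter(pyr[-1], sigma=1.0)[::2, ::2])
    return pyr

def laplacian_pyramid(img, levels):
    """Band-pass levels = Gaussian level minus upsampled next level; coarsest kept as-is."""
    gp = gaussian_pyramid(img, levels)
    lp = [gp[i] - zoom(gp[i + 1], 2, order=1) for i in range(levels - 1)]
    lp.append(gp[-1])
    return lp

def fuse(img_a, img_b, levels=3):
    """Fuse two co-registered images; per level, keep the larger-magnitude coefficient
    (a stand-in for the paper's siamese-network weighting), then reconstruct."""
    lp_a = laplacian_pyramid(img_a, levels)
    lp_b = laplacian_pyramid(img_b, levels)
    fused = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(lp_a, lp_b)]
    out = fused[-1]
    for lap in reversed(fused[:-1]):  # upsample and add band-pass detail back in
        out = zoom(out, 2, order=1) + lap
    return out
```

In practice the two inputs would be a co-registered SAR band (e.g. VV backscatter) and an optical band resampled to the same grid; the fused result then feeds the patch-based classifier.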
Journal overview:
International Journal of Image and Data Fusion provides a single source of information for all aspects of image and data fusion methodologies, developments, techniques and applications. Image and data fusion techniques are important for combining the many sources of satellite, airborne and ground based imaging systems, and integrating these with other related data sets for enhanced information extraction and decision making. Image and data fusion aims at the integration of multi-sensor, multi-temporal, multi-resolution and multi-platform image data, together with geospatial data, GIS, in-situ, and other statistical data sets for improved information extraction, as well as to increase the reliability of the information. This leads to more accurate information that provides for robust operational performance, i.e. increased confidence, reduced ambiguity and improved classification enabling evidence-based management. The journal welcomes original research papers, review papers, shorter letters, technical articles, book reviews and conference reports in all areas of image and data fusion including, but not limited to, the following aspects and topics:
• Automatic registration/geometric aspects of fusing images with different spatial, spectral, temporal resolutions; phase information; or acquired in different modes
• Pixel, feature and decision level fusion algorithms and methodologies
• Data assimilation: fusing data with models
• Multi-source classification and information extraction
• Integration of satellite, airborne and terrestrial sensor systems
• Fusing temporal data sets for change detection studies (e.g. for Land Cover/Land Use Change studies)
• Image and data mining from multi-platform, multi-source, multi-scale, multi-temporal data sets (e.g. geometric information, topological information, statistical information, etc.)