Analysis of the Effects of Wavelength Band Selection and Data Fusion Techniques on Multiple-Modality Homeland Security Airborne Scenes via Deep Learning Models
{"title":"Analysis of the Effects of Wavelength Band Selection and Data Fusion Techniques on Multiple-Modality Homeland Security Airborne Scenes via Deep Learning Models","authors":"Christopher D. Good, D. B. Megherbi","doi":"10.1109/HST56032.2022.10025446","DOIUrl":null,"url":null,"abstract":"In this work, we study the problem of band selection in multimodal remote sensing scenes. We present a deep learning system based on a three-dimensional variation of the DenseNet model architecture that we further modify to incorporate early and late feature fusion for multimodal learning of land cover classification. Band selection is applied during data preprocessing in order to counteract the Hughes' phenomenon (also known as the “Curse of Dimensionality”), with the intent of improving classification performance. We evaluate this deep learning data fusion system with the IEEE Geoscience and Remote Sensing Society (GRSS) data fusion contest (DFC) 2018 University of Houston dataset, a multimodal urban land usage and land cover (LULC) dataset. The experimental test harness for this work uses the TensorFlow and Keras deep learning frameworks to implement the proposed system, and our models are trained in the cloud via Google Colab notebooks. Our findings show that intelligent selection of hyperspectral bands and careful arrangement of feature fusion can result in an 8%-15% improvement in classification accuracy from the GRSS DFC 2018 contest winners when ignoring ad-hoc postprocessing. Finally, we present tables and plots comparing the efficacy of various modality fusion combinations and band selection methods to provide an in-depth analysis of how different bands and sensor modalities affect classification.","PeriodicalId":162426,"journal":{"name":"2022 IEEE International Symposium on Technologies for Homeland Security (HST)","volume":"542 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE International Symposium on Technologies for Homeland Security (HST)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/HST56032.2022.10025446","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1
Abstract
In this work, we study the problem of band selection in multimodal remote sensing scenes. We present a deep learning system based on a three-dimensional variation of the DenseNet model architecture that we further modify to incorporate early and late feature fusion for multimodal learning of land cover classification. Band selection is applied during data preprocessing to counteract the Hughes phenomenon (also known as the "curse of dimensionality"), with the intent of improving classification performance. We evaluate this deep learning data fusion system on the IEEE Geoscience and Remote Sensing Society (GRSS) Data Fusion Contest (DFC) 2018 University of Houston dataset, a multimodal urban land use and land cover (LULC) dataset. The experimental test harness for this work uses the TensorFlow and Keras deep learning frameworks to implement the proposed system, and our models are trained in the cloud via Google Colab notebooks. Our findings show that intelligent selection of hyperspectral bands and careful arrangement of feature fusion can yield an 8%-15% improvement in classification accuracy over the GRSS DFC 2018 contest winners when ad-hoc postprocessing is excluded. Finally, we present tables and plots comparing the efficacy of various modality fusion combinations and band selection methods to provide an in-depth analysis of how different bands and sensor modalities affect classification.
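The abstract does not include implementation details, so the sketch below is only an illustration of the two ideas it names: a simple band selection step applied to a hyperspectral cube during preprocessing, and a small 3D-convolutional, DenseNet-style classifier with early fusion of a second modality. It assumes TensorFlow/Keras (the frameworks the paper cites); the variance-based selection criterion, the 9x9 patch size, the layer widths, and the use of two LiDAR-derived channels as the fused modality are illustrative assumptions, not the authors' configuration.

# Minimal, hypothetical sketch (not the authors' code) of band selection and
# an early-fusion 3D DenseNet-style classifier. Shapes and hyperparameters
# are illustrative assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

def select_bands_by_variance(hsi_cube: np.ndarray, k: int = 16) -> np.ndarray:
    """Keep the k highest-variance bands of an (H, W, B) hyperspectral cube."""
    band_var = hsi_cube.reshape(-1, hsi_cube.shape[-1]).var(axis=0)
    keep = np.sort(np.argsort(band_var)[-k:])
    return hsi_cube[..., keep]

def dense_block_3d(x, growth_rate=12, n_layers=3):
    """DenseNet-style block using 3D convolutions over (H, W, bands, channels)."""
    for _ in range(n_layers):
        y = layers.BatchNormalization()(x)
        y = layers.ReLU()(y)
        y = layers.Conv3D(growth_rate, kernel_size=3, padding="same")(y)
        x = layers.Concatenate()([x, y])  # dense connectivity
    return x

def build_early_fusion_model(patch=9, n_bands=16, n_lidar=2, n_classes=20):
    # Hyperspectral patch treated as a 3D volume: (H, W, bands, 1)
    hsi_in = layers.Input(shape=(patch, patch, n_bands, 1), name="hsi")
    # Second modality (e.g., LiDAR-derived rasters): (H, W, n_lidar)
    lidar_in = layers.Input(shape=(patch, patch, n_lidar), name="lidar")

    # Early fusion: broadcast the LiDAR channels along the spectral axis and
    # concatenate with the hyperspectral volume before any convolution.
    lidar_vol = layers.Reshape((patch, patch, 1, n_lidar))(lidar_in)
    lidar_vol = layers.UpSampling3D(size=(1, 1, n_bands))(lidar_vol)
    fused = layers.Concatenate(axis=-1)([hsi_in, lidar_vol])

    x = layers.Conv3D(24, kernel_size=3, padding="same")(fused)
    x = dense_block_3d(x)
    x = layers.GlobalAveragePooling3D()(x)
    out = layers.Dense(n_classes, activation="softmax")(x)
    return Model(inputs=[hsi_in, lidar_in], outputs=out)

# Example: reduce a synthetic 64x64, 48-band cube to its 16 highest-variance bands.
cube = np.random.rand(64, 64, 48).astype("float32")
reduced = select_bands_by_variance(cube, k=16)  # -> shape (64, 64, 16)

model = build_early_fusion_model()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

A late-fusion variant of this sketch would instead run a separate convolutional branch per modality and concatenate the pooled branch features just before the classifier head, which is the other arrangement the abstract contrasts with early fusion.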