M^2 S^2 F^2: Multiscale Multistage Spectral-Spatial Features Fusion Framework for Hyperspectral Image Classification
Xiangbin Shi, Kuo Song, Zhaokui Li, Jing Bi, Deyuan Zhang
2019 IEEE International Conferences on Ubiquitous Computing & Communications (IUCC) and Data Science and Computational Intelligence (DSCI) and Smart Computing, Networking and Services (SmartCNS), October 2019
DOI: 10.1109/IUCC/DSCI/SmartCNS.2019.00116
Abstract
Hyperspectral image classification has been widely applied in many fields, but it still faces challenges because of the small number of labeled samples. In this paper, we propose the Multiscale Multistage Spectral-Spatial Feature Fusion Framework (M^2 S^2 F^2) for hyperspectral image classification with small training samples. The framework combines two deep convolutional neural networks and extracts more representative and discriminative features through the following operations. First, two 3-D cubes at different spatial scales serve as the inputs for spectral and spatial feature extraction, respectively. Second, by exploiting the strong complementary information between different layers, we form multistage spectral and spatial features by fusing the primary, intermediate and advanced features of each branch. The spectral and spatial features are extracted by spectral and spatial skipped residual blocks, which effectively alleviate the problem of gradient degradation. Third, fusing the complementary multistage spectral and spatial features further improves classification accuracy. Experimental results on the Indian Pines (IN), University of Pavia (UP) and Kennedy Space Center (KSC) datasets demonstrate the effectiveness of the proposed method with small training samples.
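To make the two-branch, multistage idea in the abstract concrete, the following is a minimal PyTorch sketch, not the authors' exact architecture: the channel counts, number of residual stages, cube sizes, pooling, and concatenation-based fusion below are all illustrative assumptions, since the paper's abstract does not specify them.

import torch
import torch.nn as nn


class SkippedResidualBlock(nn.Module):
    """3-D convolutional block with an identity skip connection."""

    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm3d(channels),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm3d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # The skip connection eases gradient degradation in deeper stacks.
        return self.act(x + self.body(x))


class Branch(nn.Module):
    """One branch (spectral or spatial): stem plus three residual stages.

    The primary, intermediate and advanced stage outputs are pooled and
    concatenated to form the multistage feature of the branch.
    """

    def __init__(self, channels=32):
        super().__init__()
        self.stem = nn.Conv3d(1, channels, kernel_size=3, padding=1)
        self.stages = nn.ModuleList(SkippedResidualBlock(channels) for _ in range(3))
        self.pool = nn.AdaptiveAvgPool3d(1)

    def forward(self, cube):                      # cube: (B, 1, bands, H, W)
        x = self.stem(cube)
        stage_feats = []
        for stage in self.stages:
            x = stage(x)
            stage_feats.append(self.pool(x).flatten(1))
        return torch.cat(stage_feats, dim=1)      # multistage feature


class M2S2F2Sketch(nn.Module):
    """Fuses multistage spectral and spatial features for classification."""

    def __init__(self, num_classes, channels=32):
        super().__init__()
        self.spectral_branch = Branch(channels)
        self.spatial_branch = Branch(channels)
        self.classifier = nn.Linear(2 * 3 * channels, num_classes)

    def forward(self, spectral_cube, spatial_cube):
        f_spec = self.spectral_branch(spectral_cube)   # smaller spatial window
        f_spat = self.spatial_branch(spatial_cube)     # larger spatial window
        return self.classifier(torch.cat([f_spec, f_spat], dim=1))


if __name__ == "__main__":
    model = M2S2F2Sketch(num_classes=16)
    # Two cubes at different spatial scales around the same pixel (assumed sizes).
    spectral_cube = torch.randn(4, 1, 100, 3, 3)   # (batch, 1, bands, 3, 3)
    spatial_cube = torch.randn(4, 1, 100, 9, 9)    # (batch, 1, bands, 9, 9)
    print(model(spectral_cube, spatial_cube).shape)  # torch.Size([4, 16])

The sketch mirrors the three steps described above: two differently scaled 3-D cubes feed separate branches, each branch concatenates its primary, intermediate and advanced stage features, and the two multistage features are fused before classification.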