{"title":"训练集大小对基于目标的土地覆盖分类的影响:三种分类器的比较","authors":"G. Myburgh, A. Niekerk","doi":"10.4018/IJAGR.2014070104","DOIUrl":null,"url":null,"abstract":"Supervised classifiers are commonly employed in remote sensing to extract land cover information, but various factors affect their accuracy. The number of available training samples, in particular, is known to have a significant impact on classification accuracies. Obtaining a sufficient number of samples is, however, not always practical. The support vector machine (SVM) is a supervised classifier known to perform well with limited training samples and has been compared favourably to other classifiers for various problems in pixel-based land cover classification. Very little research on training-sample size and classifier performance has been done in a geographical object-based image analysis (GEOBIA) environment. This paper compares the performance of SVM, nearest neighbour (NN) and maximum likelihood (ML) classifiers in a GEOBIA environment, with a focus on the influence of training-set size. Training-set sizes ranging from 4-20 per land cover class were tested. Classification tree analysis (CTA) was used for feature selection. The results indicate that the performance of all the classifiers improved significantly as the size of the training set increased. The ML classifier performed poorly when few (<10 per class) training samples were used and the NN classifier performed poorly compared to SVM throughout the experiment. SVM was the superior classifier for all training-set sizes although ML achieved competitive results for sets of 12 or more training areas per class.","PeriodicalId":368300,"journal":{"name":"Int. J. Appl. Geospat. Res.","volume":"39 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"22","resultStr":"{\"title\":\"Impact of Training Set Size on Object-Based Land Cover Classification: A Comparison of Three Classifiers\",\"authors\":\"G. Myburgh, A. Niekerk\",\"doi\":\"10.4018/IJAGR.2014070104\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Supervised classifiers are commonly employed in remote sensing to extract land cover information, but various factors affect their accuracy. The number of available training samples, in particular, is known to have a significant impact on classification accuracies. Obtaining a sufficient number of samples is, however, not always practical. The support vector machine (SVM) is a supervised classifier known to perform well with limited training samples and has been compared favourably to other classifiers for various problems in pixel-based land cover classification. Very little research on training-sample size and classifier performance has been done in a geographical object-based image analysis (GEOBIA) environment. This paper compares the performance of SVM, nearest neighbour (NN) and maximum likelihood (ML) classifiers in a GEOBIA environment, with a focus on the influence of training-set size. Training-set sizes ranging from 4-20 per land cover class were tested. Classification tree analysis (CTA) was used for feature selection. The results indicate that the performance of all the classifiers improved significantly as the size of the training set increased. The ML classifier performed poorly when few (<10 per class) training samples were used and the NN classifier performed poorly compared to SVM throughout the experiment. 
SVM was the superior classifier for all training-set sizes although ML achieved competitive results for sets of 12 or more training areas per class.\",\"PeriodicalId\":368300,\"journal\":{\"name\":\"Int. J. Appl. Geospat. Res.\",\"volume\":\"39 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2014-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"22\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Int. J. Appl. Geospat. Res.\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.4018/IJAGR.2014070104\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Int. J. Appl. Geospat. Res.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.4018/IJAGR.2014070104","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Impact of Training Set Size on Object-Based Land Cover Classification: A Comparison of Three Classifiers
Supervised classifiers are commonly employed in remote sensing to extract land cover information, but various factors affect their accuracy. The number of available training samples, in particular, is known to have a significant impact on classification accuracies. Obtaining a sufficient number of samples is, however, not always practical. The support vector machine (SVM) is a supervised classifier known to perform well with limited training samples and has been compared favourably to other classifiers for various problems in pixel-based land cover classification. Very little research on training-sample size and classifier performance has been done in a geographical object-based image analysis (GEOBIA) environment. This paper compares the performance of SVM, nearest neighbour (NN) and maximum likelihood (ML) classifiers in a GEOBIA environment, with a focus on the influence of training-set size. Training-set sizes ranging from 4 to 20 samples per land cover class were tested. Classification tree analysis (CTA) was used for feature selection. The results indicate that the performance of all the classifiers improved significantly as the size of the training set increased. The ML classifier performed poorly when few (<10 per class) training samples were used, and the NN classifier performed poorly compared to SVM throughout the experiment. SVM was the superior classifier for all training-set sizes, although ML achieved competitive results for sets of 12 or more training areas per class.
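
The sketch below is a minimal, hypothetical illustration (not the authors' code or data) of the kind of experiment the abstract describes: training SVM, nearest-neighbour and maximum-likelihood classifiers on 4 to 20 samples per class and comparing their accuracies. It uses scikit-learn, with synthetic features standing in for per-object GEOBIA features and QuadraticDiscriminantAnalysis standing in for a Gaussian maximum-likelihood classifier; all names and parameters are illustrative assumptions.

```python
# Hypothetical sketch of a training-set-size comparison, assuming scikit-learn
# and synthetic data in place of segmented image objects and field samples.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-in for object features (e.g. spectral means, texture)
# and land cover labels; a real study would use segmented image objects.
X, y = make_classification(n_samples=2000, n_features=10, n_informative=6,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_pool, y_pool = X[:1000], y[:1000]    # pool to draw training samples from
X_test, y_test = X[1000:], y[1000:]    # held-out evaluation set

classifiers = {
    "SVM": lambda: SVC(kernel="rbf", C=1.0, gamma="scale"),
    "NN": lambda: KNeighborsClassifier(n_neighbors=1),
    # Gaussian maximum-likelihood classifier; reg_param keeps the covariance
    # estimate stable with very small training sets (an illustrative choice).
    "ML": lambda: QuadraticDiscriminantAnalysis(reg_param=0.1),
}

for n_per_class in range(4, 21, 2):    # training-set sizes of 4..20 per class
    # Draw n_per_class training samples from each land cover class.
    idx = np.concatenate([
        rng.choice(np.flatnonzero(y_pool == c), size=n_per_class, replace=False)
        for c in np.unique(y_pool)
    ])
    for name, make_clf in classifiers.items():
        clf = make_clf().fit(X_pool[idx], y_pool[idx])
        acc = accuracy_score(y_test, clf.predict(X_test))
        print(f"{n_per_class:2d} samples/class  {name:3s}  accuracy={acc:.3f}")
```

In the study itself, feature selection was performed with classification tree analysis (CTA) before classification; a rough analogue in this sketch would be ranking features with a decision tree's importances and keeping only the top-ranked ones, which is omitted here for brevity.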