Title: Improving support vector machine classification accuracy based on kernel parameters optimization
Authors: Lubna B. Mohammed, K. Raahemifar
Venue: Spring Simulation Multiconference
Published: 2018-04-15
DOI: 10.22360/springsim.2018.cns.013 (https://doi.org/10.22360/springsim.2018.cns.013)
Citations: 3
Abstract
The Support Vector Machine (SVM) is one of the most popular classification algorithms. It is a supervised learning technique based on the concept of decision planes, which define the decision boundaries used to separate sets of objects. Extracting the main features of the training datasets is important, since these features can be used to define the separation boundaries. The separation boundaries can also be improved by tuning the parameters of the separating hyperplane. The literature offers various techniques for feature selection and SVM parameter optimization that can improve classification accuracy. SVM classification is used in a wide variety of applications, such as text classification, disease diagnosis, and gene analysis. The aim of this paper is to investigate techniques for improving the classification accuracy of SVM through kernel parameter optimization. The datasets are collected from different applications and have different numbers of classes and features. Analyses and comparisons of different kernel parameters were carried out on these datasets to study the effect of the number of features, the number of classes, and the kernel parameters on classification performance.
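As a minimal sketch of the kind of kernel-parameter optimization the abstract describes, the snippet below tunes an RBF-kernel SVM's C and gamma via cross-validated grid search. This is an illustration only, not the paper's method or datasets: it assumes scikit-learn and uses its bundled iris dataset as a stand-in for the multi-class datasets the authors collected.

```python
# Hypothetical example: grid search over RBF-kernel SVM parameters.
# Assumes scikit-learn; the iris dataset stands in for the paper's data.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Candidate kernel parameters: regularization C and RBF width gamma.
param_grid = {"C": [0.1, 1, 10, 100], "gamma": [0.001, 0.01, 0.1, 1]}

# 5-fold cross-validated search for the best (C, gamma) pair.
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X_train, y_train)

print("best parameters:", search.best_params_)
print("held-out accuracy:", search.score(X_test, y_test))
```

Varying the parameter grid and repeating this on datasets with different numbers of classes and features is one straightforward way to reproduce the kind of comparison the abstract reports.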