{"title":"Soft Upper-bound Minimal Complexity LP SVMs","authors":"S. Abe","doi":"10.1109/IJCNN52387.2021.9533540","journal":"2021 International Joint Conference on Neural Networks (IJCNN)","publicationDate":"2021-07-18","citationCount":"1","abstract":"The minimal complexity linear programming support vector machine (MLP SVM) was proposed to solve the problem of the unbounded, non-unique solutions of the minimal complexity machine (MCM). The MLP SVM maximizes the minimum margin while also minimizing the maximum margin, i.e., the maximum distance between the training data and the separating hyperplane. Consequently, its generalization ability may deteriorate when outliers are included, because they affect the slope and location of the separating hyperplane. To solve this problem, in this paper we propose the soft upper-bound MLP SVM (SLP SVM), in which the influence of outliers on the hyperplane is suppressed by introducing slack variables. This, however, increases the number of hyperparameters, so we discuss how to reduce that number to speed up model selection. Through computer experiments on two-class and multiclass problems, we compare the generalization ability and training time of the SLP SVM with those of the MLP SVM, the MCM, and other SVM-based classifiers."}
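The abstract's exact SLP SVM formulation (with its soft upper bound on the maximum margin) is not given above, but the core mechanism it builds on, an L1-norm LP SVM whose slack variables and penalty parameter C absorb outliers instead of letting them tilt the separating hyperplane, can be sketched as a small linear program. Everything below (variable layout, toy data, the helper name `lp_svm_fit`) is an illustrative assumption, not the paper's method:

```python
# A minimal sketch of a standard soft-margin L1-norm LP SVM solved with
# scipy.optimize.linprog.  NOTE: this is NOT the paper's SLP SVM; it only
# illustrates the generic LP SVM idea of suppressing outliers with slack
# variables xi_i penalized by a hyperparameter C.
import numpy as np
from scipy.optimize import linprog

def lp_svm_fit(X, y, C=1.0):
    """Solve  min sum_j |w_j| + C * sum_i xi_i
       s.t.   y_i (w . x_i + b) >= 1 - xi_i,  xi_i >= 0.
    |w_j| is linearized as w_j = wp_j - wm_j with wp_j, wm_j >= 0."""
    n, d = X.shape
    # variable layout: [wp (d), wm (d), bp, bm, xi (n)], all >= 0 by default
    c = np.concatenate([np.ones(2 * d), [0.0, 0.0], C * np.ones(n)])
    Y = y[:, None] * X                              # rows y_i * x_i, shape (n, d)
    # margin constraints rewritten as A_ub z <= b_ub:
    #   -y_i (w . x_i + b) - xi_i <= -1
    A_ub = np.hstack([-Y, Y, -y[:, None], y[:, None], -np.eye(n)])
    b_ub = -np.ones(n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")
    z = res.x
    w = z[:d] - z[d:2 * d]
    b = z[2 * d] - z[2 * d + 1]
    return w, b

# toy two-class data; a large C keeps slacks near zero on separable data
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -3.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w, b = lp_svm_fit(X, y, C=10.0)
pred = np.sign(X @ w + b)
```

Because the objective and all constraints are linear, the problem can be handed to any LP solver; shrinking C softens the margin constraints so that a few outlying points can violate them at bounded cost rather than dictating the hyperplane.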