Two-Layer SVM, Towards Deep Statistical Learning
Alireza Kazemi, R. Boostani, Mahmoud Odeh, M. Al-Mousa
2022 International Engineering Conference on Electrical, Energy, and Artificial Intelligence (EICEEAI), published 2022-11-06
DOI: 10.1109/EICEEAI56378.2022.10050469 (https://doi.org/10.1109/EICEEAI56378.2022.10050469)
Citations: 0
Abstract
The Support Vector Machine (SVM) is originally a binary large-margin classifier that emerged from the concept of structural risk minimization. Multiple solutions, such as one-versus-one and one-versus-all, have been proposed for building multi-class SVMs from elementary binary SVMs. Multiple solutions have also been proposed for SVM model selection, i.e., adjusting the margin parameter C and the Gaussian kernel variance. Here, an improved classifier named SVM-SVM is proposed for multi-class problems; it increases accuracy and reduces dependence on margin-parameter selection. SVM-SVM adopts two K-class one-vs-one SVMs in a cascaded two-layer structure. In the first layer, input features are fed to a one-vs-one SVM with non-linear kernels. We introduce this layer as a large-margin non-linear feature transform that maps the input feature space to a discriminative K(K-1)/2-dimensional space. To assess our hierarchical classifier, several datasets from the UCI repository are evaluated. A standard one-vs-one SVM and a one-vs-one fuzzy SVM are used as reference classifiers in the experiments. Results show significant improvements of the proposed method in terms of test accuracy and robustness to the model (margin and kernel) parameters in comparison with the reference classifiers. Our observations suggest that multi-layer (deep) SVM structures can gain benefits similar to those seen in deep neural networks (DNNs).
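The cascaded structure described in the abstract can be sketched as follows. This is a minimal illustration using scikit-learn, which the paper does not specify; the dataset (Iris), kernel settings, and variable names are assumptions, not the authors' exact configuration. The key idea shown is that the first one-vs-one SVM's pairwise decision values act as a K(K-1)/2-dimensional feature transform, on which a second one-vs-one SVM is trained.

```python
# Hedged sketch of a two-layer SVM-SVM cascade (assumed scikit-learn setup).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Example data: Iris has K = 3 classes, so K*(K-1)/2 = 3 pairwise outputs.
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Layer 1: one-vs-one SVM with a non-linear (RBF) kernel, used as a
# large-margin feature transform rather than as the final classifier.
layer1 = SVC(kernel="rbf", decision_function_shape="ovo").fit(X_tr, y_tr)
Z_tr = layer1.decision_function(X_tr)  # shape (n_samples, K*(K-1)/2)
Z_te = layer1.decision_function(X_te)

# Layer 2: a second one-vs-one SVM trained on the transformed features.
layer2 = SVC(kernel="rbf", decision_function_shape="ovo").fit(Z_tr, y_tr)
acc = layer2.score(Z_te, y_te)
```

In this sketch both layers share the same kernel family for simplicity; the paper's point is that the cascade reduces sensitivity to the margin and kernel parameters chosen for either layer.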