Ryoma Tani, Hideyuki Watanabe, S. Katagiri, M. Ohsaki
DOI: 10.1109/MLSP.2017.8168184
Published in: 2017 IEEE 27th International Workshop on Machine Learning for Signal Processing (MLSP), September 2017, pp. 1-6
Compact kernel classifiers trained with minimum classification error criterion
Unlike Support Vector Machine (SVM), Kernel Minimum Classification Error (KMCE) training frees kernels from training samples and jointly optimizes weights and kernel locations. Focusing on this feature of KMCE training, we propose a new method for developing compact (small scale but highly accurate) kernel classifiers by applying KMCE training to support vectors (SVs) that are selected (based on the weight vector norm) from the original SVs produced by the Multi-class SVM (MSVM). We evaluate our proposed method in four classification tasks and clearly demonstrate its effectiveness: only a 3% drop in classification accuracy (from 99.1 to 89.1%) with just 10% of the original SVs. In addition, we mathematically reveal that the value of MSVM's kernel weight indicates the geometric relation between a training sample and margin boundaries.
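The selection step described in the abstract, picking a small subset of a multi-class SVM's support vectors by the norm of their dual weight vectors, can be sketched as follows. This is a minimal illustration using scikit-learn's one-vs-one `SVC` as the MSVM and a 10% retention ratio matching the compression reported above; the subsequent KMCE joint optimization of weights and kernel locations is not reproduced here.

```python
# Sketch: prune a multi-class SVM's support vectors by dual-weight norm.
# Assumes scikit-learn's SVC stands in for the paper's MSVM; the KMCE
# retraining stage that follows selection is omitted.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
svm = SVC(kernel="rbf", gamma="scale").fit(X, y)

# dual_coef_ has shape (n_classes - 1, n_SV): each column holds one support
# vector's dual weights across the one-vs-one subproblems.
norms = np.linalg.norm(svm.dual_coef_, axis=0)

# Keep the 10% of support vectors with the largest weight-vector norms,
# mirroring the 10% compression ratio quoted in the abstract.
keep = max(1, int(0.1 * len(norms)))
selected = np.argsort(norms)[::-1][:keep]
compact_svs = svm.support_vectors_[selected]

print(f"original SVs: {len(norms)}, kept: {compact_svs.shape[0]}")
```

In the paper's method, the retained kernels would then be detached from these sample locations and jointly re-optimized (positions and weights) under the minimum classification error criterion.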