{"title":"支持向量机参数选择的主动测试","authors":"P. Miranda, R. Prudêncio","doi":"10.1109/IJCNN.2013.6706910","DOIUrl":null,"url":null,"abstract":"The Support Vector Machine algorithm is sensitive to the choice of parameter settings. If these are not set correctly, the algorithm may have a substandard performance. It has been shown that meta-learning can be used to support the selection of SVM parameters. However, it is very dependent on the quality of the dataset and the meta-features used to characterize the dataset. As alternative for this problem, a recent technique called Active Testing characterized a dataset based on the pairwise performance differences between possible solutions. This approach selects the most useful cross-validation tests. Each new cross-validation test will contribute information to a better estimate of dataset similarity, and thus better predict which algorithms are most promising on the new dataset. In this paper we propose the application of Active Testing for the SVM parameter problem. We test it on the problem of setting the RBF kernel parameters for classification problems and we compare its similarity strategy with based on data characteristics. The results showed the variants of Active Testing that rely on cross-validation tests to estimate dataset similarity provides better solutions than those that rely on data characteristics.","PeriodicalId":376975,"journal":{"name":"The 2013 International Joint Conference on Neural Networks (IJCNN)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"11","resultStr":"{\"title\":\"Active testing for SVM parameter selection\",\"authors\":\"P. Miranda, R. Prudêncio\",\"doi\":\"10.1109/IJCNN.2013.6706910\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The Support Vector Machine algorithm is sensitive to the choice of parameter settings. If these are not set correctly, the algorithm may have a substandard performance. It has been shown that meta-learning can be used to support the selection of SVM parameters. However, it is very dependent on the quality of the dataset and the meta-features used to characterize the dataset. As alternative for this problem, a recent technique called Active Testing characterized a dataset based on the pairwise performance differences between possible solutions. This approach selects the most useful cross-validation tests. Each new cross-validation test will contribute information to a better estimate of dataset similarity, and thus better predict which algorithms are most promising on the new dataset. In this paper we propose the application of Active Testing for the SVM parameter problem. We test it on the problem of setting the RBF kernel parameters for classification problems and we compare its similarity strategy with based on data characteristics. 
The results showed the variants of Active Testing that rely on cross-validation tests to estimate dataset similarity provides better solutions than those that rely on data characteristics.\",\"PeriodicalId\":376975,\"journal\":{\"name\":\"The 2013 International Joint Conference on Neural Networks (IJCNN)\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2013-08-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"11\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"The 2013 International Joint Conference on Neural Networks (IJCNN)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IJCNN.2013.6706910\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"The 2013 International Joint Conference on Neural Networks (IJCNN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IJCNN.2013.6706910","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract: The Support Vector Machine (SVM) algorithm is sensitive to the choice of its parameter settings: if they are not set correctly, the algorithm may perform poorly. It has been shown that meta-learning can be used to support the selection of SVM parameters. However, meta-learning depends heavily on the quality of the dataset and of the meta-features used to characterize it. As an alternative, a recent technique called Active Testing characterizes a dataset based on the pairwise performance differences between candidate solutions. This approach selects the most informative cross-validation tests: each new cross-validation test contributes information to a better estimate of dataset similarity, and thus to a better prediction of which algorithms are most promising on the new dataset. In this paper we propose the application of Active Testing to the SVM parameter selection problem. We evaluate it on the task of setting the RBF kernel parameters for classification problems, and we compare its similarity strategy with one based on data characteristics. The results show that the variants of Active Testing that rely on cross-validation tests to estimate dataset similarity provide better solutions than those that rely on data characteristics.
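To make the selection procedure concrete, the sketch below shows one way an Active Testing loop for RBF-kernel SVM configurations could look. It is not the authors' implementation: the (C, gamma) grid, the test budget, and the `past_perf` meta-knowledge matrix of prior cross-validation accuracies are illustrative assumptions, and dataset similarity is estimated from the agreement of pairwise wins/losses among the configurations tested so far, in the spirit described in the abstract.

```python
"""
Illustrative sketch (not the paper's code) of an Active Testing loop for
selecting RBF-kernel SVM parameters. `past_perf` is a hypothetical
meta-knowledge matrix of cross-validation accuracies of each (C, gamma)
configuration on previously seen datasets; here it is filled with random
placeholder values just so the script runs end to end.
"""
from itertools import product

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Candidate RBF-kernel configurations (C, gamma): a small grid for illustration.
GRID = list(product([0.1, 1, 10, 100], [1e-3, 1e-2, 1e-1, 1.0]))


def cv_accuracy(X, y, C, gamma):
    """Run one cross-validation test of a single configuration on the new dataset."""
    clf = make_pipeline(StandardScaler(), SVC(C=C, gamma=gamma))
    return cross_val_score(clf, X, y, cv=5).mean()


def similarity(new_scores, past_row):
    """Similarity between the new dataset and one past dataset: the fraction of
    pairwise outcomes (which of two tested configurations wins) that agree."""
    tested = sorted(new_scores)
    agree, total = 0, 0
    for a in range(len(tested)):
        for b in range(a + 1, len(tested)):
            i, j = tested[a], tested[b]
            agree += np.sign(new_scores[i] - new_scores[j]) == np.sign(past_row[i] - past_row[j])
            total += 1
    return agree / total if total else 1.0


def active_testing(X, y, past_perf, budget=6):
    """Select a (C, gamma) configuration using a limited number of CV tests."""
    new_scores = {}  # config index -> CV accuracy measured on the new dataset
    # Start from the configuration with the best mean performance on past datasets.
    best = int(np.argmax(past_perf.mean(axis=0)))
    new_scores[best] = cv_accuracy(X, y, *GRID[best])
    for _ in range(budget - 1):
        sims = np.array([similarity(new_scores, row) for row in past_perf]) + 1e-6
        sims /= sims.sum()
        # Expected gain over the current best, weighted by dataset similarity.
        gains = np.full(len(GRID), -np.inf)
        for k in range(len(GRID)):
            if k not in new_scores:
                gains[k] = np.sum(sims * np.maximum(past_perf[:, k] - past_perf[:, best], 0.0))
        candidate = int(np.argmax(gains))
        new_scores[candidate] = cv_accuracy(X, y, *GRID[candidate])
        if new_scores[candidate] > new_scores[best]:
            best = candidate
    return GRID[best], new_scores[best]


if __name__ == "__main__":
    X, y = load_breast_cancer(return_X_y=True)
    rng = np.random.default_rng(0)
    past_perf = rng.uniform(0.6, 0.95, size=(10, len(GRID)))  # placeholder meta-knowledge
    (C, gamma), acc = active_testing(X, y, past_perf, budget=6)
    print(f"Selected C={C}, gamma={gamma} with CV accuracy {acc:.3f}")
```

In the paper's setting, the meta-knowledge would come from real cross-validation results collected on prior datasets rather than random placeholders, and the exact similarity measure and candidate-selection rule may differ from this sketch.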