Geometric SVM: a fast and intuitive SVM algorithm
Authors: S. Vishwanathan, M. Murty
DOI: 10.1109/ICPR.2002.1048235
Published: 2002-12-10, in "Object recognition supported by user interaction for service robots" (ICPR 2002)
Citation count: 13

Abstract: We present a geometrically motivated algorithm for finding the Support Vectors of a given set of points. This algorithm is reminiscent of the DirectSVM algorithm in the way it picks data points for inclusion in the Support Vector set, but it uses an optimization-based approach to add them to the Support Vector set. This ensures that the algorithm scales as O(n^3) in the worst case and O(n|S|^2) in the average case, where n is the total number of points in the data set and |S| is the number of Support Vectors. Further, the memory requirements scale as O(n^2) in the worst case and O(|S|^2) in the average case. The advantage of this algorithm is that it is more intuitive and performs extremely well when the number of Support Vectors is only a small fraction of the entire data set. It can also be used to calculate the leave-one-out error based on the order in which data points were added to the Support Vector set. We also present results on real-life data sets to validate our claims.
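The abstract gives no pseudocode, but the incremental scheme it describes (maintain a small Support Vector set, solve an optimization problem restricted to that set, and pull in further data points one at a time) can be sketched as below. This is an illustrative assumption, not the authors' exact method: the function names, the DirectSVM-style seeding with the closest opposite-class pair, and the use of linear dual coordinate descent as the inner optimizer are all choices made here for concreteness.

```python
import numpy as np

def train_dual_cd(X, y, C=1.0, epochs=200):
    """Inner optimizer: dual coordinate descent for a linear soft-margin
    SVM on the current working set. A stand-in for the paper's
    optimization step, whose exact formulation differs."""
    n, d = X.shape
    alpha = np.zeros(n)
    w = np.zeros(d)
    Q = (X * X).sum(axis=1)  # diagonal of the Gram matrix
    for _ in range(epochs):
        for i in range(n):
            g = y[i] * (w @ X[i]) - 1.0          # dual gradient wrt alpha_i
            new = min(max(alpha[i] - g / Q[i], 0.0), C)
            w += (new - alpha[i]) * y[i] * X[i]  # keep w consistent with alpha
            alpha[i] = new
    return w, alpha

def greedy_geometric_svm(X, y, C=1.0, tol=1e-3):
    """Hypothetical sketch of the incremental scheme: retrain only on the
    working set S, then add the worst margin violator from the full data
    set until no point violates the margin."""
    # Seed S with the closest pair of opposite-class points
    # (a geometric, DirectSVM-flavoured starting choice).
    pos, neg = np.where(y > 0)[0], np.where(y < 0)[0]
    dists = ((X[pos][:, None, :] - X[neg][None, :, :]) ** 2).sum(-1)
    i, j = np.unravel_index(dists.argmin(), dists.shape)
    S = [int(pos[i]), int(neg[j])]
    while True:
        w, _ = train_dual_cd(X[S], y[S], C)
        margins = y * (X @ w)          # functional margin of every point
        k = int(margins.argmin())      # worst violator over the whole set
        if margins[k] >= 1.0 - tol or k in S:
            return w, S
        S.append(k)                    # grow the Support Vector set
```

Because each inner solve involves only |S| points, the cost per added point is governed by |S| rather than n, which is where the claimed O(n|S|^2) average-case behaviour would come from; the insertion order recorded in S is also what the abstract's leave-one-out estimate is based on.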