The convergence rate of linearly separable SMO
J. Lázaro, José R. Dorronsoro
The 2013 International Joint Conference on Neural Networks (IJCNN), August 2013
DOI: 10.1109/IJCNN.2013.6707034
It is well known that the dual function value sequence generated by SMO converges linearly when the kernel matrix is positive definite, and that sublinear convergence holds for a general kernel matrix. In this paper we prove that, when applied to hard-margin (i.e., linearly separable) SVM problems, the SMO algorithm attains a linear convergence rate without any condition on the kernel matrix. Moreover, we also show linear convergence for the multiplier sequence generated by SMO, for the corresponding weight vectors, and for the KKT gap usually applied to control the number of SMO iterations. This gives a fairly complete picture of the convergence of the various sequences that SMO generates. While linear SMO convergence for the general L1 soft-margin SVM problem is still open, the approach followed here may lead to such a general result.
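To make the objects in the abstract concrete — the multiplier sequence, the dual function values, and the KKT gap used as a stopping rule — the following is a minimal sketch (not the paper's own code) of SMO with most-violating-pair working-set selection on the hard-margin dual: minimize 0.5·aᵀQa − eᵀa subject to yᵀa = 0 and a ≥ 0, with Q_ij = y_i y_j K(x_i, x_j). The function name and the toy dataset are illustrative assumptions.

```python
import numpy as np

def smo_hard_margin(X, y, tol=1e-6, max_iter=10000):
    """Minimal SMO for the hard-margin SVM dual (linear kernel):
        min  0.5 * a^T Q a - e^T a   s.t.  y^T a = 0,  a >= 0,
    where Q_ij = y_i y_j <x_i, x_j>.  The stopping rule is the KKT gap
    m(a) - M(a) <= tol, as in the abstract."""
    n = len(y)
    K = X @ X.T                              # linear kernel matrix
    Q = (y[:, None] * y[None, :]) * K
    alpha = np.zeros(n)
    G = -np.ones(n)                          # gradient Q a - e at a = 0
    for _ in range(max_iter):
        yG = -y * G
        # feasible "up"/"down" index sets for y^T a = 0 with a >= 0
        I_up = (y > 0) | ((y < 0) & (alpha > 0))
        I_low = (y < 0) | ((y > 0) & (alpha > 0))
        i = int(np.argmax(np.where(I_up, yG, -np.inf)))
        j = int(np.argmin(np.where(I_low, yG, np.inf)))
        gap = yG[i] - yG[j]                  # KKT gap; controls termination
        if gap <= tol:
            break
        # 2-variable subproblem along direction u_i = y_i, u_j = -y_j
        a = max(Q[i, i] + Q[j, j] - 2.0 * y[i] * y[j] * Q[i, j], 1e-12)
        t = gap / a                          # unconstrained optimal step
        if y[i] < 0:                         # keep alpha_i >= 0
            t = min(t, alpha[i])
        if y[j] > 0:                         # keep alpha_j >= 0
            t = min(t, alpha[j])
        alpha[i] += t * y[i]
        alpha[j] -= t * y[j]
        G += t * (y[i] * Q[:, i] - y[j] * Q[:, j])
    yG = -y * G
    I_up = (y > 0) | ((y < 0) & (alpha > 0))
    I_low = (y < 0) | ((y > 0) & (alpha > 0))
    b = 0.5 * (np.max(np.where(I_up, yG, -np.inf)) +
               np.min(np.where(I_low, yG, np.inf)))
    return alpha, b

# Toy linearly separable problem: the hard-margin dual is feasible,
# so SMO drives the KKT gap to zero.
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
alpha, b = smo_hard_margin(X, y)
w = (alpha * y) @ X                          # primal weight vector
margins = y * (X @ w + b)                    # all >= 1 at optimality
```

The paper's result concerns how fast the dual values, the multipliers `alpha`, the weight vectors `w`, and the KKT gap in this loop converge; the sketch only illustrates what those sequences are, not the rate analysis.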