{"title":"半监督支持向量机的稀疏正则化路径","authors":"G. Gasso, Karina Zapien Arreola, S. Canu","doi":"10.1109/ICMLA.2007.81","DOIUrl":null,"url":null,"abstract":"Using unlabeled data to unravel the structure of the data to leverage the learning process is the goal of semi supervised learning. A common way to represent this underlying structure is to use graphs. Flexibility of the maximum margin kernel framework allows to model graph smoothness and to build kernel machine for semi supervised learning such as Laplacian SVM [1]. But a common complaint of the practitioner is the long running time of these kernel algorithms for classification of new points. We provide an efficient way of alleviating this problem by using a LI penalization term and a regularization path algorithm to efficiently compute the solution. Empirical evidence shows the benefit of the algorithm.","PeriodicalId":448863,"journal":{"name":"Sixth International Conference on Machine Learning and Applications (ICMLA 2007)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2007-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"Sparsity regularization path for semi-supervised SVM\",\"authors\":\"G. Gasso, Karina Zapien Arreola, S. Canu\",\"doi\":\"10.1109/ICMLA.2007.81\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Using unlabeled data to unravel the structure of the data to leverage the learning process is the goal of semi supervised learning. A common way to represent this underlying structure is to use graphs. Flexibility of the maximum margin kernel framework allows to model graph smoothness and to build kernel machine for semi supervised learning such as Laplacian SVM [1]. But a common complaint of the practitioner is the long running time of these kernel algorithms for classification of new points. We provide an efficient way of alleviating this problem by using a LI penalization term and a regularization path algorithm to efficiently compute the solution. Empirical evidence shows the benefit of the algorithm.\",\"PeriodicalId\":448863,\"journal\":{\"name\":\"Sixth International Conference on Machine Learning and Applications (ICMLA 2007)\",\"volume\":\"8 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2007-12-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Sixth International Conference on Machine Learning and Applications (ICMLA 2007)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICMLA.2007.81\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Sixth International Conference on Machine Learning and Applications (ICMLA 2007)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICMLA.2007.81","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Sparsity regularization path for semi-supervised SVM
The goal of semi-supervised learning is to use unlabeled data to uncover the structure of the data and thereby leverage the learning process. A common way to represent this underlying structure is with a graph. The flexibility of the maximum-margin kernel framework makes it possible to model graph smoothness and to build kernel machines for semi-supervised learning, such as the Laplacian SVM [1]. A common complaint from practitioners, however, is the long running time of these kernel algorithms when classifying new points. We provide an efficient way of alleviating this problem by using an L1 penalization term and a regularization path algorithm to compute the solution efficiently. Empirical evidence shows the benefit of the algorithm.
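For intuition, here is a minimal sketch of the idea the abstract describes, not the authors' exact algorithm: a kernel classifier regularized by a graph Laplacian built over labeled and unlabeled points, with an L1 penalty on the kernel expansion coefficients, and an approximate regularization path obtained by warm-starting over a decreasing grid of penalty values. The objective form, the function names (rbf_kernel, knn_laplacian, l1_laplacian_path), all hyper-parameters, and the proximal-subgradient solver are illustrative assumptions; the paper's path algorithm follows the solution path exactly rather than over a grid.

```python
# A hedged, self-contained sketch of an L1-penalized, Laplacian-regularized
# kernel classifier whose (approximate) regularization path is traced by
# warm-started proximal subgradient steps over a decreasing lambda grid.
import numpy as np


def rbf_kernel(X, Z, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)


def knn_laplacian(X, k=5):
    """Unnormalized graph Laplacian of a symmetrized k-nearest-neighbour graph."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        neighbours = np.argsort(d2[i])[1:k + 1]   # skip the point itself
        W[i, neighbours] = 1.0
    W = np.maximum(W, W.T)                        # symmetrize the adjacency
    return np.diag(W.sum(axis=1)) - W


def l1_laplacian_path(X, y_labeled, n_labeled, lambdas,
                      gamma_I=1e-2, gamma_k=1.0, lr=1e-2, n_iter=500):
    """Approximate L1 regularization path: one coefficient vector per lambda.

    X          : all points, labeled rows first (n_labeled of them), unlabeled after.
    y_labeled  : +/-1 labels for the first n_labeled rows.
    lambdas    : decreasing grid of L1 penalties (the discretized "path").
    """
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma_k)                 # kernel over all points
    L = knn_laplacian(X)                          # graph encoding the data structure
    M = gamma_I * K @ L @ K                       # quadratic graph-smoothness term
    K_lab = K[:n_labeled]
    beta = np.zeros(n)
    path = []
    for lam in lambdas:                           # warm start from the previous lambda
        for _ in range(n_iter):
            margin = y_labeled * (K_lab @ beta)   # y_i * f(x_i) on labeled points
            active = margin < 1.0                 # points inside the margin
            # subgradient of the mean hinge loss w.r.t. beta
            g_hinge = -(K_lab[active] * y_labeled[active, None]).sum(axis=0) / n_labeled
            g = g_hinge + 2.0 * (M @ beta)        # add the smoothness gradient
            beta = beta - lr * g
            # proximal step: soft-thresholding enforces sparsity of beta
            beta = np.sign(beta) * np.maximum(np.abs(beta) - lr * lam, 0.0)
        path.append(beta.copy())
    return K, path


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X_lab = np.vstack([rng.normal(-2, 1, (10, 2)), rng.normal(2, 1, (10, 2))])
    y_lab = np.r_[np.ones(10), -np.ones(10)]
    X_unl = rng.normal(0, 2, (40, 2))             # unlabeled points reveal structure
    X_all = np.vstack([X_lab, X_unl])
    K, path = l1_laplacian_path(X_all, y_lab, 20, lambdas=np.logspace(0, -3, 8))
    # as the penalty decreases, more expansion coefficients become non-zero
    print([int((np.abs(b) > 1e-8).sum()) for b in path])
```

The point of the sparsity is the prediction cost: because the L1 penalty drives many expansion coefficients to exactly zero, classifying a new point only requires kernel evaluations against the retained points, which is what shortens the running time complained about above.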