{"title":"Learning Sparse Support Vector Machine with Relaxation and Rounding","authors":"Xiangyu Tian, Shizhong Liao","doi":"10.1109/ICTAI.2019.00139","DOIUrl":null,"url":null,"abstract":"A sparse representation of Support Vector Machines (sparse SVMs) is desirable for many applications. However, for large-scale problems with high-dimensional features solving sparse SVMs remains a challenging problem, and most of the existing work are heuristic in that there are no performance guarantees and can't effectively control the trade-off between the sparsity and the accuracy of the decision hyperplane. To address this issue, we propose a new method for via relaxation and rounding, which obtains (ε,δ-approximate solution in Õ(n/εδ) time with probability at least 1-δ. Such regularization explicitly penalizes parameters different from zero with no further restrictions. We first show that learning sparse SVMs with ℓ_0 norm can be reformulated as an exactly Boolean program by introducing Boolean variables to each parameter. With dual and Boolean relaxation, this Boolean problem can be relaxed as a convex programming. For the ε-approximate solution of this convex programming, we get a feasible solution of the original problem without loss accuracy by a determined rounding. We analyze the proposed method in details and give a provable guarantee which is missing from the previous work. Experimental results on both synthetic data and real world data support our theoretical results and verify the validity of the proposed method.","PeriodicalId":346657,"journal":{"name":"2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICTAI.2019.00139","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
A sparse representation of support vector machines (sparse SVMs) is desirable for many applications. However, for large-scale problems with high-dimensional features, solving sparse SVMs remains challenging, and most existing methods are heuristic: they offer no performance guarantees and cannot effectively control the trade-off between the sparsity and the accuracy of the decision hyperplane. To address this issue, we propose a new method for learning sparse SVMs via relaxation and rounding, which obtains an (ε, δ)-approximate solution in Õ(n/(εδ)) time with probability at least 1 − δ. The ℓ_0-norm regularization explicitly penalizes nonzero parameters, with no further restrictions. We first show that learning sparse SVMs with the ℓ_0 norm can be reformulated exactly as a Boolean program by introducing a Boolean variable for each parameter. With dual and Boolean relaxation, this Boolean program can be relaxed to a convex program. From an ε-approximate solution of this convex program, a deterministic rounding step yields a feasible solution of the original problem without loss of accuracy. We analyze the proposed method in detail and give a provable guarantee that is missing from previous work. Experimental results on both synthetic and real-world data support our theoretical results and verify the validity of the proposed method.
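
The abstract does not state the optimization problem explicitly. A plausible sketch of the ℓ_0-regularized SVM and its exact Boolean reformulation, with the notation (hinge loss, penalty weight C, indicator vector δ) assumed rather than taken from the paper, is:

```latex
% Sparse SVM with an explicit l0 penalty (notation assumed):
\begin{align}
  \min_{w \in \mathbb{R}^d} \;&
    \frac{1}{n}\sum_{i=1}^{n} \max\!\bigl(0,\; 1 - y_i\, w^{\top} x_i\bigr)
    \;+\; C\,\lVert w \rVert_0 \\
% Exact Boolean reformulation: one indicator per parameter,
% with delta_j = 0 forcing coordinate j out of the model:
  \min_{\delta \in \{0,1\}^d,\; w \in \mathbb{R}^d} \;&
    \frac{1}{n}\sum_{i=1}^{n} \max\!\bigl(0,\; 1 - y_i\,(\delta \odot w)^{\top} x_i\bigr)
    \;+\; C \sum_{j=1}^{d} \delta_j
\end{align}
% The two problems agree because, for any w, setting
% delta_j = 1 exactly when w_j != 0 recovers the l0 objective.
% Relaxing delta from {0,1}^d to [0,1]^d (together with dualizing the
% inner problem in w) gives the convex program that is solved
% approximately and then rounded.
```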
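To make the relax-then-round pipeline concrete, below is a minimal, self-contained Python/NumPy sketch. It is not the paper's algorithm: the relaxation is solved with plain projected subgradient descent on a bilinear surrogate rather than the paper's dual Boolean convex relaxation, the threshold rounding rule and all hyperparameters (lr, epochs, tau, C) are assumptions, and no (ε, δ) guarantee is claimed for this sketch.

```python
import numpy as np

def solve_relaxation(X, y, C, lr=0.05, epochs=500):
    """Approximately solve a continuous relaxation in which the Boolean
    indicators delta live in [0,1]^d, via projected subgradient descent.
    A stand-in for the paper's dual/Boolean convex relaxation."""
    n, d = X.shape
    w = np.zeros(d)
    delta = np.full(d, 0.5)               # fractional indicators
    for _ in range(epochs):
        margins = y * (X @ (delta * w))
        active = margins < 1.0             # points violating the margin
        g = np.zeros(d)
        if active.any():
            # subgradient of the mean hinge loss wrt (delta * w)
            g = -(y[active, None] * X[active]).sum(axis=0) / n
        w -= lr * (g * delta)              # chain rule: d/dw = g * delta
        delta -= lr * (g * w + C)          # C penalizes each open coordinate
        delta = np.clip(delta, 0.0, 1.0)   # project back onto [0,1]^d
    return w, delta

def round_support(delta, tau=0.5):
    """Deterministic rounding: keep coordinates whose relaxed indicator
    clears a threshold tau (the paper's rounding rule may differ)."""
    return delta >= tau

def refit_on_support(X, y, support, lr=0.05, epochs=500):
    """Refit a plain linear SVM (subgradient descent on the hinge loss)
    restricted to the rounded support, yielding a feasible sparse model."""
    Xs = X[:, support]
    n = Xs.shape[0]
    w = np.zeros(Xs.shape[1])
    for _ in range(epochs):
        active = y * (Xs @ w) < 1.0
        if active.any():
            w -= lr * (-(y[active, None] * Xs[active]).sum(axis=0) / n)
    w_full = np.zeros(X.shape[1])
    w_full[support] = w
    return w_full

# Toy usage: 5 informative features out of 50.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
w_true = np.zeros(50); w_true[:5] = 1.0
y = np.sign(X @ w_true + 0.1 * rng.normal(size=200))
w_rel, delta = solve_relaxation(X, y, C=0.01)
support = round_support(delta)
w_sparse = refit_on_support(X, y, support)
print("selected coordinates:", np.flatnonzero(support))
```

In the paper, the relaxation is convex (obtained via the dual) and the rounding step is analyzed, which is what yields the (ε, δ)-approximation in Õ(n/(εδ)) time; this sketch only mirrors the overall relax-then-round structure.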