{"title":"A note on the classification error of an SVM in one dimension","authors":"T. Cooke","doi":"10.1109/IDC.2002.995376","DOIUrl":null,"url":null,"abstract":"There are many algorithms available for detecting noise corrupted signals in background clutter. In cases where the exact statistics of the noise and clutter are unknown, the optimal detector may be estimated from a set of samples of each. One method for doing this is the support vector machine (SVM), which has a detection performance that is dependent on some regularisation parameter C, and cannot be determined a-priori. The standard method of choosing C is by trying values and choosing the one which minimises the detection error on a cross-validation set. It is often assumed that as the size of the training set increases, the resulting discriminant will give the best possible detection rate on an independent test set. This paper investigates two simple 1D examples: uniform and normal distributions. An example is provided where the optimum detection rate cannot be achieved by an SVM regardless of the C chosen value.","PeriodicalId":385351,"journal":{"name":"Final Program and Abstracts on Information, Decision and Control","volume":"36 11 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2002-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Final Program and Abstracts on Information, Decision and Control","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IDC.2002.995376","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
There are many algorithms available for detecting noise-corrupted signals in background clutter. When the exact statistics of the noise and clutter are unknown, the optimal detector may be estimated from a set of samples of each. One method for doing this is the support vector machine (SVM), whose detection performance depends on a regularisation parameter C that cannot be determined a priori. The standard method of choosing C is to try a range of values and select the one that minimises the detection error on a cross-validation set. It is often assumed that, as the size of the training set increases, the resulting discriminant will give the best possible detection rate on an independent test set. This paper investigates two simple 1D examples, using uniform and normal distributions, and provides an example in which an SVM cannot achieve the optimum detection rate regardless of the value chosen for C.
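As a minimal sketch of the selection procedure the abstract describes, the following illustrates choosing C by cross-validation for a 1D linear SVM and comparing the resulting test error with the Bayes error. This is not the paper's code: it assumes scikit-learn and SciPy (which postdate the paper), equal class priors, and two unit-variance normal classes chosen for illustration.

```python
# Sketch: pick C by cross-validation for a 1D SVM and compare with the
# Bayes-optimal detection rate (illustrative assumptions, not the paper's setup).
import numpy as np
from scipy.stats import norm
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def sample(n):
    """n samples per class: N(0,1) 'clutter' vs N(2,1) 'signal in clutter'."""
    x = np.concatenate([rng.normal(0.0, 1.0, n), rng.normal(2.0, 1.0, n)])
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return x.reshape(-1, 1), y

x_train, y_train = sample(200)
x_val, y_val = sample(200)
x_test, y_test = sample(5000)

# Standard method: sweep C and keep the value that minimises the
# detection error on the cross-validation set.
best_c, best_err = None, np.inf
for c in np.logspace(-3, 3, 13):
    clf = SVC(kernel="linear", C=c).fit(x_train, y_train)
    err = np.mean(clf.predict(x_val) != y_val)
    if err < best_err:
        best_c, best_err = c, err

clf = SVC(kernel="linear", C=best_c).fit(x_train, y_train)
test_err = np.mean(clf.predict(x_test) != y_test)

# With equal priors and equal variances, the Bayes-optimal threshold is
# the midpoint of the means (x = 1), so the Bayes error is known exactly.
bayes_err = norm.cdf(-1.0)  # P(N(0,1) > 1) = P(N(2,1) < 1)

print(f"best C = {best_c:.3g}, CV error = {best_err:.3f}")
print(f"test error = {test_err:.3f}, Bayes error = {bayes_err:.3f}")
```

In this symmetric normal example the SVM threshold can approach the Bayes-optimal one; the paper's point is that examples exist where no choice of C closes the gap to the optimum detection rate.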