{"title":"多类数据分段线性分类器的自动设计","authors":"Youngtae Park, J. Sklansky","doi":"10.1109/ICPR.1988.28443","DOIUrl":null,"url":null,"abstract":"A method for designing multiple-class piecewise-linear classifiers is described. It involves the cutting or arcs joining pairs of opposed points in d-dimensional space. Such arcs are referred to as links. It is shown how to nearly minimize the number of hyperplanes required to cut all of these links, thereby yielding a near-Bayes-optimal decision surface regardless of the number of classes. The underlying theory is described. This method does not require parameters to be specified by users. Experiments on multiple-class data obtained from ship images show that classifiers designed by this method yield approximately the same error rate as the best k-nearest-neighbor rule, while possessing greater computational efficiency of classification.<<ETX>>","PeriodicalId":314236,"journal":{"name":"[1988 Proceedings] 9th International Conference on Pattern Recognition","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1988-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Automated design of piecewise-linear classifiers of multiple-class data\",\"authors\":\"Youngtae Park, J. Sklansky\",\"doi\":\"10.1109/ICPR.1988.28443\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"A method for designing multiple-class piecewise-linear classifiers is described. It involves the cutting or arcs joining pairs of opposed points in d-dimensional space. Such arcs are referred to as links. It is shown how to nearly minimize the number of hyperplanes required to cut all of these links, thereby yielding a near-Bayes-optimal decision surface regardless of the number of classes. The underlying theory is described. This method does not require parameters to be specified by users. Experiments on multiple-class data obtained from ship images show that classifiers designed by this method yield approximately the same error rate as the best k-nearest-neighbor rule, while possessing greater computational efficiency of classification.<<ETX>>\",\"PeriodicalId\":314236,\"journal\":{\"name\":\"[1988 Proceedings] 9th International Conference on Pattern Recognition\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1988-11-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"[1988 Proceedings] 9th International Conference on Pattern Recognition\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICPR.1988.28443\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"[1988 Proceedings] 9th International Conference on Pattern Recognition","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICPR.1988.28443","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Automated design of piecewise-linear classifiers of multiple-class data
A method for designing multiple-class piecewise-linear classifiers is described. It involves the cutting of arcs joining pairs of opposed points in d-dimensional space. Such arcs are referred to as links. It is shown how to nearly minimize the number of hyperplanes required to cut all of these links, thereby yielding a near-Bayes-optimal decision surface regardless of the number of classes. The underlying theory is described. The method requires no user-specified parameters. Experiments on multiple-class data obtained from ship images show that classifiers designed by this method yield approximately the same error rate as the best k-nearest-neighbor rule, while classifying with greater computational efficiency.
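The abstract only sketches the method, so the following is an illustrative assumption rather than the authors' actual construction: a minimal Python sketch of the link-cutting idea, where a "link" is taken to be any pair of opposite-class points, each candidate hyperplane is the perpendicular bisector of some link, and a greedy set-cover heuristic approximately minimizes the number of hyperplanes needed to cut every link. The paper's real definition of links and its near-minimization procedure may differ in detail.

```python
import numpy as np

def opposed_links(X, y):
    """Pairs of points with different class labels ("links").
    Assumption: the paper may restrict links to specific arcs; here we
    naively enumerate all opposite-class pairs for illustration."""
    return [(i, j)
            for i in range(len(X))
            for j in range(i + 1, len(X))
            if y[i] != y[j]]

def bisector(p, q):
    """Perpendicular bisector of segment pq, as (w, b) with w.x = b."""
    w = q - p
    b = w @ (p + q) / 2.0
    return w, b

def cuts(w, b, p, q):
    """True if the hyperplane w.x = b strictly separates p and q."""
    return (w @ p - b) * (w @ q - b) < 0

def greedy_cut_all_links(X, y):
    """Greedy set-cover heuristic: repeatedly pick the candidate hyperplane
    that cuts the most remaining links, until every link is cut.  Each
    link's own bisector cuts it, so the loop always makes progress."""
    links = opposed_links(X, y)
    candidates = [bisector(X[i], X[j]) for i, j in links]
    chosen, remaining = [], set(range(len(links)))
    while remaining:
        best, best_cut = None, set()
        for w, b in candidates:
            cut = {k for k in remaining
                   if cuts(w, b, X[links[k][0]], X[links[k][1]])}
            if len(cut) > len(best_cut):
                best, best_cut = (w, b), cut
        chosen.append(best)
        remaining -= best_cut
    return chosen

# Toy usage: three Gaussian blobs in the plane, one per class.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2)) + np.repeat(np.array([[0., 0.], [3., 0.], [0., 3.]]), 10, axis=0)
y = np.repeat([0, 1, 2], 10)
planes = greedy_cut_all_links(X, y)
print(len(planes), "hyperplanes cut all opposed links")
```

Greedy selection is the standard approximation for this kind of covering problem (minimum-cardinality set cover is NP-hard), which is consistent with the abstract's claim of *nearly* minimizing the number of hyperplanes; once all links are cut, the chosen hyperplanes partition the space into cells containing points of a single class, from which a piecewise-linear decision surface follows.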