{"title":"基于MOCA的非参数单调分类","authors":"N. Barile, A. Feelders","doi":"10.1109/ICDM.2008.54","DOIUrl":null,"url":null,"abstract":"We describe a monotone classification algorithm called MOCA that attempts to minimize the mean absolute prediction error for classification problems with ordered class labels.We first find a monotone classifier with minimum L1 loss on the training sample, and then use a simple interpolation scheme to predict the class labels for attribute vectors not present in the training data.We compare MOCA to the ordinal stochastic dominance learner (OSDL), on artificial as well as real data sets. We show that MOCA often outperforms OSDL with respect to mean absolute prediction error.","PeriodicalId":252958,"journal":{"name":"2008 Eighth IEEE International Conference on Data Mining","volume":"55 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2008-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"23","resultStr":"{\"title\":\"Nonparametric Monotone Classification with MOCA\",\"authors\":\"N. Barile, A. Feelders\",\"doi\":\"10.1109/ICDM.2008.54\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We describe a monotone classification algorithm called MOCA that attempts to minimize the mean absolute prediction error for classification problems with ordered class labels.We first find a monotone classifier with minimum L1 loss on the training sample, and then use a simple interpolation scheme to predict the class labels for attribute vectors not present in the training data.We compare MOCA to the ordinal stochastic dominance learner (OSDL), on artificial as well as real data sets. 
We show that MOCA often outperforms OSDL with respect to mean absolute prediction error.\",\"PeriodicalId\":252958,\"journal\":{\"name\":\"2008 Eighth IEEE International Conference on Data Mining\",\"volume\":\"55 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2008-12-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"23\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2008 Eighth IEEE International Conference on Data Mining\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICDM.2008.54\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2008 Eighth IEEE International Conference on Data Mining","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICDM.2008.54","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
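The prediction step the abstract describes can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the authors' MOCA code: it assumes the training labels have already been made monotone (the L1-minimizing relabelling step is omitted), and it uses the simplest possible interpolation rule (the midpoint of the feasible label interval) where the paper's actual scheme may differ.

```python
def dominates(a, b):
    """True if attribute vector a >= b componentwise (the usual partial order)."""
    return all(ai >= bi for ai, bi in zip(a, b))

def predict(x, train):
    """Predict an ordered class label for x from a monotone training set.

    train: list of (attribute_vector, label) pairs whose labels are assumed
    to already be monotone with respect to the componentwise order.
    Any label in [lo, hi] keeps the extended classifier monotone; taking
    the midpoint is one simple interpolation choice (an assumption here,
    not necessarily MOCA's rule).
    """
    labels = [y for _, y in train]
    # Largest label among training points that x dominates (lower bound).
    lo = max((y for z, y in train if dominates(x, z)), default=min(labels))
    # Smallest label among training points that dominate x (upper bound).
    hi = min((y for z, y in train if dominates(z, x)), default=max(labels))
    return (lo + hi) // 2

# Toy usage: two monotone training points with labels 0 and 2.
train = [((0, 0), 0), ((1, 1), 2)]
print(predict((1, 0), train))  # incomparable region: interpolates between 0 and 2
```

Because any label inside the interval [lo, hi] preserves monotonicity, the interpolation rule is a free design choice; the bounds themselves are forced by the monotone training labels.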