{"title":"标签噪声下多标签分类器的鲁棒学习","authors":"Himanshu Kumar, Naresh Manwani, P. Sastry","doi":"10.1145/3371158.3371169","DOIUrl":null,"url":null,"abstract":"In this paper, we address the problem of robust learning of multi-label classifiers when the training data has label noise. We consider learning algorithms in the risk-minimization framework. We define what we call symmetric label noise in multi-label settings which is a useful noise model for many random errors in the labeling of data. We prove that risk minimization is robust to symmetric label noise if the loss function satisfies some conditions. We show that Hamming loss and a surrogate of Hamming loss satisfy these sufficient conditions and hence are robust. By learning feedforward neural networks on some benchmark multi-label datasets, we provide empirical evidence to illustrate our theoretical results on the robust learning of multi-label classifiers under label noise.","PeriodicalId":360747,"journal":{"name":"Proceedings of the 7th ACM IKDD CoDS and 25th COMAD","volume":"27 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"Robust Learning of Multi-Label Classifiers under Label Noise\",\"authors\":\"Himanshu Kumar, Naresh Manwani, P. Sastry\",\"doi\":\"10.1145/3371158.3371169\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper, we address the problem of robust learning of multi-label classifiers when the training data has label noise. We consider learning algorithms in the risk-minimization framework. We define what we call symmetric label noise in multi-label settings which is a useful noise model for many random errors in the labeling of data. We prove that risk minimization is robust to symmetric label noise if the loss function satisfies some conditions. 
We show that Hamming loss and a surrogate of Hamming loss satisfy these sufficient conditions and hence are robust. By learning feedforward neural networks on some benchmark multi-label datasets, we provide empirical evidence to illustrate our theoretical results on the robust learning of multi-label classifiers under label noise.\",\"PeriodicalId\":360747,\"journal\":{\"name\":\"Proceedings of the 7th ACM IKDD CoDS and 25th COMAD\",\"volume\":\"27 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-01-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 7th ACM IKDD CoDS and 25th COMAD\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3371158.3371169\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 7th ACM IKDD CoDS and 25th COMAD","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3371158.3371169","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Robust Learning of Multi-Label Classifiers under Label Noise
In this paper, we address the problem of robust learning of multi-label classifiers when the training data has label noise. We consider learning algorithms in the risk-minimization framework. We define symmetric label noise in the multi-label setting, a useful noise model for many kinds of random errors in data labeling. We prove that risk minimization is robust to symmetric label noise if the loss function satisfies certain sufficient conditions. We show that Hamming loss, and a surrogate of Hamming loss, satisfy these sufficient conditions and hence are robust. By training feedforward neural networks on benchmark multi-label datasets, we provide empirical evidence illustrating our theoretical results on the robust learning of multi-label classifiers under label noise.
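To make the two central notions in the abstract concrete, the sketch below shows Hamming loss (the fraction of label positions where prediction and ground truth disagree) and one plausible reading of symmetric label noise in the multi-label setting, namely flipping each label bit independently with a fixed probability `eta`. The function names and the exact noise model are illustrative assumptions, not the paper's formal definitions.

```python
import numpy as np

def hamming_loss(y_true, y_pred):
    """Fraction of label positions where truth and prediction disagree."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    return float(np.mean(y_true != y_pred))

def add_symmetric_label_noise(Y, eta, rng):
    """Flip each binary label independently with probability eta.

    This is one natural instantiation of symmetric label noise for
    multi-label data; the paper's formal definition may differ.
    """
    Y = np.asarray(Y)
    flips = rng.random(Y.shape) < eta
    return np.where(flips, 1 - Y, Y)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy multi-label matrix: 4 samples, 5 binary labels each.
    Y = np.array([[1, 0, 1, 0, 0],
                  [0, 1, 0, 0, 1],
                  [1, 1, 0, 1, 0],
                  [0, 0, 0, 1, 1]])
    Y_noisy = add_symmetric_label_noise(Y, eta=0.2, rng=rng)
    # The Hamming loss between clean and noisy labels estimates eta.
    print(hamming_loss(Y, Y_noisy))
```

The paper's robustness claim, informally, is that minimizing a loss such as Hamming loss over data corrupted this way (with a noise rate below 1/2) recovers the same minimizer as training on clean labels, under the stated sufficient conditions.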