Masahiro Senda, David Ha, Hideyuki Watanabe, S. Katagiri, M. Ohsaki
DOI: 10.1145/3372806.3372817
International Conference on Signal Processing and Machine Learning, published 2019-11-27
Maximum Bayes Boundary-Ness Training For Pattern Classification
The ultimate goal of pattern classifier parameter training is to reach the optimal parameter status (value) that produces the Bayes error, or equivalently the corresponding Bayes boundary. To realize this goal without unrealistically long training repetitions or strict parametric assumptions, the Bayes Boundary-ness-based Selection (BBS) method was recently proposed, and its effectiveness was clearly demonstrated. However, the BBS method remains cumbersome because it consists of two stages: the first stage generates many candidate sets of trained parameters by carefully controlling the training hyperparameters so that the candidates include the optimal target parameter set; the second stage selects an optimal set from among those candidates. To remove this burden, we propose a new one-stage training method that directly optimizes a given classifier parameter set by maximizing its Bayes boundary-ness, i.e., by increasing its accuracy during Bayes error estimation. We experimentally evaluate the proposed method in terms of the accuracy of its Bayes error estimation on four synthetic and real-life datasets. Our experimental results clearly show that it overcomes the drawbacks of the preceding BBS method and directly produces optimal classifier parameter status without generating many candidate parameter sets.
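To make the target of the training concrete, here is an illustrative sketch (not the authors' algorithm) of what the Bayes error and Bayes boundary mean in the simplest setting: two one-dimensional Gaussian class-conditional densities with equal priors. The boundary is the point where the weighted densities cross, and the Bayes error is the integral of the smaller weighted density; the closed-form value for the equal-variance case is included as a check.

```python
# Illustrative only: toy Bayes error computation for two 1-D Gaussians.
# Not the paper's method -- just the quantity its training aims to attain.
import math

def gauss_pdf(x, mu, sigma):
    """Gaussian probability density N(mu, sigma^2) evaluated at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def bayes_error(mu1, mu2, sigma, prior1=0.5, lo=-10.0, hi=10.0, n=20000):
    """Midpoint-rule integral of min(prior1*p1, prior2*p2) over [lo, hi].

    The minimum of the two prior-weighted densities is exactly the
    pointwise probability of error of the Bayes-optimal decision rule.
    """
    prior2 = 1.0 - prior1
    dx = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * dx
        total += min(prior1 * gauss_pdf(x, mu1, sigma),
                     prior2 * gauss_pdf(x, mu2, sigma)) * dx
    return total

# Equal variances and priors: the Bayes boundary is the midpoint of the
# means, and the Bayes error has the closed form Phi(-|mu2-mu1|/(2*sigma)).
err = bayes_error(mu1=0.0, mu2=2.0, sigma=1.0)
closed_form = 0.5 * math.erfc((2.0 / (2 * 1.0)) / math.sqrt(2))  # Phi(-1)
print(round(err, 4), round(closed_form, 4))  # both ~0.1587
```

In this toy case the optimal boundary is known analytically; the paper's setting is the general one where neither the densities nor the boundary are known, and the classifier's parameters are pushed toward the (estimated) Bayes boundary directly.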