{"title":"决策树训练的优化混合不平衡数据采样","authors":"Weronika Węgier, Michał Koziarski, Michal Wozniak","doi":"10.1145/3583133.3590702","DOIUrl":null,"url":null,"abstract":"For many real-world decision-making tasks, a key feature is decision explainability. Hence, the so-called glass-box models offer full explainability and are still prevalent. An important area of application is the classification of imbalanced data. We require that the proposed classifiers not make errors on the minority class while minimizing errors on the majority class. This paper proposes a method for preprocessing imbalanced data by generating minority class objects. We use a multi-criteria optimization method (NSGA-II) to avoid optimizing a single aggregate criterion. The method returns a group of non-dominated solutions from which the end user can choose the best solution from his point of view. The automatic solution selection from a Pareto front is also proposed for comparison purposes. The proposed method returns good-quality classifiers, often surpassing the quality of baseline single-objective methods, and is additionally characterized by full interpretability.","PeriodicalId":422029,"journal":{"name":"Proceedings of the Companion Conference on Genetic and Evolutionary Computation","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Optimized hybrid imbalanced data sampling for decision tree training\",\"authors\":\"Weronika Węgier, Michał Koziarski, Michal Wozniak\",\"doi\":\"10.1145/3583133.3590702\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"For many real-world decision-making tasks, a key feature is decision explainability. Hence, the so-called glass-box models offer full explainability and are still prevalent. An important area of application is the classification of imbalanced data. We require that the proposed classifiers not make errors on the minority class while minimizing errors on the majority class. This paper proposes a method for preprocessing imbalanced data by generating minority class objects. We use a multi-criteria optimization method (NSGA-II) to avoid optimizing a single aggregate criterion. The method returns a group of non-dominated solutions from which the end user can choose the best solution from his point of view. The automatic solution selection from a Pareto front is also proposed for comparison purposes. 
The proposed method returns good-quality classifiers, often surpassing the quality of baseline single-objective methods, and is additionally characterized by full interpretability.\",\"PeriodicalId\":422029,\"journal\":{\"name\":\"Proceedings of the Companion Conference on Genetic and Evolutionary Computation\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-07-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the Companion Conference on Genetic and Evolutionary Computation\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3583133.3590702\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Companion Conference on Genetic and Evolutionary Computation","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3583133.3590702","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Optimized hybrid imbalanced data sampling for decision tree training
For many real-world decision-making tasks, a key requirement is decision explainability. Hence, so-called glass-box models, which offer full explainability, remain prevalent. An important area of application is the classification of imbalanced data, where we require that the classifier avoid errors on the minority class while minimizing errors on the majority class. This paper proposes a method for preprocessing imbalanced data by generating minority class objects. We use a multi-criteria optimization method (NSGA-II) to avoid optimizing a single aggregate criterion. The method returns a set of non-dominated solutions from which the end user can choose the one best suited to their needs; automatic selection of a solution from the Pareto front is also proposed for comparison purposes. The proposed method returns good-quality classifiers, often surpassing baseline single-objective methods, while remaining fully interpretable.
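As an illustration of the two-objective setup described in the abstract, the sketch below uses NSGA-II (via pymoo) to search over hybrid resampling configurations and scores each candidate by the minority- and majority-class error of a decision tree. This is a minimal sketch under stated assumptions, not the authors' implementation: the paper optimizes the generated minority objects themselves, whereas here the decision variables are simply a SMOTE oversampling ratio and a random undersampling ratio (from imbalanced-learn), and the synthetic dataset, validation split, and objective definitions are illustrative.

```python
# Hypothetical sketch (not the authors' exact encoding): two-objective NSGA-II
# search over SMOTE oversampling and random undersampling ratios, scoring each
# candidate by the minority- and majority-class error of a decision tree.
import numpy as np
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.core.problem import ElementwiseProblem
from pymoo.optimize import minimize
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Illustrative imbalanced dataset: class 0 is the majority, class 1 the minority.
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, stratify=y, random_state=0)


class ResamplingProblem(ElementwiseProblem):
    """Decision variables: [oversampling ratio, undersampling ratio]."""

    def __init__(self):
        super().__init__(n_var=2, n_obj=2, xl=[0.2, 0.2], xu=[1.0, 1.0])

    def _evaluate(self, x, out, *args, **kwargs):
        over, under = float(x[0]), float(x[1])
        try:
            # Oversample the minority class, then undersample the majority class.
            X_res, y_res = SMOTE(sampling_strategy=over,
                                 random_state=0).fit_resample(X_tr, y_tr)
            X_res, y_res = RandomUnderSampler(sampling_strategy=max(over, under),
                                              random_state=0).fit_resample(X_res, y_res)
        except ValueError:  # infeasible ratio combination
            out["F"] = [1.0, 1.0]
            return
        clf = DecisionTreeClassifier(random_state=0).fit(X_res, y_res)
        y_pred = clf.predict(X_val)
        err_min = 1.0 - (y_pred[y_val == 1] == 1).mean()  # minority-class error
        err_maj = 1.0 - (y_pred[y_val == 0] == 0).mean()  # majority-class error
        out["F"] = [err_min, err_maj]


res = minimize(ResamplingProblem(), NSGA2(pop_size=20), ("n_gen", 10),
               seed=1, verbose=False)
print(res.F)  # Pareto front: (minority error, majority error) pairs
```

The resulting `res.F` is the Pareto front of error trade-offs; a single configuration can then be picked either by the end user or by an automatic selection rule, mirroring the two usage modes described above.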