Gunjan Thakur, Bernie J. Daigle, Meng Qian, Kelsey R. Dean, Yuanyang Zhang, Ruoting Yang, Taek‐Kyun Kim, Xiaogang Wu, Meng Li, Inyoul Y. Lee, L. Petzold, Francis J. Doyle
{"title":"A Multimetric Evaluation of Stratified Random Sampling for Classification: A Case Study","authors":"Gunjan Thakur, Bernie J. Daigle, Meng Qian, Kelsey R. Dean, Yuanyang Zhang, Ruoting Yang, Taek‐Kyun Kim, Xiaogang Wu, Meng Li, Inyoul Y. Lee, L. Petzold, Francis J. Doyle","doi":"10.1109/LLS.2016.2615086","DOIUrl":null,"url":null,"abstract":"Accurate classification of biological phenotypes is an essential task for medical decision making. The selection of subjects for classifier training and validation sets is a crucial step within this task. To evaluate the impact of two approaches for subject selection—randomization and clinical balancing, we applied six classification algorithms to a highly replicated publicly available breast cancer data set. Using six performance metrics, we demonstrate that clinical balancing improves both training and validation performance for all methods on average. We also observed a smaller discrepancy between training and validation performance. Furthermore, a simple analytical argument is presented which suggests that we need only two metrics from the class of metrics based on the entries of the confusion matrix. 
In light of our results, we recommend: 1) clinical balancing of training and validation data to improve signal-to-noise ratio and 2) the use of multiple classification algorithms and evaluation metrics for a comprehensive evaluation of the decision making process.","PeriodicalId":87271,"journal":{"name":"IEEE life sciences letters","volume":"2 1","pages":"43-46"},"PeriodicalIF":0.0000,"publicationDate":"2016-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/LLS.2016.2615086","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE life sciences letters","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/LLS.2016.2615086","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4
Abstract
Accurate classification of biological phenotypes is an essential task for medical decision making. The selection of subjects for classifier training and validation sets is a crucial step within this task. To evaluate the impact of two approaches to subject selection, randomization and clinical balancing, we applied six classification algorithms to a highly replicated, publicly available breast cancer data set. Using six performance metrics, we demonstrate that clinical balancing improves both training and validation performance for all methods on average, and that it reduces the discrepancy between training and validation performance. Furthermore, we present a simple analytical argument suggesting that only two metrics are needed from the class of metrics based on the entries of the confusion matrix. In light of these results, we recommend: 1) clinical balancing of training and validation data to improve the signal-to-noise ratio and 2) the use of multiple classification algorithms and evaluation metrics for a comprehensive evaluation of the decision-making process.
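The contrast the abstract draws, purely random subject selection versus balancing the clinical composition of the training and validation sets, can be illustrated with a stratified split. The following is a minimal sketch, not the authors' implementation: the function name `stratified_split` and the toy relapse/no-relapse labels are illustrative assumptions, and label proportions stand in for the richer clinical covariates balanced in the paper.

```python
import random
from collections import defaultdict

def stratified_split(n_samples, labels, train_frac=0.7, seed=0):
    """Split sample indices into train/validation sets while preserving
    the per-label proportions (a simple stand-in for clinical balancing).
    A purely random split would shuffle all indices together instead."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for i, y in enumerate(labels):
        by_label[y].append(i)
    train, val = [], []
    for y, idx in by_label.items():
        rng.shuffle(idx)  # randomize within each clinical stratum
        k = int(round(train_frac * len(idx)))
        train.extend(idx[:k])
        val.extend(idx[k:])
    return sorted(train), sorted(val)

# Toy phenotype labels: 6 "relapse" and 4 "no-relapse" subjects.
labels = ["relapse"] * 6 + ["no-relapse"] * 4
train, val = stratified_split(10, labels, train_frac=0.5)
```

With `train_frac=0.5`, each half receives 3 relapse and 2 no-relapse subjects, so both sets reproduce the overall 6:4 class ratio; an unstratified random split can produce badly imbalanced halves by chance, which is the noise source the recommended balancing removes.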