Guilherme Camargo, R. S. Bressan, P. Bugatti, P. T. Saito
{"title":"Towards an Effective and Efficient Learning for Biomedical Data Classification","authors":"Guilherme Camargo, R. S. Bressan, P. Bugatti, P. T. Saito","doi":"10.1109/CBMS.2017.54","DOIUrl":null,"url":null,"abstract":"Nowadays a huge volume of biomedical data (images, genes, etc) are daily generated. The interpretation of such data involves a considerable expertise. The misinterpretation and/or misdetection of a suspicious clinical finding leads to increasing the negligence claims, and redundant procedures (e.g. biopsies). The analysis of biomedical data is a complex task which are performed by specialists on whose expertise degree affects the accuracy of their diagnosis. Besides, due to the huge volume of data, it is a tiresome process. To mitigate these intrinsic drawbacks Computeraided Diagnosis approaches have been proposed in the last decade, but applied without a deep analysis. It is also very common in the literature for the presentation of experimental results to rely solely on the mean of accuracy values. This procedure is not always reliable, especially for applications that require faster classifiers due to their learning-time constraints. Hence, in this paper we proposed an extensive analysis towards an effective and efficient learning for biomedical data classification. To do so, several public biomedical datasets were used against different supervised classifiers, taking into account accuracies and computational times obtained throughout the learning process.","PeriodicalId":141105,"journal":{"name":"2017 IEEE 30th International Symposium on Computer-Based Medical Systems (CBMS)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 IEEE 30th International Symposium on Computer-Based Medical Systems (CBMS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CBMS.2017.54","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Nowadays, a huge volume of biomedical data (images, genes, etc.) is generated daily. Interpreting such data requires considerable expertise: the misinterpretation or missed detection of a suspicious clinical finding leads to more negligence claims and to redundant procedures (e.g., biopsies). The analysis of biomedical data is a complex task performed by specialists, whose degree of expertise affects the accuracy of their diagnoses; given the sheer volume of data, it is also a tiresome process. To mitigate these intrinsic drawbacks, Computer-Aided Diagnosis approaches have been proposed over the last decade, but they are often applied without deep analysis. It is also very common in the literature to report experimental results solely as the mean of accuracy values. This practice is not always reliable, especially for applications that require faster classifiers due to their learning-time constraints. Hence, in this paper we propose an extensive analysis towards effective and efficient learning for biomedical data classification. To do so, several public biomedical datasets were evaluated with different supervised classifiers, taking into account the accuracies and computational times obtained throughout the learning process.
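The kind of evaluation the abstract describes can be sketched roughly as follows. This is a minimal illustration, not the authors' experimental setup: the scikit-learn library, the breast-cancer dataset, and the three classifiers chosen here are all assumptions made for the example. The point is to report the spread of accuracies and the learning time, rather than the mean accuracy alone.

```python
# Minimal sketch (not the paper's code): compare supervised classifiers on a
# public biomedical dataset by accuracy AND learning time, reporting the
# standard deviation of accuracy instead of only its mean.
import time

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Public biomedical dataset (UCI breast cancer) used here only as an example.
X, y = load_breast_cancer(return_X_y=True)
classifiers = {
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "SVM (RBF)": SVC(kernel="rbf", gamma="scale"),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
}

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for name, clf in classifiers.items():
    accs, fit_times = [], []
    for train_idx, test_idx in cv.split(X, y):
        start = time.perf_counter()
        clf.fit(X[train_idx], y[train_idx])       # learning (training) phase
        fit_times.append(time.perf_counter() - start)
        accs.append(clf.score(X[test_idx], y[test_idx]))
    # Accuracy mean AND standard deviation, plus mean learning time per fold.
    print(f"{name:14s} acc = {np.mean(accs):.3f} +/- {np.std(accs):.3f} | "
          f"mean fit time = {np.mean(fit_times) * 1e3:.1f} ms")
```

With such per-fold measurements, a classifier whose mean accuracy is marginally higher but whose variance or training time is much larger can be weighed against faster, more stable alternatives, which is the trade-off the paper sets out to analyze.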