{"title":"Comparison of Handcrafted Features and Deep Learning in Classification of Medical X-ray Images","authors":"M. Zare, D. O. Alebiosu, Sheng Long Lee","doi":"10.1109/INFRKM.2018.8464688","DOIUrl":null,"url":null,"abstract":"The rapid growth and spread of radiographic equipment in medical centres have resulted in a corresponding increase in the number of medical X-ray images produced. Therefore, more efficient and effective image classification techniques are required. Three different techniques for automatic classification of medical X-ray images were compared. A bag-of-visual-words model and a Convolutional Neural Network (CNN) were used to extract features from the images. The two groups of extracted feature vectors were each used to train a linear support vector machine classifier. Third, a fine-tuned CNN was used for end-to-end classification. A pre-trained CNN was used to overcome dataset limitations. The three techniques were evaluated on the ImageCLEF 2007 medical database. The database provides medical X-ray images in 116 categories. The experimental results showed that fine-tuned CNN outperforms the other two techniques by achieving per class classification accuracy above 80% in 60 classes compared to 24 and 26 classes for bag-of-visual-words and CNN extracted features respectively. However, certain classes remain difficult to classify accurately such as classes in the same sub-body region due to inter-class similarity.","PeriodicalId":196731,"journal":{"name":"2018 Fourth International Conference on Information Retrieval and Knowledge Management (CAMP)","volume":"311 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"12","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 Fourth International Conference on Information Retrieval and Knowledge Management (CAMP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/INFRKM.2018.8464688","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 12
Abstract
The rapid growth and spread of radiographic equipment in medical centres have resulted in a corresponding increase in the number of medical X-ray images produced. Therefore, more efficient and effective image classification techniques are required. Three techniques for automatic classification of medical X-ray images were compared. In the first two, a bag-of-visual-words model and a Convolutional Neural Network (CNN) were used to extract features from the images, and each group of extracted feature vectors was then used to train a linear support vector machine (SVM) classifier. In the third, a fine-tuned CNN was used for end-to-end classification, with a pre-trained network adopted to overcome the limited size of the dataset. The three techniques were evaluated on the ImageCLEF 2007 medical database, which provides medical X-ray images in 116 categories. The experimental results showed that the fine-tuned CNN outperformed the other two techniques, achieving per-class classification accuracy above 80% in 60 classes, compared with 24 classes for bag-of-visual-words features and 26 classes for CNN-extracted features. However, certain classes, such as those within the same sub-body region, remain difficult to classify accurately because of high inter-class similarity.
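To make the second pipeline concrete, the sketch below shows a pre-trained CNN used as a fixed feature extractor with a linear SVM trained on the resulting vectors. The library choices (PyTorch/torchvision and scikit-learn), the ResNet-18 backbone, and the "train/"/"test/" directory layout are illustrative assumptions; the paper does not specify which tools or network it used.

```python
# Hypothetical sketch of the "CNN features + linear SVM" pipeline.
# Backbone, libraries, and directory layout are assumptions, not the paper's setup.
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

# Pre-trained CNN used as a fixed feature extractor.
backbone = models.resnet18(weights="IMAGENET1K_V1")
backbone.fc = torch.nn.Identity()   # drop the ImageNet classification head
backbone.eval()

# X-ray images are grayscale; replicate to 3 channels for the ImageNet model.
preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def extract_features(image_dir):
    """Run every image in an ImageFolder-style directory through the CNN."""
    ds = datasets.ImageFolder(image_dir, transform=preprocess)
    loader = DataLoader(ds, batch_size=32, shuffle=False)
    feats, labels = [], []
    with torch.no_grad():
        for x, y in loader:
            feats.append(backbone(x))
            labels.append(y)
    return torch.cat(feats).numpy(), torch.cat(labels).numpy()

# "train/" and "test/" are placeholder paths, not the ImageCLEF 2007 layout.
X_train, y_train = extract_features("train/")
X_test, y_test = extract_features("test/")

clf = LinearSVC(C=1.0).fit(X_train, y_train)   # linear SVM on CNN features
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

For the end-to-end variant, the same pre-trained backbone would instead keep its convolutional layers, have its final layer replaced with a 116-way output matching the ImageCLEF 2007 categories, and be fine-tuned with a cross-entropy loss rather than feeding a separate SVM.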