Applying Transfer Learning Using DenseNet121 in Radiographic Image Classification
Nahla Saeed Saad Aldeen, Yosser Mohammad Marwan Atassi
{"title":"Applying Transfer Learning Using DenseNet121 in Radiographic Image Classification: تطبيق التعلم بالنقل باستخدام شبكة DenseNet121 في تصنيف الصور الشعاعية","authors":"Nahla Saeed Saad Aldeen, Yosser Mohammad Marwan Atassi Nahla Saeed Saad Aldeen, Yosser Mohammad Marwan At","doi":"10.26389/ajsrp.l060521","DOIUrl":null,"url":null,"abstract":"The study aims to apply one of the fully connected convolutional neural networks, DenseNet121 network, to a data sample that includes a large group of radiographs through transfer learning technology. Radiography technology is a very important technique in the medical community to detect diseases and abnormalities that may be present, but the interpretation of these images may take a long time and it is subject to error by radiologists who are exposed to external practical factors (such as fatigue resulting from working for long hours, or exhaustion, or thinking about other life matters). To assist radiologists, we have worked on developing a diagnostic model with the help of a deep learning technique to classify radiographic images into two classes: (Normal and Abnormal images), by transferring the selected deep convolutional neural network between a large group of available networks that we studied on the basis of the regions that possibly abnormalities provided by the radiologists for the study sample. We also studied the feasibility of using the well-known VGG16 model on the same data sample and its performance through transfer learning technology and compared its results with the results of the DenseNet121 network. At the end of the research, we obtained a set of good results, which achieved a high diagnostic accuracy of 87.5% in some studied cases, using the DenseNet121 network model, which is considered satisfactory results in the case studied compared to the performance of other models. As for the VGG16 model, it did not give any of the satisfactory results in this field, the accuracy of the classification did not exceed 55% in most cases, and in only two cases it reached about 60% and 62%. The model presented during the research - DenseNet121 model - can be used in the diagnostic process and help in obtaining accurate results in terms of diagnostic results. As for the VGG16 model, it does not give satisfactory results according to the results also obtained during the research, so it is excluded in this type of applications.","PeriodicalId":15747,"journal":{"name":"Journal of engineering sciences and information technology","volume":"61 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2021-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of engineering sciences and information technology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.26389/ajsrp.l060521","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
The study aims to apply the densely connected convolutional network DenseNet121 to a data sample comprising a large collection of radiographs through transfer learning. Radiography is an important tool in medicine for detecting diseases and abnormalities, but interpreting these images can be time-consuming and prone to error, since radiologists are subject to external factors such as fatigue from long working hours, exhaustion, or distraction by other life matters. To assist radiologists, we developed a diagnostic model based on deep learning that classifies radiographic images into two classes (Normal and Abnormal). The deep convolutional network was selected from a large group of available architectures that we evaluated, guided by the possible-abnormality regions provided by the radiologists for the study sample. We also studied the feasibility and performance of the well-known VGG16 model on the same data sample through transfer learning and compared its results with those of DenseNet121. The DenseNet121 model achieved a diagnostic accuracy of up to 87.5% in some of the studied cases, which is considered satisfactory compared with the performance of the other models. The VGG16 model did not give satisfactory results in this setting: its classification accuracy did not exceed 55% in most cases and reached only about 60% and 62% in two cases. The DenseNet121 model presented in this research can therefore be used in the diagnostic process and helps obtain accurate diagnostic results, whereas the VGG16 model, according to the results obtained in this research, does not give satisfactory results and is excluded from this type of application.
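The abstract does not include implementation details, so the following is only a minimal sketch of the kind of transfer-learning setup it describes: an ImageNet-pretrained DenseNet121 backbone with a new binary (Normal / Abnormal) classification head, written with TensorFlow/Keras. The directory layout ("radiographs/train", "radiographs/val"), image size, and all hyperparameters are illustrative assumptions, not values taken from the paper.

```python
# Minimal transfer-learning sketch (assumptions noted in comments), not the authors' exact pipeline.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet121

IMG_SIZE = (224, 224)   # assumed input resolution
BATCH_SIZE = 32         # assumed batch size

# Load radiographs from a hypothetical directory with "normal/" and "abnormal/" subfolders.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "radiographs/train", image_size=IMG_SIZE, batch_size=BATCH_SIZE, label_mode="binary")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "radiographs/val", image_size=IMG_SIZE, batch_size=BATCH_SIZE, label_mode="binary")

# DenseNet121 backbone pretrained on ImageNet; the convolutional weights are frozen
# so that only the new classification head is trained (the core idea of transfer learning).
base = DenseNet121(include_top=False, weights="imagenet",
                   input_shape=IMG_SIZE + (3,), pooling="avg")
base.trainable = False

inputs = layers.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.densenet.preprocess_input(inputs)
x = base(x, training=False)
x = layers.Dropout(0.3)(x)                          # assumed regularization
outputs = layers.Dense(1, activation="sigmoid")(x)  # Normal vs. Abnormal
model = models.Model(inputs, outputs)

model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```

The comparison with VGG16 described in the abstract could be reproduced under the same assumptions by swapping the backbone for tf.keras.applications.VGG16 (with tf.keras.applications.vgg16.preprocess_input) while keeping the rest of the pipeline unchanged.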