ResNet-50 vs VGG-19 vs training from scratch: A comparative analysis of the segmentation and classification of Pneumonia from chest X-ray images

A. Victor Ikechukwu, S. Murali, R. Deepu, R.C. Shivamurthy

Global Transitions Proceedings, Volume 2, Issue 2, pp. 375-381, November 2021. DOI: 10.1016/j.gltp.2021.08.027

Abstract
In medical imaging, segmentation plays a vital role in the interpretation of X-ray images, as it allows salient features to be extracted. Without resorting to surgery, clinicians employ various modalities, from X-rays and CT scans to ultrasonography and other imaging techniques, to visualise and examine internal human organs and structures. Training a deep convolutional neural network (CNN) from scratch to proper convergence is difficult, since it requires considerable computational time, a large amount of labelled training data, and a substantial degree of experience. Fine-tuning a CNN that has been pre-trained on, for instance, a large set of labelled medical data is a viable alternative. In this paper, a comparative study was carried out between pre-trained models such as VGG-19 and ResNet-50 and training from scratch. To reduce overfitting, data augmentation and dropout regularization were used. With a recall of 92.03%, our analysis showed that the pre-trained models with proper fine-tuning were comparable with Iyke-Net, a CNN trained from scratch.
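For readers wanting a concrete starting point, the minimal sketch below shows one common way to fine-tune an ImageNet-pretrained ResNet-50 for binary pneumonia classification, with the data augmentation and dropout regularization mentioned in the abstract. It is not the authors' implementation: the PyTorch/torchvision stack, the chest_xray/train directory layout, and all hyperparameters (dropout rate, learning rate, batch size) are illustrative assumptions.

```python
# Sketch: fine-tuning a pre-trained ResNet-50 for pneumonia classification.
# Assumptions (not from the paper): PyTorch/torchvision, a dataset folder
# "chest_xray/train" with NORMAL/PNEUMONIA subfolders, and the hyperparameters.

import torch
import torch.nn as nn
from torchvision import datasets, transforms
from torchvision.models import resnet50, ResNet50_Weights

# Data augmentation to reduce overfitting: random flips and small rotations,
# followed by the normalization expected by the ImageNet-pretrained backbone.
train_tfms = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def build_finetuned_resnet50(num_classes: int = 2) -> nn.Module:
    """Load an ImageNet-pretrained ResNet-50, freeze the backbone, and
    replace the classifier head with dropout plus a small linear layer."""
    model = resnet50(weights=ResNet50_Weights.IMAGENET1K_V1)
    for param in model.parameters():
        param.requires_grad = False           # freeze pre-trained features
    model.fc = nn.Sequential(                 # new trainable head
        nn.Dropout(p=0.5),                    # dropout regularization (assumed rate)
        nn.Linear(model.fc.in_features, num_classes),
    )
    return model

if __name__ == "__main__":
    model = build_finetuned_resnet50()
    # Hypothetical dataset path; substitute the actual chest X-ray folder.
    train_ds = datasets.ImageFolder("chest_xray/train", transform=train_tfms)
    loader = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)
    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()
    model.train()
    for images, labels in loader:             # one illustrative training pass
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Training from scratch, by contrast, would initialize all weights randomly and update every layer, which is why it typically demands far more labelled data and computation than the head-only fine-tuning sketched here.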