Classification of artefacts in endoscopic images using deep neural network
Muhammad Muzzammil Auzine, Preeti Bissoonauth-Daiboo, Maleika Heenaye-Mamode Khan, S. Baichoo, Xiaohong W. Gao, Nuzhah Gooda Sahib
2022 3rd International Conference on Next Generation Computing Applications (NextComp), published 2022-10-06
DOI: 10.1109/NextComp55567.2022.9932202
Citations: 1
Abstract
Early cancer diagnosis by endoscopy is a challenging and time-consuming process that requires endoscopists to first acquire substantial experience and good technique. In addition, artefacts such as saturation, bubbles, and blood that appear during the endoscopic procedure are often misinterpreted as lesions, leading to incorrect diagnosis and treatment. Recently, the application of convolutional neural networks (CNNs) to medical imaging has produced promising results. We therefore apply deep neural networks to detect and classify artefacts that interfere with the diagnosis of gastric cancer. Training CNN models from scratch requires a large labelled dataset, which is rarely available in the medical field. We thus performed data augmentation on the EAD 2019 and Kvasir-V2 datasets, producing a total of 9852 images across six artefact classes. We then applied transfer learning using three pretrained neural network architectures, namely InceptionV3, InceptionResNetV2, and VGG16, updating the model weights accordingly. The models were tuned using the Adam optimiser and by varying the learning rate. We achieved a testing accuracy of 68.15% on the original dataset with the InceptionResNetV2 model and 77.65% on the augmented dataset with the InceptionV3 model. Our experiments show the effectiveness of CNNs for detecting artefacts during endoscopic procedures.
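The abstract reports that data augmentation expanded the EAD 2019 and Kvasir-V2 datasets to 9852 images. The paper does not list the exact transforms used, so the following is only a minimal sketch of image-level augmentation with standard NumPy operations (flips and a 90-degree rotation); the function name `augment` and the choice of transforms are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def augment(image: np.ndarray) -> list[np.ndarray]:
    """Return simple augmented variants of an (H, W, C) image array.

    A sketch of dataset expansion: each source image yields four
    training samples (original, two flips, one rotation). The actual
    transforms used in the paper are not specified.
    """
    return [
        image,                 # original
        np.fliplr(image),      # horizontal flip
        np.flipud(image),      # vertical flip
        np.rot90(image),       # 90-degree counter-clockwise rotation
    ]

# Example: a single 64x64 RGB image expands to four samples.
img = np.zeros((64, 64, 3), dtype=np.uint8)
variants = augment(img)
print(len(variants))  # 4
```

With a scheme like this, a multiplier of roughly 4x per image is what lets a few thousand endoscopic frames grow toward the ~9852-image training set the authors describe.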