Investigation of ConViT on COVID-19 Lung Image Classification and the Effects of Image Resolution and Number of Attention Heads
P. L. Thon, J. Than, Norliza M. Noor, Jun Han, Patrick Then
International Journal of Integrated Engineering, published 2023-07-31. DOI: 10.30880/ijie.2023.15.03.005
Abstract
COVID-19 has been one of the popular foci in the research community since its first outbreak in China in 2019. Radiological patterns such as ground glass opacity (GGO) and consolidations are often found in CT scan images of moderate to severe COVID-19 patients. Therefore, a deep learning model can be trained to distinguish COVID-19 patients using their CT scan images. Convolutional Neural Networks (CNNs) have been a popular choice for this type of classification task. Another potential method is the use of a vision transformer combined with convolution, resulting in the Convolutional Vision Transformer (ConViT), which can potentially deliver on-par performance using fewer computational resources. In this study, ConViT is applied to diagnose COVID-19 cases from lung CT scan images. In particular, we investigated the relationship between the input image pixel resolution and the number of attention heads used in ConViT, and their effects on the model's performance. Specifically, we trained the model at 512x512, 224x224 and 128x128 pixel resolutions using 4 (tiny), 9 (small) and 16 (base) attention heads. An open-access dataset consisting of 2282 COVID-19 CT images and 9776 Normal CT images from Iran is used in this study. Using 128x128 pixel resolution and training with 16 attention heads, the ConViT model achieved an accuracy of 98.01%, sensitivity of 90.83%, specificity of 99.69%, positive predictive value (PPV) of 95.58%, negative predictive value (NPV) of 97.89% and F1-score of 94.55%. The model also achieved improved performance over other recent studies that used the same dataset. In conclusion, this study has shown that the ConViT model can play a meaningful role in complementing the RT-PCR test for COVID-19 close contacts and patients.
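For concreteness, the following is a minimal sketch of the experimental configurations described in the abstract, assuming the ConViT implementations available in the open-source timm library (convit_tiny, convit_small and convit_base, which use 4, 9 and 16 attention heads respectively). The model names, the img_size argument and the dummy input are illustrative assumptions about tooling, not the authors' actual code.

```python
# Sketch of the ConViT configurations compared in the study, assuming timm's
# ConViT variants accept an img_size override; not the authors' training code.
import timm
import torch

RESOLUTIONS = [512, 224, 128]      # input pixel resolutions compared in the study
VARIANTS = {                       # attention heads per ConViT variant
    "convit_tiny": 4,
    "convit_small": 9,
    "convit_base": 16,
}

def build_model(name: str, img_size: int) -> torch.nn.Module:
    # Binary classification head: COVID-19 vs. Normal CT slices.
    return timm.create_model(name, pretrained=False, num_classes=2, img_size=img_size)

if __name__ == "__main__":
    # Instantiate the best-performing configuration reported in the abstract:
    # 128x128 inputs with the 16-head (base) variant.
    model = build_model("convit_base", img_size=128)
    dummy = torch.randn(1, 3, 128, 128)   # one synthetic 3-channel CT slice
    logits = model(dummy)
    print(logits.shape)                    # expected: torch.Size([1, 2])
```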
Journal Introduction:
The International Journal of Integrated Engineering (IJIE) is a single-blind peer-reviewed journal that has been published three times a year since 2009. The journal is dedicated to issues in three fields: Civil and Environmental Engineering. Original contributions related to civil and environmental engineering practices are published under this category and form the nucleus of the journal's contents. The journal publishes a wide range of research and application papers that describe laboratory and numerical investigations or report on full-scale projects. Electrical and Electronic Engineering. The journal serves as an international medium for the publication of original papers concerned with electrical and electronic engineering. It aims to present to the international community important results of work in this field, whether in the form of research, development, application or design. Mechanical, Materials and Manufacturing Engineering. This is a platform for the publication and dissemination of original work that contributes to the understanding of the main disciplines underpinning mechanical, materials and manufacturing engineering. Original contributions giving insight into engineering practices related to mechanical, materials and manufacturing engineering form the core of the journal's contents.