Preliminary Study on Patch Sizes in Vision Transformers (ViT) for COVID-19 and Diseased Lungs Classification

J. Than, P. L. Thon, O. M. Rijal, R. M. Kassim, A. Yunus, N. Noor, P. Then

2021 IEEE National Biomedical Engineering Conference (NBEC), 9 November 2021. DOI: 10.1109/nbec53282.2021.9618751
COVID-19 and lung diseases have become a major focus of research due to the reach and impact of the pandemic. Deep Learning (DL) now plays a large role in fields ranging from disease classification to drug response identification. The conventional DL method for images is the Convolutional Neural Network (CNN). A potential replacement for CNNs is the Transformer, specifically the Vision Transformer (ViT). This study is a preliminary exploration of the performance of ViT on diseased lungs, COVID-19 infected lungs, and normal lungs. The study was performed on two datasets: the first is a publicly accessible Iranian dataset with a large cohort of patients, and the second is a Malaysian dataset. These images were used to verify the applicability and effectiveness of ViT. Images were divided into patches of several sizes (16x16, 32x32, 64x64, 128x128, and 256x256 pixels). To determine the performance of the ViT method, the metrics of accuracy, sensitivity, specificity, negative predictive value (NPV), positive predictive value (PPV), and F1-score were used. The results of this study show that ViT is a promising method, with a peak accuracy of 95.36%.
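The abstract does not include implementation details, so the following is a minimal sketch (not the authors' code) of how a ViT-style classifier with a configurable patch size could be set up in PyTorch to compare the patch sizes listed above. The image size, channel count, embedding dimension, depth, and class ordering are assumptions made for illustration only.

```python
# Minimal ViT-style classifier sketch: patch size is a constructor argument,
# so the 16x16 ... 256x256 patch sizes from the study can be compared.
# Assumes square images whose side length divides evenly by the patch size.
import torch
import torch.nn as nn

class SimpleViT(nn.Module):
    def __init__(self, image_size=256, patch_size=16, in_chans=1,
                 num_classes=3, dim=256, depth=6, heads=8):
        super().__init__()
        assert image_size % patch_size == 0, "image must divide evenly into patches"
        num_patches = (image_size // patch_size) ** 2

        # Patch embedding: a strided convolution cuts the image into
        # non-overlapping patch_size x patch_size patches, each projected to `dim`.
        self.patch_embed = nn.Conv2d(in_chans, dim,
                                     kernel_size=patch_size, stride=patch_size)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim))

        encoder_layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=depth)
        self.head = nn.Linear(dim, num_classes)  # e.g. normal / diseased / COVID-19

    def forward(self, x):
        b = x.shape[0]
        x = self.patch_embed(x)                   # (b, dim, H/p, W/p)
        x = x.flatten(2).transpose(1, 2)          # (b, num_patches, dim)
        cls = self.cls_token.expand(b, -1, -1)
        x = torch.cat([cls, x], dim=1) + self.pos_embed
        x = self.encoder(x)
        return self.head(x[:, 0])                 # classify from the [CLS] token

# Example: instantiate one model per patch size explored in the study.
for p in (16, 32, 64, 128, 256):
    model = SimpleViT(image_size=256, patch_size=p)
    logits = model(torch.randn(2, 1, 256, 256))
    print(p, logits.shape)  # torch.Size([2, 3]) for every patch size
```

Note that smaller patches give longer token sequences (more patches per image), which is the main trade-off the study varies.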
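The evaluation metrics named in the abstract can be computed one-vs-rest from a confusion matrix. The sketch below shows one standard way to do this; the three-class label encoding (0 = normal, 1 = diseased, 2 = COVID-19) and the dummy predictions are assumptions for illustration, not values from the paper.

```python
# Per-class accuracy, sensitivity, specificity, PPV, NPV and F1-score
# computed one-vs-rest from a confusion matrix.
import numpy as np

def per_class_metrics(y_true, y_pred, num_classes=3):
    cm = np.zeros((num_classes, num_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1

    results = {}
    for c in range(num_classes):
        tp = cm[c, c]
        fn = cm[c, :].sum() - tp
        fp = cm[:, c].sum() - tp
        tn = cm.sum() - tp - fn - fp

        sensitivity = tp / (tp + fn) if tp + fn else 0.0   # recall / TPR
        specificity = tn / (tn + fp) if tn + fp else 0.0
        ppv = tp / (tp + fp) if tp + fp else 0.0           # precision
        npv = tn / (tn + fn) if tn + fn else 0.0
        f1 = (2 * ppv * sensitivity / (ppv + sensitivity)
              if ppv + sensitivity else 0.0)
        results[c] = dict(sensitivity=sensitivity, specificity=specificity,
                          ppv=ppv, npv=npv, f1=f1)

    accuracy = np.trace(cm) / cm.sum()
    return accuracy, results

# Example usage with dummy labels and predictions.
acc, per_class = per_class_metrics([0, 1, 2, 2, 1, 0], [0, 1, 2, 1, 1, 0])
print(f"accuracy = {acc:.4f}")
print(per_class)
```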