P. L. Thon, J. Than, R. M. Kassim, A. Yunus, N. Noor, P. Then
2022 IEEE-EMBS Conference on Biomedical Engineering and Sciences (IECBES), 7 December 2022. DOI: 10.1109/IECBES54088.2022.10079667
Explainable COVID-19 Three Classes Severity Classification Using Chest X-Ray Images
COVID-19 has been raging for almost three years since its first outbreak. Ending the pandemic and restoring life to how it was before remains a common human goal, and many efforts have been made toward it. In computer vision, work has been done to help medical professionals handle the disease faster and more effectively, for example through disease diagnosis and severity prediction from chest imaging. Meanwhile, the Vision Transformer (ViT) was introduced and quickly became one of the most successful deep learning models developed to date, owing to its ability to achieve good performance while remaining resource-friendly. In this study, we investigated the performance of ViT on COVID-19 severity classification using an open-source chest X-ray (CXR) image dataset. We applied different augmentation and transformation techniques to the dataset to assess ViT's ability to learn the features of the different severity levels of the disease. We conclude that training ViT on the original dataset augmented with horizontally flipped images gives the best overall accuracy of 0.862. For explainability, we also applied Grad-CAM to the best-performing model to verify that it attends to relevant regions of the CXR image when predicting the class label.
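The best-performing augmentation described in the abstract, appending horizontally flipped copies to the original dataset, can be sketched as follows. This is a minimal NumPy illustration, not the authors' actual pipeline; the array layout and function name are assumptions.

```python
import numpy as np

def augment_with_hflip(images, labels):
    """Double a dataset by appending horizontally flipped copies.

    images: array of shape (N, H, W) -- grayscale CXR images (assumed layout)
    labels: array of shape (N,)      -- severity class per image
    The flip mirrors each image along the width axis; severity labels
    are unchanged, since left-right mirroring preserves the class.
    """
    flipped = images[:, :, ::-1]                      # mirror left-right
    aug_images = np.concatenate([images, flipped], axis=0)
    aug_labels = np.concatenate([labels, labels], axis=0)
    return aug_images, aug_labels

# Tiny example with two 2x3 "images"
imgs = np.arange(12).reshape(2, 2, 3)
lbls = np.array([0, 2])
aug_imgs, aug_lbls = augment_with_hflip(imgs, lbls)
print(aug_imgs.shape)      # (4, 2, 3)
print(aug_imgs[2][0])      # [2 1 0] -- first row of the first flipped image
```

Note that the flipped copies are concatenated rather than substituted, matching the abstract's description of flipped images being "added to the original dataset."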
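The Grad-CAM step used for explainability boils down to weighting a convolutional layer's activation maps by their globally averaged gradients and applying a ReLU. The sketch below illustrates that standard Grad-CAM computation in NumPy; it is not the paper's implementation, and the tensor shapes are assumptions.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Compute a Grad-CAM heatmap from one conv layer's outputs.

    activations: (K, H, W) feature maps from the forward pass
    gradients:   (K, H, W) gradient of the target class score
                 with respect to each feature map
    Returns an (H, W) heatmap rescaled to [0, 1].
    """
    # alpha_k: global-average-pool the gradients over spatial dims
    weights = gradients.mean(axis=(1, 2))             # shape (K,)
    # Weighted sum of feature maps, then ReLU to keep only
    # positive influence on the predicted class.
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0)
    if cam.max() > 0:
        cam = cam / cam.max()                         # normalize to [0, 1]
    return cam
```

In practice the heatmap is upsampled to the input resolution and overlaid on the CXR image, which is how one checks that the model is attending to clinically relevant regions.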