GAN-based Spatial Transformation Adversarial Method for Disease Classification on CXR Photographs by Smartphones

Chak Fong Chong, Xu Yang, Wei Ke, Yapeng Wang
{"title":"GAN-based Spatial Transformation Adversarial Method for Disease Classification on CXR Photographs by Smartphones","authors":"Chak Fong Chong, Xu Yang, Wei Ke, Yapeng Wang","doi":"10.1109/DICTA52665.2021.9647192","DOIUrl":null,"url":null,"abstract":"Deep learning has been successfully applied on Chest X-ray (CXR) images for disease classification. To support remote medical services (e.g., online diagnosis services), such systems can be deployed on smartphones by patients or doctors to take CXR photographs using the cameras on smartphones. However, photograph introduces visual artifacts such as blur, noises, light reflection, perspective transformation, moiré pattern, etc. plus unwanted background. Therefore, the classification accuracy of well-trained CNN models performed on the CXR photographs experiences drop significantly. Such challenge has not been solved properly in the literature. In this paper, we have compared various traditional image preprocessing methods on CXR photographs, including spatial transformation, background hiding, and various filtering methods. The combination of these methods can almost eliminate the negative impact of visual artifacts on the evaluation of 3 different single CNN models (Xception, DenseNet-121, Inception-v3), only 0.0018 AUC drop observed. However, such methods need user manually process the CXR photographs, which is inconvenient. Therefore, we have proposed a novel Generative Adversarial Network-based spatial transformation adversarial method (GAN-STAM) which can automatically transform the CXR region to the center and enlarge the CXR region in each CXR photograph, the classification accuracy has been significantly improved on CXR photographs from 0.8009 to 0.8653.","PeriodicalId":424950,"journal":{"name":"2021 Digital Image Computing: Techniques and Applications (DICTA)","volume":"117 2","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 Digital Image Computing: Techniques and Applications (DICTA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DICTA52665.2021.9647192","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

Deep learning has been successfully applied to chest X-ray (CXR) images for disease classification. To support remote medical services (e.g., online diagnosis), such systems can be deployed on smartphones, letting patients or doctors photograph CXR films with the phone camera. However, photographing introduces visual artifacts such as blur, noise, light reflection, perspective distortion, and moiré patterns, as well as unwanted background. As a result, the classification accuracy of well-trained CNN models drops significantly on CXR photographs. This challenge has not been properly addressed in the literature. In this paper, we compare various traditional image preprocessing methods on CXR photographs, including spatial transformation, background hiding, and several filtering methods. The combination of these methods can almost eliminate the negative impact of the visual artifacts on three different single CNN models (Xception, DenseNet-121, Inception-v3), with only a 0.0018 AUC drop observed. However, these methods require the user to manually process each CXR photograph, which is inconvenient. We therefore propose a novel Generative Adversarial Network-based spatial transformation adversarial method (GAN-STAM) that automatically shifts the CXR region to the center of each photograph and enlarges it, improving classification accuracy on CXR photographs from 0.8009 to 0.8653.
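The abstract does not spell out the manual preprocessing pipeline it compares. The following is a minimal Python/OpenCV sketch of what such a pipeline could look like, assuming the user supplies the four corners of the CXR film in the photograph; the function name, parameter values, and choice of bilateral filtering are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the manual preprocessing the paper compares:
# perspective correction from four user-picked corners (spatial
# transformation), background hiding, and denoising. Names and
# parameters are illustrative, not taken from the paper.
import cv2
import numpy as np

def preprocess_cxr_photo(photo, corners, out_size=(1024, 1024)):
    """corners: 4x2 array of the CXR film corners picked by the user,
    ordered (top-left, top-right, bottom-right, bottom-left)."""
    w, h = out_size
    dst = np.float32([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]])
    # Spatial transformation: warp the photographed film to a frontal view.
    M = cv2.getPerspectiveTransform(np.float32(corners), dst)
    rectified = cv2.warpPerspective(photo, M, (w, h))
    # Background hiding falls out of the warp: everything outside the
    # selected quadrilateral is cropped away.
    # Filtering: edge-preserving smoothing to suppress moiré and noise.
    return cv2.bilateralFilter(rectified, d=9, sigmaColor=75, sigmaSpace=75)
```

GAN-STAM itself is described only at the level of "automatically transform the CXR region to the center and enlarge it." A natural reading is a spatial-transformer-style generator trained adversarially against a discriminator that judges whether an image looks like a clean, well-framed CXR. The PyTorch sketch below illustrates that idea under that assumption; the architecture, grayscale-input choice, and initialization are all hypothetical, not the authors' implementation.

```python
# Hypothetical PyTorch sketch of the GAN-STAM idea as the abstract
# describes it: a generator predicts an affine transform that centers
# and enlarges the CXR region; a discriminator pushes the warped
# photographs toward the distribution of clean CXRs. All details here
# are assumptions, not the paper's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TransformGenerator(nn.Module):
    """Predicts 6 affine parameters, spatial-transformer style,
    and applies them to the input photograph (assumed grayscale)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8), nn.Flatten(),
            nn.Linear(32 * 8 * 8, 6),
        )
        # Initialize the head to the identity transform so that
        # adversarial training starts from the unwarped photograph.
        self.features[-1].weight.data.zero_()
        self.features[-1].bias.data.copy_(
            torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))

    def forward(self, x):
        theta = self.features(x).view(-1, 2, 3)
        # Scale entries of theta below 1 zoom in, i.e. enlarge the
        # CXR region; translation entries re-center it.
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)

class Discriminator(nn.Module):
    """Scores whether an image looks like a clean, well-framed CXR."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1),
        )

    def forward(self, x):
        return self.net(x)
```

In such a setup the generator would be updated to fool the discriminator (e.g., with a standard non-saturating GAN loss), so its warped outputs match the framing of clean CXR images; at inference only the generator runs, which is what makes the correction fully automatic, in contrast to the manual pipeline above.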