International Journal of Imaging Systems and Technology: Latest Articles

SDR2Tr-GAN: A Novel Medical Image Fusion Pipeline Based on GAN With SDR2 Module and Transformer Optimization Strategy
IF 3.0 · CAS Quartile 4, Computer Science
International Journal of Imaging Systems and Technology · Pub Date: 2024-11-08 · DOI: 10.1002/ima.23208
Ying Cheng, Xianjin Fang, Zhiri Tang, Zekuan Yu, Linlin Sun, Li Zhu
{"title":"SDR2Tr-GAN: A Novel Medical Image Fusion Pipeline Based on GAN With SDR2 Module and Transformer Optimization Strategy","authors":"Ying Cheng,&nbsp;Xianjin Fang,&nbsp;Zhiri Tang,&nbsp;Zekuan Yu,&nbsp;Linlin Sun,&nbsp;Li Zhu","doi":"10.1002/ima.23208","DOIUrl":"https://doi.org/10.1002/ima.23208","url":null,"abstract":"<div>\u0000 \u0000 <p>In clinical practice, radiologists diagnose brain tumors with the help of different magnetic resonance imaging (MRI) sequences and judge the type and grade of brain tumors. It is hard to realize the brain tumor computer-aided diagnosis system only with a single MRI sequence. However, the existing multiple MRI sequence fusion methods have limitations in the enhancement of tumor details. To improve fusion details of multi-modality MRI images, a novel conditional generative adversarial fusion network based on three discriminators and a Staggered Dense Residual2 (SDR2) module, named SDR2Tr-GAN, was proposed in this paper. In the SDR2Tr-GAN network pipeline, the generator consists of an encoder, decoder, and fusion strategy that can enhance the feature representation. SDR2 module is developed with Res2Net into the encoder to extract multi-scale features. In addition, a Multi-Head Spatial/Channel Attention Transformer, as a fusion strategy to strengthen the long-range dependencies of global context information, is integrated into our pipeline. A Mask-based constraint as a novel fusion optimization mechanism was designed, focusing on enhancing salient feature details. The Mask-based constraint utilizes the segmentation mask obtained by the pre-trained Unet and Ground Truth to optimize the training process. Meanwhile, MI and SSIM loss jointly improve the visual perception of images. Extensive experiments were conducted on the public BraTS2021 dataset. The visual and quantitative results demonstrate that the proposed method can simultaneously enhance both global image quality and local texture details in multi-modality MRI images. Besides, our SDR2Tr-GAN outperforms the other state-of-the-art fusion methods regarding subjective and objective evaluation.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"34 6","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142641634","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
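To make the fusion strategy concrete, the following is a minimal PyTorch sketch of attention-based fusion of two modality feature maps, loosely in the spirit of the Multi-Head Spatial/Channel Attention Transformer the abstract describes. The module name, layer sizes, and the single-attention simplification are assumptions for illustration, not the authors' implementation.

    import torch
    import torch.nn as nn

    # Illustrative stand-in for the paper's transformer fusion strategy:
    # fuse two modality feature maps, then apply multi-head self-attention
    # over spatial positions to capture long-range dependencies.
    class AttentionFusion(nn.Module):
        def __init__(self, channels: int, num_heads: int = 4):
            super().__init__()
            self.proj = nn.Conv2d(2 * channels, channels, kernel_size=1)
            self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

        def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
            b, c, h, w = feat_a.shape
            fused = self.proj(torch.cat([feat_a, feat_b], dim=1))  # (B, C, H, W)
            tokens = fused.flatten(2).transpose(1, 2)              # (B, H*W, C)
            attended, _ = self.attn(tokens, tokens, tokens)
            out = attended.transpose(1, 2).reshape(b, c, h, w)
            return out + fused                                     # residual connection

    if __name__ == "__main__":
        t1c, flair = torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32)
        print(AttentionFusion(64)(t1c, flair).shape)  # torch.Size([1, 64, 32, 32])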
Hybrid Wavelet-Deep Learning Framework for Fluorescence Microscopy Images Enhancement
IF 3.0 · CAS Quartile 4, Computer Science
International Journal of Imaging Systems and Technology · Pub Date: 2024-11-07 · DOI: 10.1002/ima.23212
Francesco Branciforti, Maura Maggiore, Kristen M. Meiburger, Tania Pannellini, Massimo Salvi
{"title":"Hybrid Wavelet-Deep Learning Framework for Fluorescence Microscopy Images Enhancement","authors":"Francesco Branciforti,&nbsp;Maura Maggiore,&nbsp;Kristen M. Meiburger,&nbsp;Tania Pannellini,&nbsp;Massimo Salvi","doi":"10.1002/ima.23212","DOIUrl":"https://doi.org/10.1002/ima.23212","url":null,"abstract":"<p>Fluorescence microscopy is a powerful tool for visualizing cellular structures, but it faces challenges such as noise, low contrast, and autofluorescence that can hinder accurate image analysis. To address these limitations, we propose a novel hybrid image enhancement method that combines wavelet-based denoising, linear contrast enhancement, and convolutional neural network-based autofluorescence correction. Our automated method employs Haar wavelet transform for noise reduction and a series of adaptive linear transformations for pixel value adjustment, effectively enhancing image quality while preserving crucial details. Furthermore, we introduce a semantic segmentation approach using CNNs to identify and correct autofluorescence in cellular aggregates, enabling targeted mitigation of unwanted background signals. We validate our method using quantitative metrics, such as signal-to-noise ratio (SNR) and peak signal-to-noise ratio (PSNR), demonstrating superior performance compared to both mathematical and deep learning-based techniques. Our method achieves an average SNR improvement of 8.5 dB and a PSNR increase of 4.2 dB compared with the original images, outperforming state-of-the-art methods such as BM3D and CLAHE. Extensive testing on diverse datasets, including publicly available human-derived cardiosphere and fluorescence microscopy images of bovine endothelial cells stained for mitochondria and actin filaments, showcases the flexibility and robustness of our approach across various acquisition conditions and artifacts. The proposed method significantly improves fluorescence microscopy image quality, facilitating more accurate and reliable analysis of cellular structures and processes, with potential applications in biomedical research and clinical diagnostics.</p>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"34 6","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ima.23212","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142641170","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
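As a rough illustration of the first two stages of this pipeline, here is a hypothetical sketch using PyWavelets: Haar-wavelet soft-threshold denoising followed by an adaptive linear contrast stretch. The universal-threshold rule and the percentile limits are assumptions, and the paper's CNN-based autofluorescence correction stage is omitted.

    import numpy as np
    import pywt

    # Stage 1: Haar-wavelet soft-threshold denoising (universal threshold
    # estimated from the finest-scale diagonal detail coefficients).
    # Stage 2: adaptive linear contrast stretch to [0, 1].
    def denoise_and_stretch(img: np.ndarray, wavelet: str = "haar", level: int = 2) -> np.ndarray:
        coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745   # noise estimate
        thresh = sigma * np.sqrt(2.0 * np.log(img.size))     # universal threshold
        coeffs = [coeffs[0]] + [
            tuple(pywt.threshold(d, thresh, mode="soft") for d in detail)
            for detail in coeffs[1:]
        ]
        den = pywt.waverec2(coeffs, wavelet)[: img.shape[0], : img.shape[1]]
        lo, hi = np.percentile(den, (1, 99))                 # robust intensity limits
        return np.clip((den - lo) / (hi - lo + 1e-8), 0.0, 1.0)

    if __name__ == "__main__":
        noisy = np.random.rand(256, 256)
        print(denoise_and_stretch(noisy).shape)              # (256, 256)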
COVID-19 lung infection segmentation from chest CT images based on CAPA-ResUNet
IF 3.3 · CAS Quartile 4, Computer Science
International Journal of Imaging Systems and Technology · Pub Date: 2022-10-12 · DOI: 10.1002/ima.22819
Lu Ma, Shuni Song, Liting Guo, Wenjun Tan, Lisheng Xu
{"title":"COVID-19 lung infection segmentation from chest CT images based on CAPA-ResUNet","authors":"Lu Ma,&nbsp;Shuni Song,&nbsp;Liting Guo,&nbsp;Wenjun Tan,&nbsp;Lisheng Xu","doi":"10.1002/ima.22819","DOIUrl":"10.1002/ima.22819","url":null,"abstract":"<p>Coronavirus disease 2019 (COVID-19) epidemic has devastating effects on personal health around the world. It is significant to achieve accurate segmentation of pulmonary infection regions, which is an early indicator of disease. To solve this problem, a deep learning model, namely, the content-aware pre-activated residual UNet (CAPA-ResUNet), was proposed for segmenting COVID-19 lesions from CT slices. In this network, the pre-activated residual block was used for down-sampling to solve the problems of complex foreground and large fluctuations of distribution in datasets during training and to avoid gradient disappearance. The area loss function based on the false segmentation regions was proposed to solve the problem of fuzzy boundary of the lesion area. This model was evaluated by the public dataset (COVID-19 Lung CT Lesion Segmentation Challenge—2020) and compared its performance with those of classical models. Our method gains an advantage over other models in multiple metrics. Such as the Dice coefficient, specificity (Spe), and intersection over union (IoU), our CAPA-ResUNet obtained 0.775 points, 0.972 points, and 0.646 points, respectively. The Dice coefficient of our model was 2.51% higher than Content-aware residual UNet (CARes-UNet). The code is available at https://github.com/malu108/LungInfectionSeg.</p>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"33 1","pages":"6-17"},"PeriodicalIF":3.3,"publicationDate":"2022-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9874448/pdf/IMA-33-.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10583245","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
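For readers unfamiliar with pre-activation, the sketch below shows a generic pre-activated residual down-sampling block in PyTorch (batch norm and ReLU applied before each convolution, which helps gradients flow through deep encoders). It is a textbook-style block with assumed layer sizes, not the CAPA-ResUNet code; the authors' implementation is linked above.

    import torch
    import torch.nn as nn

    # Pre-activation residual block: BN -> ReLU -> Conv ordering, with a 1x1
    # projection on the skip path when shape changes (e.g., strided down-sampling).
    class PreActResidualBlock(nn.Module):
        def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
            super().__init__()
            self.bn1 = nn.BatchNorm2d(in_ch)
            self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False)
            self.bn2 = nn.BatchNorm2d(out_ch)
            self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False)
            self.relu = nn.ReLU(inplace=True)
            self.skip = (
                nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False)
                if (stride != 1 or in_ch != out_ch) else nn.Identity()
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            out = self.conv1(self.relu(self.bn1(x)))   # activation *before* conv
            out = self.conv2(self.relu(self.bn2(out)))
            return out + self.skip(x)

    if __name__ == "__main__":
        x = torch.randn(2, 32, 64, 64)
        print(PreActResidualBlock(32, 64, stride=2)(x).shape)  # (2, 64, 32, 32)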
A deep learning approach for classification of COVID and pneumonia using DenseNet-201
IF 3.3 · CAS Quartile 4, Computer Science
International Journal of Imaging Systems and Technology · Pub Date: 2022-09-29 · DOI: 10.1002/ima.22812
Harshal A. Sanghvi, Riki H. Patel, Ankur Agarwal, Shailesh Gupta, Vivek Sawhney, Abhijit S. Pandya
{"title":"A deep learning approach for classification of COVID and pneumonia using DenseNet-201","authors":"Harshal A. Sanghvi,&nbsp;Riki H. Patel,&nbsp;Ankur Agarwal,&nbsp;Shailesh Gupta,&nbsp;Vivek Sawhney,&nbsp;Abhijit S. Pandya","doi":"10.1002/ima.22812","DOIUrl":"10.1002/ima.22812","url":null,"abstract":"<p>In the present paper, our model consists of deep learning approach: DenseNet201 for detection of COVID and Pneumonia using the Chest X-ray Images. The model is a framework consisting of the modeling software which assists in Health Insurance Portability and Accountability Act Compliance which protects and secures the Protected Health Information . The need of the proposed framework in medical facilities shall give the feedback to the radiologist for detecting COVID and pneumonia though the transfer learning methods. A Graphical User Interface tool allows the technician to upload the chest X-ray Image. The software then uploads chest X-ray radiograph (CXR) to the developed detection model for the detection. Once the radiographs are processed, the radiologist shall receive the Classification of the disease which further aids them to verify the similar CXR Images and draw the conclusion. Our model consists of the dataset from Kaggle and if we observe the results, we get an accuracy of 99.1%, sensitivity of 98.5%, and specificity of 98.95%. The proposed Bio-Medical Innovation is a user-ready framework which assists the medical providers in providing the patients with the best-suited medication regimen by looking into the previous CXR Images and confirming the results. There is a motivation to design more such applications for Medical Image Analysis in the future to serve the community and improve the patient care.</p>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"33 1","pages":"18-38"},"PeriodicalIF":3.3,"publicationDate":"2022-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9537800/pdf/IMA-9999-0.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"33518022","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
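A minimal sketch of the kind of transfer learning the abstract describes: load an ImageNet-pretrained DenseNet-201 from torchvision, freeze the backbone, and replace the classifier head. The three-class layout and the freezing policy are assumptions, and torchvision 0.13 or later is assumed for the weights enum; this is not the authors' training code.

    import torch
    import torch.nn as nn
    from torchvision import models

    # Transfer-learning sketch: ImageNet-pretrained DenseNet-201 with a new
    # 3-class head (normal / pneumonia / COVID-19 is an assumed class layout).
    def build_densenet201(num_classes: int = 3) -> nn.Module:
        model = models.densenet201(weights=models.DenseNet201_Weights.IMAGENET1K_V1)
        for p in model.features.parameters():
            p.requires_grad = False                  # freeze pretrained backbone
        model.classifier = nn.Linear(model.classifier.in_features, num_classes)
        return model

    if __name__ == "__main__":
        net = build_densenet201()
        x = torch.randn(1, 3, 224, 224)              # one RGB-converted CXR
        print(net(x).shape)                          # torch.Size([1, 3])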
Application of a novel T1 retrospective quantification using internal references (T1-REQUIRE) algorithm to derive quantitative T1 relaxation maps of the brain
IF 3.3 · CAS Quartile 4, Computer Science
International Journal of Imaging Systems and Technology · Pub Date: 2022-06-13 · DOI: 10.1002/ima.22768
Adam Hasse, Julian Bertini, Sean Foxley, Yong Jeong, Adil Javed, Timothy J. Carroll
{"title":"Application of a novel T1 retrospective quantification using internal references (T1-REQUIRE) algorithm to derive quantitative T1 relaxation maps of the brain","authors":"Adam Hasse,&nbsp;Julian Bertini,&nbsp;Sean Foxley,&nbsp;Yong Jeong,&nbsp;Adil Javed,&nbsp;Timothy J. Carroll","doi":"10.1002/ima.22768","DOIUrl":"10.1002/ima.22768","url":null,"abstract":"<p>Most MRI sequences used clinically are qualitative or weighted. While such images provide useful information for clinicians to diagnose and monitor disease progression, they lack the ability to quantify tissue damage for more objective assessment. In this study, an algorithm referred to as the T1-REQUIRE is presented as a proof-of-concept which uses nonlinear transformations to retrospectively estimate T1 relaxation times in the brain using T1-weighted MRIs, the appropriate signal equation, and internal, healthy tissues as references. T1-REQUIRE was applied to two T1-weighted MR sequences, a spin-echo and a MPRAGE, and validated with a reference standard T1 mapping algorithm in vivo. In addition, a multiscanner study was run using MPRAGE images to determine the effectiveness of T1-REQUIRE in conforming the data from different scanners into a more uniform way of analyzing T1-relaxation maps. The T1-REQUIRE algorithm shows good agreement with the reference standard (Lin's concordance correlation coefficients of 0.884 for the spin-echo and 0.838 for the MPRAGE) and with each other (Lin's concordance correlation coefficient of 0.887). The interscanner studies showed improved alignment of cumulative distribution functions after T1-REQUIRE was performed. T1-REQUIRE was validated with a reference standard and shown to be an effective estimate of T1 over a clinically relevant range of T1 values. In addition, T1-REQUIRE showed excellent data conformity across different scanners, providing evidence that T1-REQUIRE could be a useful addition to big data pipelines.</p>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"32 6","pages":"1903-1915"},"PeriodicalIF":3.3,"publicationDate":"2022-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9796586/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10468644","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
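The core idea, retrospectively inverting a signal equation against an internal reference tissue, can be sketched for the simple TE much shorter than T2 spin-echo case, where S = M0 * (1 - exp(-TR/T1)). The sketch below is only a toy version of T1-REQUIRE: the reference tissue choice, its assumed literature T1 value, and the single-reference scaling are illustrative assumptions, and the published algorithm's nonlinear transformations are more involved.

    import numpy as np

    # Toy T1 estimation: invert S = M0 * (1 - exp(-TR / T1)) for T1, with M0
    # fixed so that an internal reference tissue lands on its literature T1.
    def t1_from_spin_echo(signal: np.ndarray, tr_ms: float,
                          ref_signal: float, ref_t1_ms: float) -> np.ndarray:
        m0 = ref_signal / (1.0 - np.exp(-tr_ms / ref_t1_ms))
        ratio = np.clip(signal / m0, 1e-6, 1.0 - 1e-6)   # keep log() well-defined
        return -tr_ms / np.log(1.0 - ratio)

    if __name__ == "__main__":
        tr, true_t1 = 500.0, 900.0                       # ms
        s = 1000.0 * (1.0 - np.exp(-tr / true_t1))       # forward-simulated voxel
        ref_s = 1000.0 * (1.0 - np.exp(-tr / 4300.0))    # CSF-like reference (assumed T1)
        print(t1_from_spin_echo(np.array([s]), tr, ref_s, 4300.0))  # ~[900.]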
COVSeg-NET: A deep convolution neural network for COVID-19 lung CT image segmentation
IF 3.3 · CAS Quartile 4, Computer Science
International Journal of Imaging Systems and Technology · Pub Date: 2021-06-04 · DOI: 10.1002/ima.22611
XiaoQing Zhang, GuangYu Wang, Shu-Guang Zhao
{"title":"COVSeg-NET: A deep convolution neural network for COVID-19 lung CT image segmentation","authors":"XiaoQing Zhang,&nbsp;GuangYu Wang,&nbsp;Shu-Guang Zhao","doi":"10.1002/ima.22611","DOIUrl":"10.1002/ima.22611","url":null,"abstract":"<p>COVID-19 is a new type of respiratory infectious disease that poses a serious threat to the survival of human beings all over the world. Using artificial intelligence technology to analyze lung images of COVID-19 patients can achieve rapid and effective detection. This study proposes a COVSeg-NET model that can accurately segment ground glass opaque lesions in COVID-19 lung CT images. The COVSeg-NET model is based on the fully convolutional neural network model structure, which mainly includes convolutional layer, nonlinear unit activation function, maximum pooling layer, batch normalization layer, merge layer, flattening layer, sigmoid layer, and so forth. Through experiments and evaluation results, it can be seen that the dice coefficient, sensitivity, and specificity of the COVSeg-NET model are 0.561, 0.447, and 0.996 respectively, which are more advanced than other deep learning methods. The COVSeg-NET model can use a smaller training set and shorter test time to obtain better segmentation results.</p>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"31 3","pages":"1071-1086"},"PeriodicalIF":3.3,"publicationDate":"2021-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1002/ima.22611","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39154109","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
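For reference, the three figures of merit quoted above can be computed from binary masks as follows. This is a generic metrics sketch (with an assumed epsilon guard against empty masks), not the authors' evaluation code.

    import numpy as np

    # Dice coefficient, sensitivity, and specificity from binary masks.
    def seg_metrics(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8):
        pred, gt = pred.astype(bool), gt.astype(bool)
        tp = np.logical_and(pred, gt).sum()        # lesion voxels found
        fp = np.logical_and(pred, ~gt).sum()       # false alarms
        fn = np.logical_and(~pred, gt).sum()       # missed lesion voxels
        tn = np.logical_and(~pred, ~gt).sum()      # correctly rejected background
        dice = 2.0 * tp / (2.0 * tp + fp + fn + eps)
        sensitivity = tp / (tp + fn + eps)
        specificity = tn / (tn + fp + eps)
        return dice, sensitivity, specificity

    if __name__ == "__main__":
        pred = np.zeros((64, 64)); pred[10:30, 10:30] = 1
        gt = np.zeros((64, 64)); gt[12:32, 12:32] = 1
        print(seg_metrics(pred, gt))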
Deep convolution neural networks to differentiate between COVID-19 and other pulmonary abnormalities on chest radiographs: Evaluation using internal and external datasets
IF 3.3 · CAS Quartile 4, Computer Science
International Journal of Imaging Systems and Technology · Pub Date: 2021-05-13 · DOI: 10.1002/ima.22595
Yongwon Cho, Sung Ho Hwang, Yu-Whan Oh, Byung-Joo Ham, Min Ju Kim, Beom Jin Park
{"title":"Deep convolution neural networks to differentiate between COVID-19 and other pulmonary abnormalities on chest radiographs: Evaluation using internal and external datasets","authors":"Yongwon Cho,&nbsp;Sung Ho Hwang,&nbsp;Yu-Whan Oh,&nbsp;Byung-Joo Ham,&nbsp;Min Ju Kim,&nbsp;Beom Jin Park","doi":"10.1002/ima.22595","DOIUrl":"10.1002/ima.22595","url":null,"abstract":"<p>We aimed to evaluate the performance of convolutional neural networks (CNNs) in the classification of coronavirus disease 2019 (COVID-19) disease using normal, pneumonia, and COVID-19 chest radiographs (CXRs). First, we collected 9194 CXRs from open datasets and 58 from the Korea University Anam Hospital (KUAH). The number of normal, pneumonia, and COVID-19 CXRs were 4580, 3884, and 730, respectively. The CXRs obtained from the open dataset were randomly assigned to the training, tuning, and test sets in a 70:10:20 ratio. For external validation, the KUAH (20 normal, 20 pneumonia, and 18 COVID-19) dataset, verified by radiologists using computed tomography, was used. Subsequently, transfer learning was conducted using DenseNet169, InceptionResNetV2, and Xception to identify COVID-19 using open datasets (internal) and the KUAH dataset (external) with histogram matching. Gradient-weighted class activation mapping was used for the visualization of abnormal patterns in CXRs. The average AUC and accuracy of the multiscale and mixed-COVID-19Net using three CNNs over five folds were (0.99 ± 0.01 and 92.94% ± 0.45%), (0.99 ± 0.01 and 93.12% ± 0.23%), and (0.99 ± 0.01 and 93.57% ± 0.29%), respectively, using the open datasets (internal). Furthermore, these values were (0.75 and 74.14%), (0.72 and 68.97%), and (0.77 and 68.97%), respectively, for the best model among the fivefold cross-validation with the KUAH dataset (external) using domain adaptation. The various state-of-the-art models trained on open datasets show satisfactory performance for clinical interpretation. Furthermore, the domain adaptation for external datasets was found to be important for detecting COVID-19 as well as other diseases.</p>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"31 3","pages":"1087-1104"},"PeriodicalIF":3.3,"publicationDate":"2021-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1002/ima.22595","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39080492","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
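The histogram matching step used here to conform external-site images to the training domain can be sketched with scikit-image; the synthetic intensity distributions below are assumptions for demonstration, not the study's data.

    import numpy as np
    from skimage.exposure import match_histograms

    # Conform an external-site CXR to the intensity distribution of an
    # internal (training-domain) reference image.
    def adapt_to_internal_domain(external_cxr: np.ndarray,
                                 internal_reference: np.ndarray) -> np.ndarray:
        return match_histograms(external_cxr, internal_reference)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        external = rng.normal(0.3, 0.05, (256, 256))   # darker external scan
        internal = rng.normal(0.6, 0.10, (256, 256))   # brighter training domain
        adapted = adapt_to_internal_domain(external, internal)
        print(round(external.mean(), 2), round(adapted.mean(), 2))  # ~0.3 -> ~0.6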
Longitudinal evaluation for COVID-19 chest CT disease progression based on Tchebichef moments
IF 3.3 · CAS Quartile 4, Computer Science
International Journal of Imaging Systems and Technology · Pub Date: 2021-04-28 · DOI: 10.1002/ima.22583
Lu Tang, Chuangeng Tian, Yankai Meng, Kai Xu
{"title":"Longitudinal evaluation for COVID-19 chest CT disease progression based on Tchebichef moments","authors":"Lu Tang,&nbsp;Chuangeng Tian,&nbsp;Yankai Meng,&nbsp;Kai Xu","doi":"10.1002/ima.22583","DOIUrl":"10.1002/ima.22583","url":null,"abstract":"<p>Blur is a key property in the perception of COVID-19 computed tomography (CT) image manifestations. Typically, blur causes edge extension, which brings shape changes in infection regions. Tchebichef moments (TM) have been verified efficiently in shape representation. Intuitively, disease progression of same patient over time during the treatment is represented as different blur degrees of infection regions, since different blur degrees cause the magnitudes change of TM on infection regions image, blur of infection regions can be captured by TM. With the above observation, a longitudinal objective quantitative evaluation method for COVID-19 disease progression based on TM is proposed. COVID-19 disease progression CT image database (COVID-19 DPID) is built to employ radiologist subjective ratings and manual contouring, which can test and compare disease progression on the CT images acquired from the same patient over time. Then the images are preprocessed, including lung automatic segmentation, longitudinal registration, slice fusion, and a fused slice image with region of interest (ROI) is obtained. Next, the gradient of a fused ROI image is calculated to represent the shape. The gradient image of fused ROI is separated into same size blocks, a block energy is calculated as quadratic sum of non-direct current moment values. Finally, the objective assessment score is obtained by TM energy-normalized applying block variances. We have conducted experiment on COVID-19 DPID and the experiment results indicate that our proposed metric supplies a satisfactory correlation with subjective evaluation scores, demonstrating effectiveness in the quantitative evaluation for COVID-19 disease progression.</p>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"31 3","pages":"1120-1127"},"PeriodicalIF":3.3,"publicationDate":"2021-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1002/ima.22583","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39080491","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
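A hypothetical sketch of the block-energy computation: an orthonormal polynomial basis on the integer grid is obtained by QR decomposition of a Vandermonde matrix (equal, up to sign, to the normalized discrete Tchebichef polynomials), and a block's energy is the quadratic sum of its non-DC 2-D moments. The block size and the toy edge example are assumptions, and the paper's full pipeline (registration, fusion, variance normalization) is omitted.

    import numpy as np

    # Orthonormal polynomial basis on {0..n-1} via QR of a Vandermonde matrix
    # (matches the normalized discrete Tchebichef polynomials up to sign).
    def tchebichef_basis(n: int) -> np.ndarray:
        x = np.arange(n, dtype=float)
        vander = np.vander(x, n, increasing=True)   # columns: 1, x, x^2, ...
        q, _ = np.linalg.qr(vander)
        return q                                    # column k = degree-k polynomial

    # Block energy = quadratic sum of the non-DC 2-D moment values.
    def block_energy(block: np.ndarray) -> float:
        t = tchebichef_basis(block.shape[0])
        moments = t.T @ block @ t                   # 2-D moment matrix
        moments[0, 0] = 0.0                         # drop the DC moment
        return float((moments ** 2).sum())

    if __name__ == "__main__":
        sharp = np.zeros((8, 8)); sharp[:, 4:] = 1.0        # hard edge
        blurred = np.full((8, 8), 0.5)                      # edge blurred away
        print(block_energy(sharp) > block_energy(blurred))  # True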
Convolutional capsule network for COVID-19 detection using radiography images
IF 3.3 · CAS Quartile 4, Computer Science
International Journal of Imaging Systems and Technology · Pub Date: 2021-03-02 · DOI: 10.1002/ima.22566
Shamik Tiwari, Anurag Jain
{"title":"Convolutional capsule network for COVID-19 detection using radiography images","authors":"Shamik Tiwari,&nbsp;Anurag Jain","doi":"10.1002/ima.22566","DOIUrl":"10.1002/ima.22566","url":null,"abstract":"<p>Novel corona virus COVID-19 has spread rapidly all over the world. Due to increasing COVID-19 cases, there is a dearth of testing kits. Therefore, there is a severe need for an automatic recognition system as a solution to reduce the spreading of the COVID-19 virus. This work offers a decision support system based on the X-ray image to diagnose the presence of the COVID-19 virus. A deep learning-based computer-aided decision support system will be capable to differentiate between COVID-19 and pneumonia. Recently, convolutional neural network (CNN) is designed for the diagnosis of COVID-19 patients through <i>chest radiography</i> (or <i>chest X-ray</i>, CXR) images. However, due to the usage of CNN, there are some limitations with these decision support systems. These systems suffer with the problem of view-invariance and loss of information due to down-sampling. In this paper, the capsule network (CapsNet)-based system named visual geometry group capsule network (VGG-CapsNet) for the diagnosis of COVID-19 is proposed. Due to the usage of capsule network (CapsNet), the authors have succeeded in removing the drawbacks found in the CNN-based decision support system for the detection of COVID-19. Through simulation results, it is found that VGG-CapsNet has performed better than the CNN-CapsNet model for the diagnosis of COVID-19. The proposed VGG-CapsNet-based system has shown 97% accuracy for COVID-19 versus non-COVID-19 classification, and 92% accuracy for COVID-19 versus normal versus viral pneumonia classification. Proposed VGG-CapsNet-based system available at https://github.com/shamiktiwari/COVID19_Xray can be used to detect the existence of COVID-19 virus in the human body through chest radiographic images.</p>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"31 2","pages":"525-539"},"PeriodicalIF":3.3,"publicationDate":"2021-03-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1002/ima.22566","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"25564586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 36
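To illustrate what a capsule layer adds over a plain CNN, here is a minimal PyTorch sketch of the capsule "squash" nonlinearity and a primary-capsule layer over convolutional features. The layer sizes are illustrative assumptions and dynamic routing is omitted; this is not the VGG-CapsNet code, which is linked above.

    import torch
    import torch.nn as nn

    # Capsule "squash": shrink vector length into [0, 1) while preserving direction.
    def squash(s: torch.Tensor, dim: int = -1, eps: float = 1e-8) -> torch.Tensor:
        sq_norm = (s ** 2).sum(dim=dim, keepdim=True)
        return (sq_norm / (1.0 + sq_norm)) * s / torch.sqrt(sq_norm + eps)

    # Primary capsules: group conv features into 8-D pose vectors, then squash.
    class PrimaryCapsules(nn.Module):
        def __init__(self, in_ch: int = 64, caps_dim: int = 8, n_maps: int = 4):
            super().__init__()
            self.caps_dim = caps_dim
            self.conv = nn.Conv2d(in_ch, n_maps * caps_dim, 3, stride=2, padding=1)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            u = self.conv(x)                             # (B, n_maps*D, H/2, W/2)
            u = u.view(x.size(0), -1, self.caps_dim)     # (B, num_capsules, D)
            return squash(u)

    if __name__ == "__main__":
        feats = torch.randn(2, 64, 16, 16)               # e.g., VGG feature maps
        caps = PrimaryCapsules()(feats)
        print(caps.shape, bool(caps.norm(dim=-1).max() < 1.0))  # lengths < 1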
An automated and fast system to identify COVID-19 from X-ray radiograph of the chest using image processing and machine learning
IF 3.3 · CAS Quartile 4, Computer Science
International Journal of Imaging Systems and Technology · Pub Date: 2021-03-01 · DOI: 10.1002/ima.22564
Murtaza Ali Khan
{"title":"An automated and fast system to identify COVID-19 from X-ray radiograph of the chest using image processing and machine learning","authors":"Murtaza Ali Khan","doi":"10.1002/ima.22564","DOIUrl":"10.1002/ima.22564","url":null,"abstract":"<p>A type of coronavirus disease called COVID-19 is spreading all over the globe. Researchers and scientists are endeavoring to find new and effective methods to diagnose and treat this disease. This article presents an automated and fast system that identifies COVID-19 from X-ray radiographs of the chest using image processing and machine learning algorithms. Initially, the system extracts the feature descriptors from the radiographs of both healthy and COVID-19 affected patients using the speeded up robust features algorithm. Then, visual vocabulary is built by reducing the number of feature descriptors via quantization of feature space using the K-means clustering algorithm. The visual vocabulary train the support vector machine (SVM) classifier. During testing, an X-ray radiograph's visual vocabulary is sent to the trained SVM classifier to detect the absence or presence of COVID-19. The study used the dataset of 340 X-ray radiographs, 170 images of each Healthy and Positive COVID-19 class. During simulations, the dataset split into training and testing parts at various ratios. After training, the system does not require any human intervention and can process thousands of images with high precision in a few minutes. The performance of the system is measured using standard parameters of accuracy and confusion matrix. We compared the performance of the proposed SVM-based classier with the deep-learning-based convolutional neural networks (CNN). The SVM yields better results than CNN and achieves a maximum accuracy of up to 94.12%.</p>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"31 2","pages":"499-508"},"PeriodicalIF":3.3,"publicationDate":"2021-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1002/ima.22564","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"25564588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
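The bag-of-visual-words pipeline the abstract describes (local descriptors, K-means vocabulary, histogram features, SVM) can be sketched as below. ORB stands in for SURF, since SURF requires OpenCV's non-free contrib build; the vocabulary size, the synthetic images, and Euclidean clustering of binary descriptors are simplifying assumptions.

    import numpy as np
    import cv2
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    # Local descriptors per image (ORB here; SURF needs OpenCV's non-free build).
    def extract_descriptors(images):
        orb = cv2.ORB_create()
        descs = []
        for img in images:
            _, d = orb.detectAndCompute(img, None)
            descs.append(d if d is not None else np.empty((0, 32), np.uint8))
        return descs

    # Quantize descriptors against the vocabulary -> normalized word histogram.
    def bovw_histograms(descs, kmeans):
        k = kmeans.n_clusters
        hists = np.zeros((len(descs), k))
        for i, d in enumerate(descs):
            if len(d):
                words = kmeans.predict(d.astype(float))
                hists[i] = np.bincount(words, minlength=k) / len(words)
        return hists

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        imgs = [rng.integers(0, 256, (128, 128), dtype=np.uint8) for _ in range(10)]
        labels = np.array([0, 1] * 5)                  # healthy vs. COVID-19 (synthetic)
        descs = extract_descriptors(imgs)
        vocab = KMeans(n_clusters=16, n_init=10).fit(np.vstack(descs).astype(float))
        clf = SVC(kernel="linear").fit(bovw_histograms(descs, vocab), labels)
        print(clf.predict(bovw_histograms(descs[:2], vocab)))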