{"title":"Transfer Learning to Detect COVID-19 Automatically from X-Ray Images Using Convolutional Neural Networks","authors":"Mundher Mohammed Taresh, N. Zhu, T. Ali, Asaad Shakir Hameed, Modhi Lafta Mutar","doi":"10.1101/2020.08.25.20182170","DOIUrl":"https://doi.org/10.1101/2020.08.25.20182170","url":null,"abstract":"Novel coronavirus pneumonia (COVID-19) is a contagious disease that has already caused thousands of deaths and infected millions of people worldwide. Thus, any technological tool that allows the fast detection of COVID-19 infection with high accuracy can help healthcare professionals. This study explores the effectiveness of artificial intelligence (AI) in the rapid and reliable detection of COVID-19 based on chest X-ray imaging. In this study, reliable pre-trained deep learning algorithms were applied to achieve the automatic detection of COVID-19-induced pneumonia from digital chest X-ray images. Moreover, the study evaluates the performance of advanced neural architectures proposed for the classification of medical images over recent years. The data set used in the experiments comprises 274 COVID-19 cases, 380 viral pneumonia cases, and 380 healthy cases, derived from several open online sources of X-ray images. The confusion matrix provided a basis for evaluating the models after classification. Furthermore, the open-source library PYCM was used to compute the statistical parameters. The study revealed the superiority of the VGG16 model over the other models applied in this research, performing best in terms of both overall and per-class scores. According to the research results, deep learning with X-ray imaging is useful in capturing critical biological markers associated with COVID-19 infection. The technique can help physicians diagnose COVID-19 infection. Meanwhile, the high accuracy of this computer-aided diagnostic tool can significantly improve the speed and accuracy of COVID-19 diagnosis.","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":null,"pages":null},"PeriodicalIF":7.6,"publicationDate":"2020-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43397552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
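The overall and per-class statistics that a library such as PYCM derives from a confusion matrix can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's pipeline, and the 3-class counts below (COVID-19 / viral pneumonia / normal) are invented:

```python
import numpy as np

def per_class_metrics(cm):
    """Per-class sensitivity and specificity, plus overall accuracy,
    from a square confusion matrix (rows = true class, cols = predicted)."""
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    tp = np.diag(cm)
    fn = cm.sum(axis=1) - tp          # missed cases per true class
    fp = cm.sum(axis=0) - tp          # false alarms per predicted class
    tn = total - tp - fn - fp
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = tp.sum() / total
    return sensitivity, specificity, accuracy

# Illustrative counts only -- not results from the paper.
cm = [[260,  10,   4],
      [  8, 360,  12],
      [  2,  14, 364]]
sens, spec, acc = per_class_metrics(cm)
```

Per-class scores make visible what a single accuracy number hides, e.g. whether the COVID-19 row is traded off against the pneumonia row.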
{"title":"An Algorithm of <i>l</i> <sub>1</sub>-Norm and <i>l</i> <sub>0</sub>-Norm Regularization Algorithm for CT Image Reconstruction from Limited Projection.","authors":"Xiezhang Li, Guocan Feng, Jiehua Zhu","doi":"10.1155/2020/8873865","DOIUrl":"https://doi.org/10.1155/2020/8873865","url":null,"abstract":"<p><p>The <i>l</i> <sub>1</sub>-norm regularization has attracted attention for image reconstruction in computed tomography. The <i>l</i> <sub>0</sub>-norm of the gradients of an image provides a measure of the sparsity of gradients of the image. In this paper, we present a new combined <i>l</i> <sub>1</sub>-norm and <i>l</i> <sub>0</sub>-norm regularization model for image reconstruction from limited projection data in computed tomography. We also propose an algorithm in the algebraic framework to solve the optimization effectively, using a nonmonotone alternating direction algorithm with a hard thresholding method. Numerical experiments indicate that the new algorithm yields a marked improvement by incorporating <i>l</i> <sub>0</sub>-norm regularization.</p>","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":null,"pages":null},"PeriodicalIF":7.6,"publicationDate":"2020-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2020/8873865","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38361996","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
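The two thresholding operators at the heart of l1- versus l0-regularized reconstruction can be shown directly. This is a generic sketch of soft (l1) and hard (l0) thresholding, not the authors' full nonmonotone alternating direction algorithm:

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of the l1 norm: shrink every entry toward zero."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def hard_threshold(x, lam):
    """Hard thresholding, used for the l0 penalty: keep entries whose
    magnitude exceeds the threshold unchanged, zero out the rest."""
    out = np.array(x, dtype=float)
    out[np.abs(out) <= lam] = 0.0
    return out

# A toy gradient vector: note soft thresholding biases the large
# entries (shrinks -2.0 to -1.5), while hard thresholding does not.
g = np.array([-2.0, -0.3, 0.0, 0.5, 1.5])
```

Combining both penalties lets the model exploit the convexity of l1 while the l0 term keeps large gradients unbiased, which is the improvement the experiments measure.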
{"title":"COVID-19 Deep Learning Prediction Model Using Publicly Available Radiologist-Adjudicated Chest X-Ray Images as Training Data: Preliminary Findings.","authors":"Mohd Zulfaezal Che Azemin, Radhiana Hassan, Mohd Izzuddin Mohd Tamrin, Mohd Adli Md Ali","doi":"10.1155/2020/8828855","DOIUrl":"https://doi.org/10.1155/2020/8828855","url":null,"abstract":"<p><p>The key component in deep learning research is the availability of training data sets. With a limited number of publicly available COVID-19 chest X-ray images, the generalization and robustness of deep learning models to detect COVID-19 cases developed based on these images are questionable. We aimed to use thousands of readily available chest radiograph images with clinical findings associated with COVID-19 as a training data set, mutually exclusive from the images with confirmed COVID-19 cases, which were used as the testing data set. We used a deep learning model based on the ResNet-101 convolutional neural network architecture, which was pretrained to recognize objects from a million images and then retrained to detect abnormality in chest X-ray images. The performance of the model in terms of area under the receiver operating characteristic curve, sensitivity, specificity, and accuracy was 0.82, 77.3%, 71.8%, and 71.9%, respectively. 
The strength of this study lies in the use of labels that have a strong clinical association with COVID-19 cases and the use of mutually exclusive publicly available data for training, validation, and testing.</p>","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":null,"pages":null},"PeriodicalIF":7.6,"publicationDate":"2020-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2020/8828855","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38313824","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
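The reported area under the receiver operating characteristic curve can be computed from raw model scores with the rank-statistic formulation. A small self-contained sketch; the scores and labels below are invented:

```python
import numpy as np

def roc_auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen positive case receives a
    higher score than a randomly chosen negative case."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # Count pairwise wins of positives over negatives; ties count half.
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Toy abnormality scores for six test radiographs (1 = abnormal).
auc = roc_auc([0.9, 0.8, 0.7, 0.4, 0.3, 0.2], [1, 1, 0, 1, 0, 0])
```

Unlike sensitivity and specificity, this number does not depend on any single decision threshold, which is why it is reported alongside them.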
{"title":"Comparison of Low-Pass Filters for SPECT Imaging.","authors":"Inayatullah S Sayed, Siti S Ismail","doi":"10.1155/2020/9239753","DOIUrl":"https://doi.org/10.1155/2020/9239753","url":null,"abstract":"<p><p>In single photon emission computed tomography (SPECT) imaging, the choice of a suitable filter and its parameters for noise reduction purposes is a big challenge. Adverse effects on image quality arise if an improper filter is selected. Filtered back projection (FBP) is the most popular technique for image reconstruction in SPECT. With this technique, different types of reconstruction filters are used, such as the Butterworth and the Hamming. In this study, the effects of the Butterworth filter on the quality of reconstructed images were compared with those of the Hamming filter. A Philips ADAC forte gamma camera was used. A low-energy, high-resolution collimator was installed on the gamma camera. SPECT data were acquired by scanning a phantom with an insert composed of hot and cold regions. A Technetium-99m radioactive solution was homogenously mixed into the phantom. Furthermore, a symmetrical energy window (20%) centered at 140 keV was set. Images were reconstructed by the FBP method. Various cutoff frequency values, namely, 0.35, 0.40, 0.45, and 0.50 cycles/cm, were selected for both filters, whereas for the Butterworth filter, the order was set at 7. Images of hot and cold regions were analyzed in terms of detectability, contrast, and signal-to-noise ratio (SNR). The findings of our study indicate that the Butterworth filter was able to expose more hot and cold regions in reconstructed images. In addition, higher contrast values were recorded, as compared to the Hamming filter. However, with the Butterworth filter, the SNR for both types of regions decreased as the cutoff frequency increased, as compared to the Hamming filter. Overall, the Butterworth filter under investigation provided results superior to those of the Hamming filter. Effects of both filters on the quality of hot and cold region images varied with the change in cutoff frequency.</p>","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":null,"pages":null},"PeriodicalIF":7.6,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2020/9239753","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37849424","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
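The two reconstruction filters differ in their frequency response, which is what the cutoff and order parameters control. One common parameterization is sketched below; exact definitions vary between vendors and reconstruction packages, so treat these formulas as illustrative rather than the study's implementation:

```python
import numpy as np

def butterworth(f, cutoff, order=7):
    """One common Butterworth low-pass response for FBP reconstruction:
    flat passband, gain 1/sqrt(2) at the cutoff, steep roll-off for
    higher orders (the study used order 7)."""
    return 1.0 / np.sqrt(1.0 + (f / cutoff) ** (2 * order))

def hamming(f, cutoff):
    """Hamming low-pass window: a raised cosine that tapers from 1.0
    at zero frequency to 0.08 at the cutoff, and is zero beyond it."""
    h = 0.54 + 0.46 * np.cos(np.pi * np.asarray(f, float) / cutoff)
    return np.where(np.asarray(f, float) <= cutoff, h, 0.0)

# Spatial frequency axis in cycles/cm, one of the study's cutoffs.
f = np.linspace(0.0, 0.8, 81)
b = butterworth(f, cutoff=0.45)
h = hamming(f, cutoff=0.45)
```

The Butterworth curve passes mid frequencies at nearly full gain, which matches the higher contrast and detectability reported; the Hamming window attenuates them earlier, which smooths noise and explains its SNR behavior.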
{"title":"Fully Automated Bone Age Assessment on Large-Scale Hand X-Ray Dataset.","authors":"Xiaoying Pan, Yizhe Zhao, Hao Chen, De Wei, Chen Zhao, Zhi Wei","doi":"10.1155/2020/8460493","DOIUrl":"https://doi.org/10.1155/2020/8460493","url":null,"abstract":"<p><p>Bone age assessment (BAA) is an essential topic in the clinical practice of evaluating the biological maturity of children. Because the manual method is time-consuming and prone to observer variability, it is attractive to develop computer-aided and automated methods for BAA. In this paper, we present a fully automatic BAA method. To eliminate noise in a raw X-ray image, we start with using U-Net to precisely segment the hand mask from a raw X-ray image. Even though U-Net can perform the segmentation with high precision, it requires a large annotated dataset. To alleviate the annotation burden, we propose to use deep active learning (AL) to intentionally select the most informative unlabeled samples. These samples are given to an oracle for annotation and then used for subsequent training. In the beginning, only 300 images were manually annotated, after which the improved U-Net within the AL framework could robustly segment all 12,611 images in the RSNA dataset. The AL segmentation model achieved a Dice score of 0.95 on the annotated testing set. To optimize the learning process, we employ six off-the-shelf deep Convolutional Neural Networks (CNNs) with weights pretrained on ImageNet. We use them to extract features of preprocessed hand images with a transfer learning technique. In the end, a variety of ensemble regression algorithms are applied to perform BAA. In addition, we select a specific CNN for feature extraction and explain the reasons for this choice. Experimental results show that the proposed approach achieved a discrepancy between manual and predicted bone age of about 6.96 and 7.35 months for male and female cohorts, respectively, on the RSNA dataset. 
These accuracies are comparable to state-of-the-art performance.</p>","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":null,"pages":null},"PeriodicalIF":7.6,"publicationDate":"2020-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2020/8460493","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37752031","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
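A typical selection step in a deep active learning loop ranks unlabeled images by model uncertainty and sends the top candidates to the oracle. Entropy ranking is one common criterion and is an assumption here, since the abstract does not state which informativeness measure was used:

```python
import numpy as np

def select_most_uncertain(pixel_probs, k):
    """Rank unlabeled images by the mean per-pixel binary entropy of a
    segmentation model's foreground probabilities, and return indices
    of the k most uncertain images (the annotation candidates)."""
    p = np.clip(np.asarray(pixel_probs, dtype=float), 1e-12, 1 - 1e-12)
    entropy = -(p * np.log(p) + (1 - p) * np.log(1 - p))
    per_image = entropy.mean(axis=tuple(range(1, p.ndim)))
    return np.argsort(per_image)[::-1][:k]

# Three toy 2x2 "probability maps": two confident, one uncertain.
maps = np.array([[[0.99, 0.01], [0.98, 0.02]],
                 [[0.95, 0.05], [0.97, 0.03]],
                 [[0.55, 0.45], [0.60, 0.40]]])
picked = select_most_uncertain(maps, k=1)
```

Labeling only the images the model is least sure about is what lets 300 initial annotations bootstrap segmentation of the full dataset.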
{"title":"Microvascular Ultrasonic Imaging of Angiogenesis Identifies Tumors in a Murine Spontaneous Breast Cancer Model.","authors":"Sarah E Shelton, Jodi Stone, Fei Gao, Donglin Zeng, Paul A Dayton","doi":"10.1155/2020/7862089","DOIUrl":"https://doi.org/10.1155/2020/7862089","url":null,"abstract":"<p><p>The purpose of this study is to determine if microvascular tortuosity can be used as an imaging biomarker for the presence of tumor-associated angiogenesis and if imaging this biomarker can be used as a specific and sensitive method of locating solid tumors. Acoustic angiography, an ultrasound-based microvascular imaging technology, was used to visualize angiogenesis development of a spontaneous mouse model of breast cancer (<i>n</i> = 48). A reader study was used to assess visual discrimination between image types, and quantitative methods utilized metrics of tortuosity and spatial clustering for tumor detection. The reader study resulted in an area under the curve of 0.8, while the clustering approach resulted in the best classification with an area under the curve of 0.95. Both the qualitative and quantitative methods produced a correlation between sensitivity and tumor diameter. Imaging of vascular geometry with acoustic angiography provides a robust method for discriminating between tumor and healthy tissue in a mouse model of breast cancer. Multiple methods of analysis have been presented for a wide range of tumor sizes. Application of these techniques to clinical imaging could improve breast cancer diagnosis, as well as improve specificity in assessing cancer in other tissues. 
The clustering approach may be beneficial for other types of morphological analysis beyond vascular ultrasound images.</p>","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":null,"pages":null},"PeriodicalIF":7.6,"publicationDate":"2020-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2020/7862089","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37670230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
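A standard tortuosity measure for a sampled vessel centerline is the distance metric: path length divided by end-to-end chord length. The abstract does not specify which tortuosity metrics were quantified, so the sketch below is illustrative:

```python
import numpy as np

def distance_metric(points):
    """Tortuosity of a sampled centerline: total arc length divided by
    the straight-line (chord) distance between its endpoints.
    Equals 1.0 for a straight vessel, grows as the vessel winds."""
    points = np.asarray(points, dtype=float)
    steps = np.diff(points, axis=0)
    arc = np.linalg.norm(steps, axis=1).sum()
    chord = np.linalg.norm(points[-1] - points[0])
    return arc / chord

straight = [[0, 0], [1, 0], [2, 0]]   # healthy-looking segment
zigzag = [[0, 0], [1, 1], [2, 0]]     # tortuous segment, same endpoints
```

Tumor-associated vessels tend to score higher on such metrics than healthy ones, which is the basis for using tortuosity as an imaging biomarker.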
{"title":"Detection and Localization of Early-Stage Multiple Brain Tumors Using a Hybrid Technique of Patch-Based Processing, k-means Clustering and Object Counting.","authors":"Mohamed Nasor, Walid Obaid","doi":"10.1155/2020/9035096","DOIUrl":"https://doi.org/10.1155/2020/9035096","url":null,"abstract":"<p><p>Brain tumors are a major health problem that affects the lives of many people. These tumors are classified as benign or cancerous. The latter can be fatal if not properly diagnosed and treated. Therefore, the diagnosis of brain tumors at the early stages of their development can significantly improve the chances of the patient's full recovery after treatment. In addition to laboratory analyses, clinicians and surgeons extract information from medical images, recorded by various systems such as magnetic resonance imaging (MRI), X-ray, and computed tomography (CT). The extracted information is used to identify the essential characteristics of brain tumors (location, size, and type) in order to achieve an accurate diagnosis and determine the most appropriate treatment protocol. In this paper, we present an automated machine vision technique for the detection and localization of brain tumors in MRI images at their very early stages using a combination of <i>k</i>-means clustering, patch-based image processing, object counting, and tumor evaluation. The technique was tested on twenty real MRI images and was found to be capable of detecting multiple tumors in MRI images regardless of intensity variations, size, and location, including tumors of very small size. 
In addition to its use for diagnosis, the technique can be integrated into automated treatment instruments and robotic surgery systems.</p>","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":null,"pages":null},"PeriodicalIF":7.6,"publicationDate":"2020-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2020/9035096","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38010019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
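The k-means step of such a hybrid technique can be illustrated on scalar patch intensities. This is a deliberately simple 1-D sketch with invented intensity values, not the authors' implementation:

```python
import numpy as np

def kmeans_1d(values, k, iters=50):
    """Plain k-means on scalar patch intensities. Centers are seeded
    deterministically at evenly spaced quantiles; returns the final
    cluster centers and one label per input value."""
    values = np.asarray(values, dtype=float)
    centers = np.quantile(values, np.linspace(0.0, 1.0, k))
    labels = np.zeros(len(values), dtype=int)
    for _ in range(iters):
        # Assign each value to its nearest center, then recenter.
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return centers, labels

# Toy patch means: background ~10, normal tissue ~100, bright lesion ~200.
patches = np.array([9, 11, 10, 98, 102, 100, 199, 201])
centers, labels = kmeans_1d(patches, k=3)
```

Patches assigned to the outlying bright cluster become candidate tumor objects, which the counting and evaluation stages then filter.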
{"title":"Corrigendum to “Intraoperative Imaging Modalities and Compensation for Brain Shift in Tumor Resection Surgery”","authors":"Siming Bayer, A. Maier, M. Ostermeier, R. Fahrig","doi":"10.1155/2019/9249016","DOIUrl":"https://doi.org/10.1155/2019/9249016","url":null,"abstract":"[This corrects the article DOI: 10.1155/2017/6028645.].","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":null,"pages":null},"PeriodicalIF":7.6,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2019/9249016","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48243249","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Semi-Automated Usability Evaluation Framework for Interactive Image Segmentation Systems","authors":"Mario Amrehn, S. Steidl, Reinier Kortekaas, Maddalena Strumia, M. Weingarten, M. Kowarschik, A. Maier","doi":"10.1155/2019/1464592","DOIUrl":"https://doi.org/10.1155/2019/1464592","url":null,"abstract":"For complex segmentation tasks, the achievable accuracy of fully automated systems is inherently limited. Specifically, when a precise segmentation result is desired for a small amount of given data sets, semi-automatic methods exhibit a clear benefit for the user. The optimization of human computer interaction (HCI) is an essential part of interactive image segmentation. Nevertheless, publications introducing novel interactive segmentation systems (ISS) often lack an objective comparison of HCI aspects. It is demonstrated that even when the underlying segmentation algorithm is the same throughout interactive prototypes, their user experience may vary substantially. As a result, users prefer simple interfaces as well as a considerable degree of freedom to control each iterative step of the segmentation. In this article, an objective method for the comparison of ISS is proposed, based on extensive user studies. A summative qualitative content analysis is conducted via abstraction of visual and verbal feedback given by the participants. A direct assessment of the segmentation system is executed by the users via the system usability scale (SUS) and AttrakDiff-2 questionnaires. Furthermore, an approximation of the usability findings in those studies is introduced, derived solely from the system-measurable user actions during use of the interactive segmentation prototypes. The prediction of all questionnaire results has an average relative error of 8.9%, which is close to the expected precision of the questionnaire results themselves. 
This automated evaluation scheme may significantly reduce the resources necessary to investigate each variation of a prototype's user interface (UI) features and segmentation methodologies.","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":null,"pages":null},"PeriodicalIF":7.6,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2019/1464592","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47847902","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
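Predicting questionnaire results from system-measurable user actions is, at its simplest, a regression problem. A least-squares sketch with invented interaction features and SUS scores (the paper's actual feature set and model are not specified in the abstract):

```python
import numpy as np

# Toy feature matrix: one row per study participant, columns are
# hypothetical logged interaction measures (seed points placed,
# undo operations, seconds per iteration) -- purely illustrative.
X = np.array([[12, 1, 30.0],
              [20, 4, 55.0],
              [ 8, 0, 22.0],
              [15, 2, 40.0],
              [25, 6, 70.0]])
sus = np.array([82.5, 60.0, 90.0, 75.0, 50.0])  # SUS scores, 0-100 scale

# Fit a linear model with an intercept by ordinary least squares,
# then measure the mean relative error of the in-sample predictions.
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, sus, rcond=None)
pred = A @ coef
rel_err = np.mean(np.abs(pred - sus) / sus)
```

In practice such a model would be validated on held-out participants; the 8.9% figure in the abstract is a prediction error of that kind, not an in-sample fit.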
{"title":"Automated Estimation of Acute Infarct Volume from Noncontrast Head CT Using Image Intensity Inhomogeneity Correction","authors":"K. Cauley, G. Mongelluzzo, S. Fielden","doi":"10.1155/2019/1720270","DOIUrl":"https://doi.org/10.1155/2019/1720270","url":null,"abstract":"Identification of early ischemic changes (EIC) on noncontrast head CT scans performed within the first few hours of stroke onset may have important implications for subsequent treatment, though early stroke is poorly delimited on these studies. Lack of sharp lesion boundary delineation in early infarcts precludes manual volume measures, as well as measures using edge-detection or region-filling algorithms. We wished to test the hypothesis that image intensity inhomogeneity correction may provide a sensitive method for identifying the subtle regional hypodensity which is characteristic of early ischemic infarcts. A digital image analysis algorithm was developed using image intensity inhomogeneity correction (IIC) and intensity thresholding. Two different IIC algorithms (FSL and ITK) were compared. The method was evaluated using simulated infarcts and clinical cases. For synthetic infarcts, measured infarct volumes demonstrated strong correlation with the true lesion volume (for 20% decreased density “infarcts,” Pearson r = 0.998 for both algorithms); both algorithms demonstrated improved accuracy with increasing lesion size and decreasing lesion density. In clinical cases (41 acute infarcts in 30 patients), calculated infarct volumes using FSL IIC correlated with the ASPECTS scores (Pearson r = 0.680) and the admission NIHSS (Pearson r = 0.544). Calculated infarct volumes were highly correlated with the clinical decision to treat with IV-tPA. 
Image intensity inhomogeneity correction, when applied to noncontrast head CT, provides a tool for image analysis to aid in detection of EIC, as well as to evaluate and guide improvements in scan quality for optimal detection of EIC.","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":null,"pages":null},"PeriodicalIF":7.6,"publicationDate":"2019-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2019/1720270","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48989856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
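The core idea, flagging voxels that fall below a locally corrected mean intensity, can be sketched without any specific IIC package. The box-blur normalization below is a crude stand-in for the FSL/ITK bias-field correction used in the study, and the 10% drop threshold is an invented parameter:

```python
import numpy as np

def box_blur(img, radius):
    """Mean filter computed via an integral image (no SciPy needed)."""
    img = np.asarray(img, dtype=float)
    padded = np.pad(img, radius, mode='edge')
    s = np.cumsum(np.cumsum(padded, axis=0), axis=1)
    s = np.pad(s, ((1, 0), (1, 0)))      # zero row/col so window sums are clean
    k = 2 * radius + 1
    h, w = img.shape
    total = (s[k:k + h, k:k + w] - s[:h, k:k + w]
             - s[k:k + h, :w] + s[:h, :w])
    return total / (k * k)

def hypodensity_mask(ct, radius=2, drop=0.1):
    """Flag pixels at least `drop` (fractional) below the local mean:
    a rough stand-in for inhomogeneity correction plus thresholding."""
    local_mean = box_blur(ct, radius)
    return ct < (1.0 - drop) * local_mean

# Synthetic slice: uniform parenchyma with one subtly hypodense pixel.
ct = np.full((7, 7), 100.0)
ct[3, 3] = 60.0
mask = hypodensity_mask(ct, radius=1, drop=0.1)
```

Summing the mask over slices and multiplying by voxel volume gives the automated infarct volume estimate the paper correlates with ASPECTS and NIHSS.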