An artificial intelligence approach for segmenting and classifying brain lesions caused by stroke
Roberto Mena, Enrique Pelaez, Francis Loayza, Alex Macas, Heydy Franco-Maldonado
Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, published 2023-10-04. DOI: https://doi.org/10.1080/21681163.2023.2264410

Abstract: Brain injuries caused by strokes are one of the leading causes of disability worldwide. Current procedures require a specialised physician to analyse MRI images before diagnosing and deciding on a specific treatment; however, this procedure can be costly and time-consuming. Artificial intelligence techniques are becoming a game-changer for analysing MRI images. This work proposes an end-to-end approach in three stages: pre-processing to normalise the images to the standard MNI space and to correct inhomogeneities and bias; lesion segmentation using a CNN trained on cerebrovascular accident data, together with feature extraction; and classification to determine the vascular territory within which the lesion occurred. A CLCI-Net was used for stroke segmentation. Four deep learning (DL) and four shallow machine learning (ML) architectures were evaluated for localising the stroke territory. All model architectures were designed, analysed and compared on their performance scores, reaching an accuracy of 84% with the DL models and 95% with the shallow ML models. The proposed methodology may be helpful for rapid and accurate stroke assessment in acute treatment, minimising patient complications.

Keywords: artificial intelligence; lesion segmentation; MRI preprocessing; stroke assessment

Acknowledgement: We would like to thank Carlos Jimenez, Alisson Constantine and Edwin Valarezo for their helpful contribution in perfecting the text and debugging the scripts.

Disclosure statement: All authors have seen and agreed with the content of the manuscript; there is no financial interest to report, no conflicts of interest to declare, and no funding sources involved. We certify that the submission is original work and is not under review at any other publication.

Notes on contributors: Roberto Alejandro Mena is a graduate student in Computer Science Engineering at Escuela Superior Politécnica del Litoral (ESPOL). Throughout his career, he has played a leading role as a data analyst in various research projects, mainly centered on system development for magnetic resonance imaging (MRI) processing and visualization. Enrique Peláez earned his Ph.D. in Computer Engineering from the University of South Carolina, USA, in 1994. He is currently a Professor at ESPOL, where he leads AI research in computational intelligence. In recent years, Dr. Peláez has been engaged in applied research on Parkinson's disease, leveraging machine and deep learning techniques. His academic contributions have been showcased in leading publications and forums, with papers presented at several conferences and symposia, and his work has been published in journals including IEEE titles and Nature Communications. His research topics encompass EEG signal classification, deep learning for medical imaging, and behavioural signal processing using AI.
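
The abstract describes the territory-classification stage only at a high level, so here is a hedged sketch of how a shallow ML classifier of the kind evaluated there could look: simple hand-crafted features from a binary lesion mask (centroid in MNI space and lesion volume, both illustrative assumptions rather than the authors' feature set) fed to a scikit-learn random forest.

```python
# Hypothetical sketch of the territory-classification stage: shallow ML on simple
# lesion-mask features. Feature set and labels are illustrative, not the paper's.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def lesion_features(mask: np.ndarray, voxel_size_mm: float = 1.0) -> np.ndarray:
    """Centroid (x, y, z) and volume of a binary 3-D lesion mask."""
    coords = np.argwhere(mask > 0)
    centroid = coords.mean(axis=0)                   # mean voxel index per axis
    volume = coords.shape[0] * voxel_size_mm ** 3    # lesion volume in mm^3
    return np.concatenate([centroid, [volume]])

def fit_territory_classifier(masks, territories):
    """masks: binary 3-D arrays in MNI space; territories: labels such as 'MCA', 'PCA'."""
    X = np.stack([lesion_features(m) for m in masks])
    y = np.asarray(territories)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
    return clf.fit(X, y)
```
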
An automated system to distinguish between Corona and Viral Pneumonia chest diseases based on image processing techniques
Amani Al-Ghraibah, Muneera Altayeb, Feras A. Alnaimat
Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, published 2023-09-30. DOI: https://doi.org/10.1080/21681163.2023.2261575

Abstract: Concerns about diagnosing chest diseases have grown considerably, especially since the COVID-19 pandemic. Regular diagnostic procedures sometimes fail to distinguish between COVID-19 and Viral Pneumonia, and Polymerase Chain Reaction (PCR) tests are time-consuming and require convoluted manual procedures. Artificial Intelligence (AI) techniques have achieved high performance in aiding medical diagnostic processes. The innovation of this work lies in a new diagnostic technique that distinguishes between COVID-19 and Viral Pneumonia using advanced AI technologies: novel features are extracted from chest X-ray images based on Wavelet analysis, the Scale-Invariant Feature Transform (SIFT), and Mel-Frequency Cepstral Coefficients (MFCC). Support vector machines (SVM) and artificial neural networks (ANN) were used to build classification models from 1200 chest X-ray images per class. With Wavelet features, the SVM and ANN models reached about 97% accuracy, and with SIFT features they were closer to 99%. The proposed models were very effective at identifying COVID-19 and Viral Pneumonia, so physicians can determine the best treatment course for patients with the support of this high accuracy. Moreover, the model can be used in hospitals and emergency rooms when a massive number of patients are waiting, as it is faster and more accurate than the regular diagnostic process, with each step taking a few seconds on average to complete.

Keywords: chest X-ray images; feature extraction; SVM; image classification

Disclosure statement: No potential conflict of interest was reported by the author(s).
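
To illustrate one feature branch described in the abstract, the sketch below extracts simple wavelet-energy features from chest X-ray arrays with PyWavelets and trains an SVM with scikit-learn. The wavelet family, decomposition level and summary statistics are assumptions rather than the authors' exact configuration, and the SIFT and MFCC branches are omitted.

```python
# Minimal sketch: wavelet-energy features + SVM for two-class chest X-ray classification.
import numpy as np
import pywt
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def wavelet_features(image: np.ndarray, wavelet: str = "db4", level: int = 3) -> np.ndarray:
    """Energy and standard deviation of each 2-D wavelet sub-band."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
    bands = [coeffs[0]] + [band for detail in coeffs[1:] for band in detail]
    return np.array([[np.mean(b ** 2), np.std(b)] for b in bands]).ravel()

def train_classifier(images, labels):
    """labels: 0 = viral pneumonia, 1 = COVID-19 (illustrative encoding)."""
    X = np.stack([wavelet_features(img) for img in images])
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10))
    return model.fit(X, labels)
```
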
Gene expression extraction in cervical cancer by segmentation of microarray images using a novel fuzzy method
Nayyer Mostaghim Bakhshayesh, Mousa Shamsi, Faegheh Golabi
Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, published 2023-09-30. DOI: https://doi.org/10.1080/21681163.2023.2261555

Abstract: Gene expression values are needed to identify gene biomarkers involved in all types of cancer, and microarray data are among the best data for this purpose. Extracting gene expression values from microarray images, however, involves a number of challenges. This article presents a fully automatic and comprehensive method that can deal with the various challenges in these images and obtain gene expression values with high accuracy. A pre-processing approach is proposed for contrast enhancement using a genetic algorithm and for removing noise and artefacts in microarray cells using a wavelet transform based on a complex Gaussian scaling model. For each spot, the coordinate centre is determined using Self-Organising Maps. Then, using a new hybrid model based on the Fuzzy Local Information Gaussian Mixture Model (FLIGMM), the position of each spot is accurately determined. In this model, various features are obtained from local pixel information, taking the pixel-neighbourhood correlation coefficient into account. Finally, the gene expression values are obtained. The performance of the proposed algorithm was evaluated using real cervical cancer microarray images from the GMRCL microarray dataset as well as simulated images. The results show that the proposed algorithm achieves 90.91% and 98% accuracy in segmenting noiseless and noisy microarray spots, respectively.
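
As a simplified stand-in for the spot-segmentation step, the sketch below fits a plain two-component Gaussian mixture to the pixel intensities of one microarray grid cell, separating spot from background. The paper's FLIGMM additionally exploits fuzzy local (neighbourhood) information, which is omitted here, and the expression estimate is a crude background-corrected mean, not the authors' measure.

```python
# Simplified GMM-based spot segmentation; FLIGMM's fuzzy local-information term is omitted.
import numpy as np
from sklearn.mixture import GaussianMixture

def segment_spot(cell: np.ndarray) -> np.ndarray:
    """Return a binary mask of the spot within one microarray grid cell."""
    intensities = cell.reshape(-1, 1).astype(float)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(intensities)
    labels = gmm.predict(intensities).reshape(cell.shape)
    spot_label = int(np.argmax(gmm.means_.ravel()))   # brighter component = spot
    return labels == spot_label

def spot_expression(cell: np.ndarray) -> float:
    """Background-corrected mean intensity as a crude proxy for gene expression."""
    mask = segment_spot(cell)
    return float(cell[mask].mean() - cell[~mask].mean())
```
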
RePoint-Net detection and 3DSqU² Net segmentation for automatic identification of pulmonary nodules in computed tomography images
Shabnam Ghasemi, Shahin Akbarpour, Ali Farzan, Mohammad Ali Jamali
Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, published 2023-09-30. DOI: https://doi.org/10.1080/21681163.2023.2258998

Abstract: Lung cancer is a leading cause of cancer-related deaths. Computer-aided detection (CAD) has emerged as a valuable tool to assist radiologists in the automated detection and segmentation of pulmonary nodules in Computed Tomography (CT) scans, which can indicate early-stage lung cancer. Detecting small nodules, however, remains challenging. This paper proposes novel techniques to address this challenge, achieving high sensitivity and a low false-positive rate in nodule identification using the RePoint-Net detection network. In addition, 3DSqU²Net, a novel nodule segmentation approach incorporating full-scale skip connections and deep supervision, is introduced. A 3D CNN model is employed to classify nodule candidates, producing the final classification by combining the outputs of the previous steps. Extensive training and testing on the public LIDC-IDRI lung CT database validate the proposed model, which reaches a remarkable 97.4% sensitivity in identifying nodule candidates, surpassing human specialists. Moreover, CT texture analysis accurately differentiates between malignant and benign pulmonary nodules thanks to its ability to capture subtle differences in tissue characteristics; this approach achieves 95.8% sensitivity in nodule classification, offering non-invasive support for clinical decision-making in managing pulmonary nodules and improving patient outcomes.
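
The abstract mentions a 3D CNN for classifying nodule candidates; the following is a hypothetical sketch of such a false-positive-reduction network in PyTorch. Layer sizes, patch size and the two-class head are illustrative assumptions, not the architecture used in the paper.

```python
# Hypothetical 3-D CNN that scores candidate CT patches as nodule / non-nodule.
import torch
import torch.nn as nn

class Candidate3DCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),        # global average pooling over the patch
        )
        self.classifier = nn.Linear(32, 2)  # nodule vs. non-nodule logits

    def forward(self, x):                   # x: (batch, 1, D, H, W) CT patch
        return self.classifier(self.features(x).flatten(1))

# Example: score a single 32x32x32 candidate patch
logits = Candidate3DCNN()(torch.randn(1, 1, 32, 32, 32))
```
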
The application of deep learning methods in knee joint sports injury diseases
Yeqiang Luo, Jing Liang, Shanghui Lin, Tianmo Bai, Lingchuang Kong, Yan Jin, Xin Zhang, Baofeng Li, Bei Chen
Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, published 2023-09-23. DOI: https://doi.org/10.1080/21681163.2023.2261554

Abstract: Deep learning is a powerful branch of machine learning that offers a promising new approach for diagnosing diseases. However, deep learning methods for assessing the anterior cruciate ligament (ACL) are still limited to evaluating whether an injury is present, and existing models tend to have modest accuracy and complex parameters. In this study, we developed a deep learning model based on ResNet-18 to detect ACL conditions. The results suggest that there is no significant difference between our proposed model and two orthopaedic surgeons and radiologists in diagnosing ACL conditions.

Keywords: deep learning; machine learning; automated model; anterior cruciate ligament

Disclosure statement: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Data availability statement: This study used the MRNet dataset collected at Stanford University Medical Center. The dataset is publicly available online.
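
Since the abstract names ResNet-18 but not how the MRI series is handled, here is a minimal sketch in that spirit: each slice of a knee MRI series is passed through a ResNet-18 backbone and the slice features are max-pooled before a two-class head. The pooling choice, head size and use of pretraining are assumptions, not the authors' exact design.

```python
# Minimal ResNet-18-based ACL classifier sketch (assumptions noted above).
import torch
import torch.nn as nn
from torchvision.models import resnet18

class ACLNet(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = resnet18(weights=None)   # use ResNet18_Weights.DEFAULT for ImageNet pre-training
        backbone.fc = nn.Identity()          # keep 512-dim slice features
        self.backbone = backbone
        self.head = nn.Linear(512, 2)        # injured vs. intact ACL

    def forward(self, series):               # series: (num_slices, 3, 224, 224)
        feats = self.backbone(series)        # (num_slices, 512)
        pooled = feats.max(dim=0).values     # max-pool over slices
        return self.head(pooled)

logits = ACLNet()(torch.randn(24, 3, 224, 224))  # one knee series of 24 slices
```
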
A web-based human liver atlas
Haobo Yu, Adam Bartlett, Harvey Ho
Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, published 2023-09-21. DOI: https://doi.org/10.1080/21681163.2023.2261557

Abstract: The liver is the largest solid organ in the body and can be anatomically divided into segments. We present in this work a web-based, subject-specific human liver atlas based on the Couinaud segments, simulated from portal venous (PV) perfusion zones, hepatic arterial (HA) and hepatic venous (HV) trees, as well as biliary drainage. The purpose of the atlas is to provide the modelling community with freely accessible 3D hepatic structures for in silico simulations, which are of tremendous value in yielding novel insights into hepatic circulation, drug transport and clearance.

Multimodality medical image fusion analysis with multi-plane features of PET and MRI images using ONSCT
Jampani Ravi, R. Narmadha
Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, published 2023-09-19. DOI: https://doi.org/10.1080/21681163.2023.2255684

Abstract: Multimodal Medical Image Fusion (MMIF) is affected by poor image quality, which leads to the extraction of inefficient features. The main aim of this work is to fuse the various planes of PET and MRI medical images efficiently using an MMIF approach. First, sample images containing the axial plane of PET and MRI scans are gathered from standard datasets. The collected images are then decomposed using an Optimal Non-Subsampled Contourlet Transform (ONSCT), whose parameters are optimised with the Modified Water Strider Algorithm (MWSA). Each decomposed image is split into high-frequency and low-frequency sub-bands. The high-frequency sub-bands of the PET and MRI images are fused by optimal weighted average fusion, in which the weight factor is obtained optimally by the MWSA; the low-frequency sub-bands are combined by a sparse fusion technique. Finally, the fused sub-bands are passed through the Inverse Non-Subsampled Contourlet Transform (INSCT) to obtain the desired fused images. The experimental findings suggest that the proposed model fuses the images effectively and also improves the similarity score for the axial planes.

Keywords: medical image fusion; modified water strider algorithm; magnetic resonance imaging; optimal non-subsampled contourlet transform; optimal weighted average fusion

Disclosure statement: No potential conflict of interest was reported by the author(s).
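
The sketch below illustrates the decompose-fuse-reconstruct idea in a deliberately simplified form: a one-level discrete wavelet transform stands in for the NSCT, a fixed weight stands in for the MWSA-optimised one, and plain averaging of the approximation bands stands in for the sparse low-frequency fusion described in the abstract.

```python
# Simplified PET/MRI fusion sketch (wavelet stand-in for NSCT; fixed fusion weight).
import numpy as np
import pywt

def fuse_pet_mri(pet: np.ndarray, mri: np.ndarray, w: float = 0.5, wavelet: str = "db2"):
    """Assumes co-registered axial slices of identical shape."""
    cA1, (cH1, cV1, cD1) = pywt.wavedec2(pet.astype(float), wavelet, level=1)
    cA2, (cH2, cV2, cD2) = pywt.wavedec2(mri.astype(float), wavelet, level=1)
    fused_low = 0.5 * (cA1 + cA2)                       # low-frequency: simple average
    fused_high = tuple(w * a + (1 - w) * b              # high-frequency: weighted average
                       for a, b in zip((cH1, cV1, cD1), (cH2, cV2, cD2)))
    return pywt.waverec2([fused_low, fused_high], wavelet)
```
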
Presence of hypertension might pose a potential pitfall in detection of diabetes mellitus non-invasively using the second derivative of photoplethysmography
Ahmet Taş, Yaren Alan, Ilke Kara, Abdullah Savas, Muhammed Ikbal Bayhan, Diren Ekici, Zeynep Atay, Fatih Sezer, Cagla Kitapli, Sabahattin Umman, Murat Sezer
Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, published 2023-09-14. DOI: https://doi.org/10.1080/21681163.2023.2256896

Abstract: Indices derived from photoplethysmography (PPG) have shown promising results as non-invasive digital biomarkers for the detection of diabetes mellitus (DM). Considering the shared endothelial insult leading to similar undesirable peripheral hemodynamic perturbations, hypertension (HT) may blunt this classification performance. Second-derivative PPG (SD-PPG) indices were derived from the second derivative of the PPG signal. The variables of interest were the previously described peaks of the initial positive (a), early negative (b), re-increasing (c), late re-decreasing (d), diastolic positive (e) and negative (f) waves, and the ratios between them. Patients were classified according to their type 2 DM and hypertension phenotypes. SD-PPG indices were compared between the diseased subgroups and healthy controls, and dichotomous classification performance was evaluated. Two SD-PPG indices, the b/a ratio and the vascular ageing index (VAI = (b - c - d - e)/a), discriminated isolated type 2 DM patients (n = 29) from healthy subjects (n = 106) (area under the curve (AUC) = 0.629, p = 0.034 and 0.631, p = 0.031, respectively). However, the classification performance became non-significant once HT patients (n = 30) were included (p = 0.839 vs. p = 0.656). These results suggest that the coexistence of HT and DM may hinder the use of SD-PPG for non-invasive DM detection.

Keywords: second-derivative photoplethysmography; diabetes; non-invasive cardiovascular screening; fingertip waveforms; hypertension

Abbreviations: BMI = Body Mass Index; DM = Diabetes Mellitus; DM2 = Diabetes Mellitus Type 2; DBP = Diastolic Blood Pressure; HT = Hypertension; PPG = Photoplethysmography; SD-PPG = Second derivative of Photoplethysmography; VAI = Vascular Ageing Index; SBP = Systolic Blood Pressure

Disclosure statement: No potential conflict of interest was reported by the authors.

Author contributions: The study was conceived and designed by Ahmet Tas. All authors contributed to material preparation, data collection and analysis (signal, statistical and/or intellectual) and to the interpretation of results. The first draft of the manuscript was written by Ahmet Tas, and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.

Ethics approval: The data were retrieved from an open dataset published in Nature Scientific Data (https://www.nature.com/articles/sdata201820#). The data collection had ethical approval, as noted in the data descriptor article, and all participants gave written consent, as stated in the open-data descriptor.
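
The abstract gives the index definitions explicitly (b/a and VAI = (b - c - d - e)/a), so here is a sketch of how they could be computed for one PPG pulse. The fiducial-wave detection below is a naive heuristic (the a-wave taken as the dominant maximum of the second derivative, b-e as the next four alternating extrema); robust detection needs considerably more care than shown here, and the smoothing parameters are assumptions.

```python
# SD-PPG index sketch: b/a ratio and vascular ageing index from one PPG cardiac cycle.
import numpy as np
from scipy.signal import find_peaks, savgol_filter

def sdppg_indices(pulse: np.ndarray, fs: float) -> dict:
    """pulse: one PPG cardiac cycle sampled at fs Hz; assumes waves a-e are present."""
    window = max(5, int(0.05 * fs)) | 1                  # odd smoothing window (~50 ms)
    smooth = savgol_filter(pulse, window_length=window, polyorder=3)
    sd = np.gradient(np.gradient(smooth))                # second derivative of the PPG
    maxima, _ = find_peaks(sd)
    minima, _ = find_peaks(-sd)
    extrema = np.sort(np.concatenate([maxima, minima]))
    a_idx = maxima[np.argmax(sd[maxima])]                # a: dominant early maximum
    b, c, d, e = sd[extrema[extrema > a_idx][:4]]        # next extrema: b, c, d, e
    a = sd[a_idx]
    return {"b/a": b / a, "VAI": (b - c - d - e) / a}
```
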
ELM-based stroke classification using wavelet and empirical mode decomposition techniques
Balaram Allam, N. Ramesh, N. S. K. M. K. Tirumanadham
Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, published 2023-09-02. DOI: https://doi.org/10.1080/21681163.2023.2250872
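
This record carries no abstract, so only the technique named in the title can be illustrated. As general background, a minimal extreme learning machine (ELM) classifier is sketched below: a single hidden layer with random, fixed input weights and an output layer solved by least squares. The hidden-layer size and activation are illustrative, and the wavelet/EMD feature extraction mentioned in the title is not shown.

```python
# Minimal extreme learning machine (ELM) classifier; sizes and activation are illustrative.
import numpy as np

class ELM:
    def __init__(self, n_hidden: int = 200, seed: int = 0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        X, y = np.asarray(X, float), np.asarray(y, int)
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))  # random input weights
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)                            # hidden activations
        Y = np.eye(y.max() + 1)[y]                                  # one-hot targets
        self.beta = np.linalg.pinv(H) @ Y                           # least-squares output weights
        return self

    def predict(self, X):
        H = np.tanh(np.asarray(X, float) @ self.W + self.b)
        return np.argmax(H @ self.beta, axis=1)
```
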
An efficient approach for detecting brain tumors using a modified artificial neural network
S. Jayachandran, Jemshia Miriam A
Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, published 2023-08-28. DOI: https://doi.org/10.1080/21681163.2023.2245069