Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization — Latest Articles

Optimization of deep neural networks for multiclassification of dental X-rays using transfer learning
Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization Pub Date : 2023-11-09 DOI: 10.1080/21681163.2023.2272976
G. Divya Deepak, Subraya Krishna Bhat
{"title":"Optimization of deep neural networks for multiclassification of dental X-rays using transfer learning","authors":"G. Divya Deepak, Subraya Krishna Bhat","doi":"10.1080/21681163.2023.2272976","DOIUrl":"https://doi.org/10.1080/21681163.2023.2272976","url":null,"abstract":"In this work, the segmented dental X-ray images obtained by dentists have been classified into ideal/minimally compromised edentulous area (no clinical treatment needed immediately), partially/moderately compromised edentulous area (require bridges or cast partial denture) and substantially compromised edentulous area (require complete denture prosthesis). A total of 116 image dental X-ray dataset is used, of which 70% of the image dataset is used for training the convolutional neural network (CNN) while 30% is used sfor testing and validation. Three pretrained deep neural networks (DNNs; SqueezeNet, ResNet-50 and EfficientNet-b0) have been implemented using Deep Network Designer module of Matlab 2022. Each of these CNNs were trained, tested and optimised for the best possible accuracy and validation of dental images, which require an appropriate clinical treatment. The highest classification accuracy of 98% was obtained for EfficientNet-b0. This novel research enables the implementation of DNN parameters for automated identification and labelling of edentulous area, which would require clinical treatment. 
Also, the performance metrics, accuracy, recall, precision and F1 score have been calculated for the best DNN using confusion matrix.","PeriodicalId":51800,"journal":{"name":"Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135192607","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
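The abstract above reports accuracy, recall, precision and F1 computed from the best model's confusion matrix. A minimal NumPy sketch of those metrics for a three-class problem; the 3x3 matrix below is illustrative only, not the paper's data:

```python
import numpy as np

def metrics_from_confusion(cm):
    """Per-class precision, recall and F1 from a multiclass confusion
    matrix (rows = true class, columns = predicted class), plus the
    overall accuracy."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    precision = tp / cm.sum(axis=0)   # TP / (TP + FP) per class
    recall = tp / cm.sum(axis=1)      # TP / (TP + FN) per class
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = tp.sum() / cm.sum()
    return accuracy, precision, recall, f1

# Illustrative 3-class matrix (ideal / partial / substantial)
cm = [[10, 0, 0],
      [1, 12, 0],
      [0, 1, 11]]
acc, p, r, f1 = metrics_from_confusion(cm)
```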
A prototype smartphone jaw tracking application to quantitatively model tooth contact
Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization Pub Date : 2023-11-08 DOI: 10.1080/21681163.2023.2264402
Kieran Armstrong, Carolyn Kincade, Martin Osswald, Jana Rieger, Daniel Aalto
{"title":"A prototype smartphone jaw tracking application to quantitatively model tooth contact","authors":"Kieran Armstrong, Carolyn Kincade, Martin Osswald, Jana Rieger, Daniel Aalto","doi":"10.1080/21681163.2023.2264402","DOIUrl":"https://doi.org/10.1080/21681163.2023.2264402","url":null,"abstract":"ABSTRACTThis study utilised a prototype system which consisted of a person-specific 3D printed jaw tracking harness interfacing with the maxillary and mandibular teeth and custom jaw tracking software implemented on a smartphone. The prototype achieved acceptable results. The prototype demonstrated a static position accuracy of less than 1 mm and 5°. It successfully tracked 30 cycles of a protrusive excursion, left lateral excursion, and 40 mm of jaw opening on a semi-adjustable articulator. The standard error of the tracking accuracy was reported as 0.1377 mm, 0.0449 mm, and 0.9196 mm, with corresponding r2 values of 0.98, 1.00, and 1.00, respectively. Finally, occlusal contacts of left, right, and protrusive excursions were tracked with the prototype system and their trajectories were used to demonstrate kinematic modelling (no occlusal forces) with a biomechanical simulation tool.KEYWORDS: Smartphonedental occlusioncomputer visionjaw trackingbiomechanical simulation AcknowledgmentsThe authors would like to thank the Institute for Reconstructive Science in Medicine at the Misericordia Community Hospital in Edmonton Alberta for their help with the design and 3D printing of the tracking harnesses.Disclosure statementNo potential conflict of interest was reported by the author(s).Additional informationNotes on contributorsKieran ArmstrongKieran Armstrong, holds a BEng in biomedical engineering from the University of Victoria and an MSc in rehabilitation science from the University of Alberta. His MSc research focused on computer modeling for dental prosthetic biomechanics in head and neck cancer treatment. 
Working in the wearable biometric sensing industry, his focus is on exploring how optical biometric sensing methods can be used to make meaningful connections to biological signals, like photoplethysmography to help people monitor their health and fitness.Carolyn KincadeCarolyn Kincade is a seasoned healthcare professional with a strong background in quality management and patient care. As a traditionally trained Dental Technologist she has enjoyed the transition of analog case work to digital. She is currently engaged in furthering her studies with a Master of Technology Management, though Memorial University of Newfoundland, to build upon her Diploma in Dental Technology and Bachelor of Technology from the Northern Alberta Institute of Technology. Carolyn also engages with the regulatory community in many ways, having served in various committee roles as part of the College of Dental Technologists of Alberta. Carolyn continues to make a meaningful impact in the healthcare field, bringing her expertise to the forefront for quality healthcare delivery.Jana RiegerJana Rieger, PhD is a global leader in functional outcomes assessment related to head and neck disorders. Over her 20-year career in this field, Jana has held roles as a professor, clinician, researcher, and most recently, entrepreneur. 
Jana and her team have developed, tested, and comme","PeriodicalId":51800,"journal":{"name":"Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135340497","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
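The study above summarises tracking performance as a standard error and an r² between measured and reference trajectories. A minimal sketch of how those two figures can be computed; the trajectories below are synthetic, not the study's data:

```python
import numpy as np

def tracking_accuracy(reference, measured):
    """Standard error of the residuals and coefficient of
    determination (r^2) between a reference trajectory and a
    measured one."""
    reference = np.asarray(reference, dtype=float)
    measured = np.asarray(measured, dtype=float)
    resid = measured - reference
    se = resid.std(ddof=1) / np.sqrt(len(resid))      # standard error
    ss_res = (resid ** 2).sum()
    ss_tot = ((reference - reference.mean()) ** 2).sum()
    r2 = 1.0 - ss_res / ss_tot
    return se, r2

# synthetic 40 mm jaw-opening sweep with a small measurement wobble
ref = np.linspace(0.0, 40.0, 200)
se, r2 = tracking_accuracy(ref, ref + 0.02 * np.sin(ref))
```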
Decorrelation stretch for enhancing colour fundus photographs affected by cataracts
Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization Pub Date : 2023-11-02 DOI: 10.1080/21681163.2023.2274948
Preecha Vonghirandecha, Supaporn Kansomkeat, Patama Bhurayanontachai, Pannipa Sae-Ueng, Sathit Intajag
{"title":"Decorrelation stretch for enhancing colour fundus photographs affected by cataracts","authors":"Preecha Vonghirandecha, Supaporn Kansomkeat, Patama Bhurayanontachai, Pannipa Sae-Ueng, Sathit Intajag","doi":"10.1080/21681163.2023.2274948","DOIUrl":"https://doi.org/10.1080/21681163.2023.2274948","url":null,"abstract":"ABSTRACTA method of enhancing colour fundus photographs is proposed to reduce the effect of cataracts. The enhancement method employs a decorrelation stretch (DS) technique in an LCC colour model. The initial designed technique embeds Hubbard’s colouration model into DS parameters to produce enhanced results in a standard form of age-related macular degeneration (AMD) reading centres. The colouration model could modify to enhance the colour of lesions observed in diabetic retinopathy (DR). The proposed algorithm could improve the effect of cataracts on fundus images and provided good results when the density of the cataract was less than grade 2. In the case of images taken through cataracts higher than or equal to grade 2, some output results could become unusable when the cataract was in line with the macula.KEYWORDS: Decorrelation stretchretinal image enhancementcataract Disclosure statementNo potential conflict of interest was reported by the author(s).Additional informationFundingThis research has received funding support from the NSRF via the Program Management Unit for Human Resources & Institutional Development, Research and Innovation [grant number B04G640070].Notes on contributorsPreecha VonghirandechaPreecha Vonghirandecha is an assistant professor at the Division of Computational Science, Faculty of Science, Prince of Songkla University, Songkhla, Thailand. His current research interests include data Science, image processing and artificial intelligence applied to medical image analysis. 
He received a PhD in computer engineering from Prince of Songkla University, Thailand, in 2019.Supaporn KansomkeatSupaporn Kansomkeat is an assistant professor at the Division of Computational Science, Faculty of Science, Prince of Songkla University, Songkhla, Thailand. Her current research interests include software testing, test process improvement and artificial intelligence applied to medical image analysis. She received a PhD in computer engineering from Chulalongkorn University, Thailand, in 2007.Patama BhurayanontachaiPatama Bhurayanontachai (MD.) is an Associate Professor at the Department of Ophthalmology, Prince of Songkla University, Songkhla, Thailand. She received a certificate in Clinical Fellowship in vitreoretinal surgery from Flinders Medical Centre, Australia, in 2005. Her current research interests involve medical retina, surgical retina, and artificial intelligence applied to clinical diagnosis.Pannipa Sae-UengPannipa Sae-Ueng is a lecturer at the Division of Computational Science, Faculty of Science, Prince of Songkla University, Songkhla, Thailand. She received her Ph.D. in Computer Science in 2022 at the Department of Mathematics and Informatics, Faculty of Sciences, University of Novi Sad, Novi Sad, Serbia. Recently she has focused on research topics in data science and artificial intelligence.Sathit IntajagSathit Intajag received the M. Eng. and D. Eng. 
Degree in electrical engineering from the King Mongkut’s Institute of Tec","PeriodicalId":51800,"journal":{"name":"Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135973389","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
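The classic decorrelation-stretch step the abstract builds on can be sketched in a few lines: rotate the colour channels into their principal axes, equalise the variances, and rotate back. This is only the generic DS operation on an RGB-like array; the paper additionally works in an LCC colour model and embeds Hubbard's colouration model in the DS parameters, which is not reproduced here.

```python
import numpy as np

def decorrelation_stretch(img, target_sigma=50.0):
    """Generic decorrelation stretch of an (H, W, C) float image.
    Assumes a non-degenerate channel covariance."""
    h, w, c = img.shape
    flat = img.reshape(-1, c)
    mean = flat.mean(axis=0)
    cov = np.cov(flat, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    # whiten along the principal axes, then give every rotated channel
    # the same target spread before rotating back
    stretch = evecs @ np.diag(target_sigma / np.sqrt(evals)) @ evecs.T
    out = (flat - mean) @ stretch.T + mean
    return out.reshape(h, w, c)
```

After the transform the channel covariance is (up to numerical precision) target_sigma² times the identity, i.e. the channels are decorrelated and equally stretched.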
Genetic algorithm for feature selection in mammograms for breast masses classification
Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization Pub Date : 2023-10-19 DOI: 10.1080/21681163.2023.2266031
G Vaira Suganthi, J Sutha, M Parvathy, N Muthamil Selvi
{"title":"Genetic algorithm for feature selection in mammograms for breast masses classification","authors":"None G Vaira Suganthi, None J Sutha, None M Parvathy, N Muthamil Selvi","doi":"10.1080/21681163.2023.2266031","DOIUrl":"https://doi.org/10.1080/21681163.2023.2266031","url":null,"abstract":"ABSTRACTThis paper introduces a Computer-Aided Detection (CAD) system for categorizing breast masses in mammogram images from the DDSM database as Benign, Malignant, or Normal. The CAD process involves Pre-processing, Segmentation, Feature Extraction, Feature Selection, and Classification. Three feature selection methods, namely the Genetic Algorithm (GA), t-test, and Particle Swarm Optimization (PSO) are used. In the classification phase, three machine learning algorithms (kNN, multiSVM, and Naive Bayes) are explored. Evaluation metrics like accuracy, AUC, precision, recall, F1-score, MCC, Dice coefficient, and Jaccard coefficient are used for performance assessment. Training and testing accuracy are assessed for the three classes. The system is evaluated using nine algorithm combinations, producing the following AUC values: GA+kNN (0.93), GA+multiSVM (0.88), GA+NB (0.91), t-test+kNN (0.91), t-test+multiSVM (0.86), t-test+NB (0.89), PSO+kNN (0.89), PSO+multiSVM (0.85), and PSO+NB (0.86). The study shows that the GA and kNN combination outperforms others.KEYWORDS: Mammogramsbreast massfeature selectionGenetic algorithm Disclosure statementNo potential conflict of interest was reported by the author(s).Additional informationFundingNo funding is used to complete this project.Notes on contributors G Vaira SuganthiDr. Vaira Suganthi G has 20 years of teaching experience. Her area of interest includes Image Processing and Machine Learning. J SuthaDr. Sutha J has more than 25 years of teaching experience. Her area of interest includes Image Processing and Machine Learning. M ParvathyDr. Parvathy M has more than 20 years of teaching experience. 
Her area of interest include Image Processing, Data Mining, and Machine Learning.N Muthamil SelviMs. Muthamil Selvi N has 1 year of teaching experience. Her area of interest is Machine Learning.","PeriodicalId":51800,"journal":{"name":"Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135728872","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
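GA-based feature selection of the kind used above encodes each candidate feature subset as a binary chromosome and evolves the population by selection, crossover and mutation. A toy sketch under stated assumptions: the fitness here is a simple Fisher-style class-separability score, a stand-in for the paper's actual fitness (its GA settings and wrapper objective are not given in the abstract), and the data are synthetic.

```python
import numpy as np

def ga_feature_selection(X, y, n_gen=30, pop_size=20, p_mut=0.05, seed=0):
    """Binary-chromosome GA over feature subsets (two-class toy)."""
    rng = np.random.default_rng(seed)
    n_feat = X.shape[1]

    def fitness(mask):
        # mean Fisher score of the selected features (higher = better)
        if not mask.any():
            return 0.0
        sub = X[:, mask]
        m0, m1 = sub[y == 0].mean(0), sub[y == 1].mean(0)
        s = sub[y == 0].var(0) + sub[y == 1].var(0) + 1e-9
        return float(((m0 - m1) ** 2 / s).mean())

    pop = rng.integers(0, 2, size=(pop_size, n_feat)).astype(bool)
    for _ in range(n_gen):
        scores = np.array([fitness(ind) for ind in pop])
        order = np.argsort(scores)[::-1]
        parents = pop[order[: pop_size // 2]]       # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_feat)           # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(n_feat) < p_mut     # bit-flip mutation
            children.append(child)
        pop = np.vstack([parents, children])
    scores = np.array([fitness(ind) for ind in pop])
    return pop[int(np.argmax(scores))]

# toy data: features 0 and 1 carry the class signal, the rest are noise
data_rng = np.random.default_rng(1)
X = data_rng.normal(size=(40, 6))
y = np.array([0] * 20 + [1] * 20)
X[y == 1, :2] += 3.0
best = ga_feature_selection(X, y)
```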
DiabPrednet: development of attention-based long short-term memory-based diabetes prediction model with optimal weighted feature fusion mechanism
Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization Pub Date : 2023-10-11 DOI: 10.1080/21681163.2023.2258995
S. Nagendiran, S. Rohini, P. Jagadeesan, S. Shankari, R. Harini
{"title":"DiabPrednet: development of attention-based long short-term memory-based diabetes prediction model with optimal weighted feature fusion mechanism","authors":"S. Nagendiran, S. Rohini, P. Jagadeesan, S. Shankari, R. Harini","doi":"10.1080/21681163.2023.2258995","DOIUrl":"https://doi.org/10.1080/21681163.2023.2258995","url":null,"abstract":"ABSTRACTMachine learning is a computer technique that automatically learns from experience and enhances the effectiveness of producing more precise diabetes predictions. However, large, inclusive, high-quality datasets are needed for training the machine learning networks. In this research work, attention-based approaches are designed for predicting diabetes in the affected individuals. Initially, the collected diabetes data is given into the data cleaning to get noise-free data for the prediction task. Here, extracted feature set 1 is extracted from the Auto encoder, and extracted feature set 2 is extracted from the 1-Dimensional Convolutional Neural Network (1D-CNN). These two sets of extracted features are fused in the adaptive way that is weighted feature fusion. Here, the weight of the selected features is optimized by an Enhanced Path Finder Algorithm (EPFA) to get more accurate results. The weighted fused features are employed for the diabetes prediction phase, in which the developed Attention-based Long Short Term Memory (ALSTM) with architecture optimization by improved PFA for predicting diabetes in affected one. Throughout the result analysis, the designed method attains 95% accuracy and 92%precision rate. 
Finally, the analysis is made by the proposed and existing prediction methods to showcase the effective performance.KEYWORDS: Diabetes predictionautoencoder1-dimensional convolutional neural networkattention-based long short term memory componentenhanced path finder algorithm Disclosure statementNo potential conflict of interest was reported by the author(s).","PeriodicalId":51800,"journal":{"name":"Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136063908","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
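The weighted feature fusion step described above combines the two extracted feature sets with learned per-feature weights. A minimal sketch: the fusion itself is a convex per-feature combination, and plain random search stands in for the paper's enhanced path finder algorithm (EPFA), whose details the abstract does not give; the objective and feature vectors are synthetic.

```python
import numpy as np

def weighted_fusion(f1, f2, w):
    """Element-wise convex fusion of two feature vectors."""
    return w * f1 + (1.0 - w) * f2

def optimise_weights(f1, f2, target, n_iter=200, seed=0):
    """Pick the weight vector whose fused features best match a target
    signal (illustrative objective; the paper optimises prediction
    accuracy instead, via EPFA rather than random search)."""
    rng = np.random.default_rng(seed)
    best_w, best_err = None, np.inf
    for _ in range(n_iter):
        w = rng.random(f1.shape[-1])
        err = float(((weighted_fusion(f1, f2, w) - target) ** 2).mean())
        if err < best_err:
            best_w, best_err = w, err
    return best_w, best_err

f1 = np.zeros(4)   # stand-in for autoencoder features
f2 = np.ones(4)    # stand-in for 1D-CNN features
w_best, err = optimise_weights(f1, f2, target=np.full(4, 0.5))
```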
Impact of a generalised SVG-based large-scale super-resolution algorithm on the design of light-weight medical image segmentation DNNs
Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization Pub Date : 2023-10-11 DOI: 10.1080/21681163.2023.2266008
Mina Esfandiarkhani, Amir Hossein Foruzan
{"title":"Impact of a generalised SVG-based large-scale super-resolution algorithm on the design of light-weight medical image segmentation DNNs","authors":"Mina Esfandiarkhani, Amir Hossein Foruzan","doi":"10.1080/21681163.2023.2266008","DOIUrl":"https://doi.org/10.1080/21681163.2023.2266008","url":null,"abstract":"ABSTRACTSetting up a complex CNN requires a powerful platform, several hours of run-time, and a lot of data for training. Here, we propose a generalised lightweight solution that exploits super-resolution and scalable vector graphics and uses a small-scale UNet as the baseline framework to segment different organs in MR and CT data. We selected the UNet since many researchers use it as the baseline, modify it in their proposal, and perform an ablation study to show the effectiveness of the proposed modification. First, we downsample the input 2D CT slices by bicubic interpolation. Using the architecture of the conventional UNet, we reduce the size of the network’s input, and the number of layers and filters to construct a lightweight UNet. The network segments the low-resolution images and prepares the mask of an organ. Then, we upscale the boundary of the output mask by the Support Vector Graphics technique to obtain the final border. This design reduces the number of parameters and the run-time by a factor of two. We segmented several tissues to prove the stability of our method to the type of organ. The experiments proved the feasibility of setting up complex deep neural networks with conventional platforms.KEYWORDS: light-weight deep neural networksscalable vector graphicsgeneralised segmentation frameworksmedical image segmentation Disclosure statementNo potential conflict of interest was reported by the author(s).Additional informationNotes on contributorsMina EsfandiarkhaniMina Esfandiarkhani received a B.Sc. degree from the Azad University of Qazvin in 2013 and an M.Sc. 
degree in Biomedical Engineering from the Shahed University of Tehran in 2016. She is currently pursuing a Ph.D. degree in the Biomedical Engineering faculty of Shahed University. Her research interests include machine learning, computer vision, medical image processing, and artificial intelligence.Amir Hossein ForuzanAmir Hossein Foruzan received his B.S. from the Sharif University of Technology in Telecommunication Engineering. He received his M.S. and Ph.D. from Tehran University in Biomedical Engineering. Since 2011, he has been a faculty member of Shahed University. His research interest is medical image processing.","PeriodicalId":51800,"journal":{"name":"Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136063901","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
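The data flow described above (downsample the slice, segment at low resolution, upscale the mask back to the input grid) can be sketched end to end. This is only the pipeline shape: block-mean pooling, an intensity threshold and nearest-neighbour upsampling stand in for the paper's bicubic downsampling, lightweight UNet and SVG-based boundary upscaling so the example stays self-contained.

```python
import numpy as np

def lowres_segment_upscale(img, factor=2, thresh=0.5):
    """Downsample -> segment -> upsample, with stand-in components."""
    h, w = img.shape
    lo = img[: h - h % factor, : w - w % factor]
    # block-mean pooling as the downsampler
    lo = lo.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    mask_lo = (lo > thresh).astype(np.uint8)   # stand-in segmenter
    # nearest-neighbour upsampling back to the input grid
    mask = np.repeat(np.repeat(mask_lo, factor, axis=0), factor, axis=1)
    return mask

img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0                            # bright "organ"
mask = lowres_segment_upscale(img, factor=2)
```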
Analysis of generalizability on predicting COVID-19 from chest X-ray images using pre-trained deep models
Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization Pub Date : 2023-10-11 DOI: 10.1080/21681163.2023.2264408
Natalia de Sousa Freire, Pedro Paulo de Souza Leão, Leonardo Albuquerque Tiago, Alberto de Almeida Campos Gonçalves, Rafael Albuquerque Pinto, Eulanda Miranda dos Santos, Eduardo Souto
{"title":"Analysis of generalizability on predicting COVID-19 from chest X-ray images using pre-trained deep models","authors":"Natalia de Sousa Freire, Pedro Paulo de Souza Leo, Leonardo Albuquerque Tiago, Alberto de Almeida Campos Gonalves, Rafael Albuquerque Pinto, Eulanda Miranda dos Santos, Eduardo Souto","doi":"10.1080/21681163.2023.2264408","DOIUrl":"https://doi.org/10.1080/21681163.2023.2264408","url":null,"abstract":"ABSTRACTMachine learning methods have been extensively employed to predict COVID-19 using chest X-ray images in numerous studies. However, a machine learning model must exhibit robustness and provide reliable predictions for diverse populations, beyond those used in its training data, to be truly valuable. Unfortunately, the assessment of model generalisability is frequently overlooked in current literature. In this study, we investigate the generalisability of three classification models – ResNet50v2, MobileNetv2, and Swin Transformer – for predicting COVID-19 using chest X-ray images. We adopt three concurrent approaches for evaluation: the internal-and-external validation procedure, lung region cropping, and image enhancement. The results show that the combined approaches allow deep models to achieve similar internal and external generalisation capability.KEYWORDS: COVID-19X-raymachine learning Disclosure statementNo potential conflict of interest was reported by the author(s).Notes1. https://github.com/dirtmaxim/lungs-finder2. https://keras.io/examples/vision/swin_transformers/3. https://www.kaggle.com/c/rsna-pneumonia-detection-challenge4. https://github.com/agchung/Actualmed-COVID-chestxray-dataset5. 
Figure 1-COVID-chestxray-datasethttps://github.com/agchung/Figure 1-COVID-chestxray-datasetAdditional informationFundingThe present work is the result of the Research and Development (R&D) project 001/2020, signed with Federal University of Amazonas and FAEPI, Brazil, which has funding from Samsung, using resources from the Informatics Law for the Western Amazon (Federal Law no 8.387/1991), and its disclosure is in accordance with article 39 of Decree No. 10.521/2020.Notes on contributorsNatalia de Sousa FreireNatalia de Sousa Freire is currently a Software Engineering student at the Federal University of Amazonas (UFAM). His main research interests include the areas of machine learning and computer vision.Pedro Paulo de Souza LeoPedro Paulo de Souza Leão obtained his Bachelor's degree in Software Engineering from the Federal University of Amazonas (Brazil) in 2023. His main research interest is machine learning.Leonardo Albuquerque TiagoLeonardo de Albuquerque Tiago is currently pursuing a Bachelor's degree in Software Engineering at Federal University of Amazonas (Brazil). His main research interests are machine learning and software testing.Alberto de Almeida Campos GonalvesAlberto de Almeida Campos Gonçalves received his B.S. degree in Computer Science from the Federal University of Amazonas in 2022. His research interests include the areas of machine learning and computer vision.Rafael Albuquerque PintoRafael Albuquerque Pinto received his B.S. degree in Computer Science from the Federal University of Roraima (UFRR) in 2017 and his M.Sc. degree in Informatics from the Federal University of Amazonas (UFAM) in 2022. He is currently pursuing a Ph.D. 
degree in Informatics at UFAM, focusing his research on biosignals using machine learning tech","PeriodicalId":51800,"journal":{"name":"Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136063089","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
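The internal-and-external validation procedure named above fits a model on one dataset, then scores it both on a held-out split of the same dataset (internal) and on a dataset from a different source (external). A sketch of that protocol only: a nearest-centroid classifier on synthetic features stands in for the paper's deep models, and the "site shift" between sources is simulated.

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """Class centroids for a two-class problem."""
    return np.stack([X[y == c].mean(0) for c in (0, 1)])

def nearest_centroid_score(centroids, X, y):
    """Accuracy of nearest-centroid assignment."""
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return float((d.argmin(1) == y).mean())

rng = np.random.default_rng(0)

def make_dataset(shift, n=100):
    # two classes separated along feature 0; `shift` mimics a
    # site-specific acquisition difference between sources
    y = rng.integers(0, 2, n)
    X = rng.normal(size=(n, 4)) + shift
    X[:, 0] += 3.0 * y
    return X, y

X_int, y_int = make_dataset(shift=0.0)   # "internal" source
X_ext, y_ext = make_dataset(shift=0.5)   # "external" source
cent = nearest_centroid_fit(X_int[:70], y_int[:70])
internal_acc = nearest_centroid_score(cent, X_int[70:], y_int[70:])
external_acc = nearest_centroid_score(cent, X_ext, y_ext)
```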
An artificial intelligence approach for segmenting and classifying brain lesions caused by stroke
Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization Pub Date : 2023-10-04 DOI: 10.1080/21681163.2023.2264410
Roberto Mena, Enrique Pelaez, Francis Loayza, Alex Macas, Heydy Franco-Maldonado
{"title":"An artificial intelligence approach for segmenting and classifying brain lesions caused by stroke","authors":"Roberto Mena, Enrique Pelaez, Francis Loayza, Alex Macas, Heydy Franco-Maldonado","doi":"10.1080/21681163.2023.2264410","DOIUrl":"https://doi.org/10.1080/21681163.2023.2264410","url":null,"abstract":"ABSTRACTBrain injuries caused by strokes are one of the leading causes of disability worldwide. Current procedures require a specialised physician to analyse MRI images before diagnosing and deciding on the specific treatment. However, the procedure can be costly and time-consuming. Artificial intelligence techniques are becoming a game-changer for analysing MRI images. This work proposes an end-to-end approach in three stages: Pre-processing techniques for normalising the images to the standard MNI space, as well as inhomogeneities and bias corrections; lesion segmentation using a CNN network, trained for cerebrovascular accidents and feature extraction; and, classification for determining the vascular territory within which the lesion occurred. A CLCI-Net was used for stroke segmentation. Four Deep Learning (DL) and four Shallow Machine Learning (ML) network architectures were evaluated to assess the strokes’ territory localisation. All models’ architectures were designed, analysed, and compared based on their performance scores, reaching an accuracy of 84% with the DL models and 95% with the Shallow ML models. 
The proposed methodology may be helpful for rapid and accurate stroke assessment for an acute treatment to minimise patient complications.KEYWORDS: Artificial intelligencelesion segmentationMRI preprocessingstroke assessment AcknowledgementWe would like to thank Carlos Jimenez, Alisson Constantine and Edwin Valarezo for their helpful contribution in perfecting the text and debugging the scripts.Disclosure statementAll authors have seen and agreed with the content of the manuscript; there is no financial interest to report, or declare any conflicts of interest, neither there are funding sources involved. We certify that the submission is original work and is not under review at any other publication.Additional informationNotes on contributorsRoberto MenaRoberto Alejandro Mena is a graduate student in Computer Science Engineering from Escuela Superior Politécnica del Litoral – ESPOL University. Throughout his career, he has played a leading role as a data analyst in various research projects, mainly centered on system development for magnetic resonance imaging (MRI) processing and visualization.Enrique PelaezDr. Enrique Peláez earned his Ph.D. in Computer Engineering from the University of South Carolina, USA, in 1994. Currently, he is a Professor at ESPOL University where he leads the AI research in Computational Intelligence. Over recent years, Dr. Pelaez has been engaged in applied research on Parkinson's Disease, leveraging machine and deep learning techniques. His academic contributions showcased in leading publications and forums, with papers presented in several conferences and symposia. Dr. Pelaez's work has been published in journals, including the IEEE and Nature Communications. His research topics encompass EEG signal classification, deep learning for medical imaging, and behavioral signal processing using AI.Francis LoayzaDr. 
F","PeriodicalId":51800,"journal":{"name":"Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135592077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An automated system to distinguish between Corona and Viral Pneumonia chest diseases based on image processing techniques
Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization Pub Date : 2023-09-30 DOI: 10.1080/21681163.2023.2261575
Amani Al-Ghraibah, Muneera Altayeb, Feras A. Alnaimat
{"title":"An automated system to distinguish between Corona and Viral Pneumonia chest diseases based on image processing techniques","authors":"Amani Al-Ghraibah, Muneera Altayeb, Feras A. Alnaimat","doi":"10.1080/21681163.2023.2261575","DOIUrl":"https://doi.org/10.1080/21681163.2023.2261575","url":null,"abstract":"ABSTRACTRecently, huge concerns have been raised in diagnosing chest diseases, especially after the COVID-19 pandemic. Regular diagnosis processes of chest diseases sometimes fail to distinguish between Corona and Viral Pneumonia diseases through Polymerase Chain Reaction (PCR) tests which are a time-engrossing process that needs convoluted manual procedures. Artificial Intelligence (AI) techniques have achieved high performance in aiding medical diagnostic processes. The innovation of this work lies in using a new diagnostic technique to distinguish between COVID-19 and Viral Pneumonia diseases using advanced AI technologies. This is done by extracting novel features from chest X-ray images based on Wavelet analysis, Scale Invariant Feature Transformation (SIFT), and the Mel Frequency Cepstral Coefficient (MFCC). Support vector machines (SVM) and artificial neural networks (ANN) were utilized to build classification algorithms using 1200 chest X-ray mages for each case. Using Wavelet features, the results of evaluating the SVM and ANN models were 97% accurate, and with SIFT features, they were closer to 99%. The proposed models were very effective at identifying COVID-19 and Viral Pneumonitis, so physicians can determine the best treatment course for patients with the support of this high accuracy. 
Moreover, this model can be used in hospitals and emergency rooms when a massive number of patients are waiting, as it is faster and more accurate than the regular diagnosis processes as each step takes few seconds on average to complete.KEYWORDS: Chest X-ray imagesfeature extractionand SVMimage classifications Disclosure statementNo potential conflict of interest was reported by the author(s).","PeriodicalId":51800,"journal":{"name":"Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136279931","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
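Wavelet features of the kind used above start from a multiresolution decomposition of the image. A minimal one-level 2-D Haar decomposition with simple sub-band statistics; this is only a stand-in for the paper's wavelet features (its exact wavelet, decomposition level and statistics are not given in the abstract), and the checkerboard image is a synthetic test pattern.

```python
import numpy as np

def haar_features(img):
    """One-level 2-D Haar decomposition of an even-sized image,
    returning the mean and std of each sub-band."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0   # approximation
    lh = (a + b - c - d) / 4.0   # horizontal detail
    hl = (a - b + c - d) / 4.0   # vertical detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return np.array([s for band in (ll, lh, hl, hh)
                     for s in (band.mean(), band.std())])

img = np.tile([[1.0, 0.0], [0.0, 1.0]], (4, 4))   # 8x8 checkerboard
feats = haar_features(img)
```

On the checkerboard, all energy outside the approximation lands in the diagonal-detail band, which the feature vector reflects.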
Gene expression extraction in cervical cancer by segmentation of microarray images using a novel fuzzy method
Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization Pub Date : 2023-09-30 DOI: 10.1080/21681163.2023.2261555
Nayyer Mostaghim Bakhshayesh, Mousa Shamsi, Faegheh Golabi
{"title":"Gene expression extraction in cervical cancer by segmentation of microarray images using a novel fuzzy method","authors":"Nayyer Mostaghim Bakhshayesh, Mousa Shamsi, Faegheh Golabi","doi":"10.1080/21681163.2023.2261555","DOIUrl":"https://doi.org/10.1080/21681163.2023.2261555","url":null,"abstract":"It is necessary to obtain gene expression values to identify gene biomarkers involved in all types of cancers, and microarray data is one of the best data for this purpose. In order to extract gene expression values from microarray images that have different challenges. This article presents a completely automatic and comprehensive method that can deal with the various challenges in these images and obtain gene expression values with high accuracy. A pre-processing approach is proposed for contrast enhancement using a genetic algorithm and for removing noise and artefacts in microarray cells using wavelet transform based on a complex Gaussian scaling model. For each point, the coordinate centre is determined using Self Organising Maps. Then, using a new hybrid model based on the Fuzzy Local Information Gaussian Mixture Model (FLIGMM), the position of each spot is accurately determined. In this model, various features are obtained using local information about pixels, considering the pixel neighbourhood correlation coefficient. Finally, the gene expression values are obtained. The performance of the proposed algorithm was evaluated using real microarray images of cervical cancer from the GMRCL microarray dataset as well as simulated images. 
The results show that the proposed algorithm achieves 90.91% and 98% accuracy in segmenting microarray spots for noiseless and noisy spots, respectively.","PeriodicalId":51800,"journal":{"name":"Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136279457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
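The probabilistic core behind a model like FLIGMM is a Gaussian mixture over pixel intensities (spot vs background) fitted by expectation-maximisation. A plain two-component 1-D EM sketch; the paper's fuzzy local spatial information terms are omitted here, and the intensity samples are synthetic.

```python
import numpy as np

def em_gmm_1d(x, n_iter=50):
    """Two-component EM for a 1-D Gaussian mixture; returns the
    fitted means, variances and mixing weights."""
    mu = np.array([x.min(), x.max()], dtype=float)
    var = np.array([x.var(), x.var()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each pixel
        p = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(var)
        r = p / p.sum(1, keepdims=True)
        # M-step: update weights, means and variances
        n = r.sum(0)
        pi = n / len(x)
        mu = (r * x[:, None]).sum(0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(0) / n + 1e-6
    return mu, var, pi

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.2, 0.05, 300),   # background pixels
                    rng.normal(0.8, 0.05, 100)])  # spot pixels
mu, var, pi = em_gmm_1d(x)
```

Labelling each pixel with its higher-responsibility component then yields the spot mask that a full segmentation method would refine with spatial information.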