Latest Articles from Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization

Optimization of deep neural networks for multiclassification of dental X-rays using transfer learning
Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization Pub Date : 2023-11-09 DOI: 10.1080/21681163.2023.2272976
G. Divya Deepak, Subraya Krishna Bhat
Abstract: In this work, segmented dental X-ray images obtained by dentists were classified into ideal/minimally compromised edentulous areas (no immediate clinical treatment needed), partially/moderately compromised edentulous areas (requiring bridges or a cast partial denture), and substantially compromised edentulous areas (requiring a complete denture prosthesis). A dataset of 116 dental X-ray images was used, of which 70% was used for training the convolutional neural networks (CNNs) and 30% for testing and validation. Three pretrained deep neural networks (DNNs: SqueezeNet, ResNet-50, and EfficientNet-b0) were implemented using the Deep Network Designer module of Matlab 2022. Each CNN was trained, tested, and optimised for the best possible accuracy in identifying dental images that require clinical treatment. The highest classification accuracy, 98%, was obtained with EfficientNet-b0. This work enables automated identification and labelling of edentulous areas that would require clinical treatment. The performance metrics accuracy, recall, precision, and F1 score were also calculated for the best DNN from its confusion matrix.
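The abstract's final step, deriving accuracy, recall, precision, and F1 from a confusion matrix, is mechanical; a stdlib Python sketch (macro-averaged, not the authors' Matlab code) might look like:

```python
def metrics_from_confusion(cm):
    """Accuracy and macro-averaged precision/recall/F1 from a confusion
    matrix whose rows are true classes and columns are predicted classes."""
    n = len(cm)
    total = sum(sum(row) for row in cm)
    correct = sum(cm[i][i] for i in range(n))
    precisions, recalls, f1s = [], [], []
    for k in range(n):
        tp = cm[k][k]
        pred_k = sum(cm[i][k] for i in range(n))   # column sum: predicted as k
        true_k = sum(cm[k])                        # row sum: truly k
        p = tp / pred_k if pred_k else 0.0
        r = tp / true_k if true_k else 0.0
        f = 2 * p * r / (p + r) if (p + r) else 0.0
        precisions.append(p)
        recalls.append(r)
        f1s.append(f)
    return {
        "accuracy": correct / total,
        "precision": sum(precisions) / n,
        "recall": sum(recalls) / n,
        "f1": sum(f1s) / n,
    }
```

For the three-class problem described above, `cm` would be the 3x3 confusion matrix of the best DNN.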
Citations: 0
A prototype smartphone jaw tracking application to quantitatively model tooth contact
Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization Pub Date : 2023-11-08 DOI: 10.1080/21681163.2023.2264402
Kieran Armstrong, Carolyn Kincade, Martin Osswald, Jana Rieger, Daniel Aalto
Abstract: This study used a prototype system consisting of a person-specific 3D-printed jaw tracking harness, interfacing with the maxillary and mandibular teeth, and custom jaw tracking software implemented on a smartphone. The prototype achieved acceptable results, demonstrating a static position accuracy of better than 1 mm and 5°. It successfully tracked 30 cycles of a protrusive excursion, a left lateral excursion, and 40 mm of jaw opening on a semi-adjustable articulator. The standard errors of the tracking accuracy were 0.1377 mm, 0.0449 mm, and 0.9196 mm, with corresponding r² values of 0.98, 1.00, and 1.00, respectively. Finally, occlusal contacts of left, right, and protrusive excursions were tracked with the prototype system, and their trajectories were used to demonstrate kinematic modelling (no occlusal forces) with a biomechanical simulation tool.

Keywords: smartphone; dental occlusion; computer vision; jaw tracking; biomechanical simulation

Acknowledgments: The authors would like to thank the Institute for Reconstructive Science in Medicine at the Misericordia Community Hospital in Edmonton, Alberta, for their help with the design and 3D printing of the tracking harnesses.

Disclosure statement: No potential conflict of interest was reported by the author(s).

Notes on contributors: Kieran Armstrong holds a BEng in biomedical engineering from the University of Victoria and an MSc in rehabilitation science from the University of Alberta. His MSc research focused on computer modelling for dental prosthetic biomechanics in head and neck cancer treatment. Working in the wearable biometric sensing industry, he explores how optical biometric sensing methods, such as photoplethysmography, can make meaningful connections to biological signals and help people monitor their health and fitness. Carolyn Kincade is a seasoned healthcare professional with a strong background in quality management and patient care. A traditionally trained dental technologist, she has enjoyed the transition from analogue casework to digital. She is currently pursuing a Master of Technology Management through Memorial University of Newfoundland, building on her Diploma in Dental Technology and Bachelor of Technology from the Northern Alberta Institute of Technology. Carolyn also engages with the regulatory community, having served in various committee roles with the College of Dental Technologists of Alberta, and continues to make a meaningful impact in quality healthcare delivery. Jana Rieger, PhD, is a global leader in functional outcomes assessment related to head and neck disorders. Over her 20-year career in this field, Jana has held roles as a professor, clinician, researcher, and most recently, entrepreneur. Jana and her team have developed, tested, and comme…
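The r² values and standard errors quoted above are standard summaries of tracked versus reference positions; a stdlib sketch under one common pair of definitions (coefficient of determination and standard error of the estimate; the abstract does not state the authors' exact formulas):

```python
import math

def r_squared(reference, tracked):
    """Coefficient of determination of tracked positions against reference."""
    mean_ref = sum(reference) / len(reference)
    ss_res = sum((r - t) ** 2 for r, t in zip(reference, tracked))
    ss_tot = sum((r - mean_ref) ** 2 for r in reference)
    return 1.0 - ss_res / ss_tot

def standard_error(reference, tracked):
    """Standard error of the estimate: RMS residual with n-2 degrees of freedom."""
    n = len(reference)
    ss_res = sum((r - t) ** 2 for r, t in zip(reference, tracked))
    return math.sqrt(ss_res / (n - 2))
```

Each tracked excursion (protrusive, lateral, opening) would yield one such (standard error, r²) pair.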
Citations: 0
Computer-aided diagnosis of Canine Hip Dysplasia using deep learning approach in a novel X-ray image dataset
Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization Pub Date : 2023-11-02 DOI: 10.1080/21681163.2023.2274947
Chaouki Boufenar, Tété Elom Mike Norbert Logovi, Djemai Samir, Imad Eddine Lassakeur
Abstract: Canine Hip Dysplasia (CHD) is a congenital disease with a polygenic hereditary component, characterised by abnormal development of the coxo-femoral joint that results in poor coaptation of the femoral head in the acetabulum; the disease progresses rapidly to osteoarthritis of the hip. While dysplasia has been recognised in practically all canine breeds, it is far more common, and of greater concern, in rapidly growing medium and large breeds. In some countries, predisposed breeds, particularly the German Shepherd, are screened by systematic radiological control. Our collected dataset comprises 507 X-ray images of dogs affected by hip dysplasia. These images were meticulously evaluated using six deep convolutional neural network (CNN) models. Following an extensive analysis of the top-performing models, VGG16 emerged as the leader, achieving remarkable accuracy, recall, and precision scores of 98.32%, 98.35%, and 98.44%, respectively. Leveraging deep learning (DL) techniques, this approach diagnoses CHD from hip X-rays with a high degree of accuracy.

Keywords: Canine Hip Dysplasia diagnosis; deep learning; transfer learning; X-ray; image classification

Acknowledgement: Special thanks to Dr. Samir DJEMAI, a lecturer at the National Veterinary Institute of the University of Constantine, and the DHONDT NUNES veterinary clinic in France for providing the authors with dog hip radiographic images. This work would not have been possible without their invaluable assistance.

Disclosure statement: No potential conflict of interest was reported by the author(s).

Notes on contributors: Chaouki Boufenar is an Algerian scientist and researcher known for his work in artificial intelligence and data science. He is currently a lecturer in the Computer Science Department of the University of Algiers and received a Ph.D. in Computer Science from the University of Constantine 2 (Abdelhamid Mehri) in 2018. He has been affiliated with several academic and research institutions, including the University of Paris-Saclay (Laboratoire de Recherche en Informatique), the University of Constantine, and the University of Jijel in Algeria, and has published several research papers in computer science and artificial intelligence. His areas of interest include data science, deep learning, and computer vision. Tete Elom Mike Norbert Logovi is a teaching assistant at Laval University, where he is also pursuing an M.Sc. degree in Computer Science with a thesis. He received his Bachelor's degree in Computer Systems from the Department of Computer Science at Benyoucef Benkhedda Algiers 1 University. His research areas include machine learning, deep learning, and computer vision. Djemai Samir is currently a lecturer and researcher at the Institute of Veterinary Sciences…
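Evaluating six CNNs on a modest 507-image dataset starts from a class-balanced split; a stdlib sketch of stratified splitting (illustrative only; the abstract does not specify the authors' split procedure):

```python
import random
from collections import defaultdict

def stratified_split(items, labels, test_frac=0.2, seed=0):
    """Split items into train/test sets, preserving per-class proportions."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for item, lab in zip(items, labels):
        by_class[lab].append(item)
    train, test = [], []
    for lab, group in by_class.items():
        rng.shuffle(group)
        n_test = round(len(group) * test_frac)
        test.extend(group[:n_test])
        train.extend(group[n_test:])
    return train, test
```

With imbalanced dysplasia grades, stratification keeps each grade represented in the test set at its dataset-wide rate.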
Citations: 0
Decorrelation stretch for enhancing colour fundus photographs affected by cataracts
Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization Pub Date : 2023-11-02 DOI: 10.1080/21681163.2023.2274948
Preecha Vonghirandecha, Supaporn Kansomkeat, Patama Bhurayanontachai, Pannipa Sae-Ueng, Sathit Intajag
Abstract: A method for enhancing colour fundus photographs is proposed to reduce the effect of cataracts. The enhancement method employs a decorrelation stretch (DS) technique in an LCC colour model. The initial technique embeds Hubbard's colouration model into the DS parameters to produce enhanced results in the standard form used by age-related macular degeneration (AMD) reading centres. The colouration model can be modified to enhance the colour of lesions observed in diabetic retinopathy (DR). The proposed algorithm reduced the effect of cataracts on fundus images and provided good results when the density of the cataract was less than grade 2. For images taken through cataracts of grade 2 or higher, some outputs could become unusable when the cataract was in line with the macula.

Keywords: decorrelation stretch; retinal image enhancement; cataract

Disclosure statement: No potential conflict of interest was reported by the author(s).

Funding: This research received funding support from the NSRF via the Program Management Unit for Human Resources & Institutional Development, Research and Innovation [grant number B04G640070].

Notes on contributors: Preecha Vonghirandecha is an assistant professor at the Division of Computational Science, Faculty of Science, Prince of Songkla University, Songkhla, Thailand. His current research interests include data science, image processing, and artificial intelligence applied to medical image analysis. He received a PhD in computer engineering from Prince of Songkla University in 2019. Supaporn Kansomkeat is an assistant professor at the same division; her research interests include software testing, test process improvement, and artificial intelligence applied to medical image analysis. She received a PhD in computer engineering from Chulalongkorn University in 2007. Patama Bhurayanontachai (MD) is an Associate Professor at the Department of Ophthalmology, Prince of Songkla University. She received a certificate of clinical fellowship in vitreoretinal surgery from Flinders Medical Centre, Australia, in 2005; her research interests involve the medical retina, surgical retina, and artificial intelligence applied to clinical diagnosis. Pannipa Sae-Ueng is a lecturer at the Division of Computational Science; she received her Ph.D. in Computer Science in 2022 from the Department of Mathematics and Informatics, Faculty of Sciences, University of Novi Sad, Serbia, and has recently focused on data science and artificial intelligence. Sathit Intajag received the M.Eng. and D.Eng. degrees in electrical engineering from the King Mongkut's Institute of Tec…
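The core of a decorrelation stretch is to rotate the colour channels into their principal axes, equalise the variances, and rotate back. A generic numpy sketch of that step (plain channel space, without the paper's LCC model or Hubbard's colouration targets):

```python
import numpy as np

def decorrelation_stretch(img, target_sigma=50.0):
    """Decorrelation stretch of an H x W x C image: diagonalise the channel
    covariance, give every principal axis the same spread, rotate back."""
    h, w, c = img.shape
    flat = img.reshape(-1, c).astype(float)
    mean = flat.mean(axis=0)
    cov = np.cov(flat, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    # scale each eigen-axis from sqrt(eigval) to target_sigma
    scale = np.diag(target_sigma / np.sqrt(np.maximum(eigval, 1e-12)))
    stretch = eigvec @ scale @ eigvec.T
    out = (flat - mean) @ stretch.T + mean
    return out.reshape(h, w, c)
```

After the transform the channel covariance is diagonal with variance `target_sigma**2`, which is what spreads cataract-compressed colours apart.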
Citations: 0
Genetic algorithm for feature selection in mammograms for breast masses classification
Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization Pub Date : 2023-10-19 DOI: 10.1080/21681163.2023.2266031
G Vaira Suganthi, J Sutha, M Parvathy, N Muthamil Selvi
Abstract: This paper introduces a computer-aided detection (CAD) system for categorising breast masses in mammogram images from the DDSM database as benign, malignant, or normal. The CAD pipeline involves pre-processing, segmentation, feature extraction, feature selection, and classification. Three feature selection methods are used: the genetic algorithm (GA), the t-test, and particle swarm optimisation (PSO). In the classification phase, three machine learning algorithms (kNN, multiSVM, and Naive Bayes) are explored. Evaluation metrics including accuracy, AUC, precision, recall, F1-score, MCC, Dice coefficient, and Jaccard coefficient are used for performance assessment, and training and testing accuracy are assessed for the three classes. The system is evaluated using nine algorithm combinations, producing the following AUC values: GA+kNN (0.93), GA+multiSVM (0.88), GA+NB (0.91), t-test+kNN (0.91), t-test+multiSVM (0.86), t-test+NB (0.89), PSO+kNN (0.89), PSO+multiSVM (0.85), and PSO+NB (0.86). The study shows that the GA and kNN combination outperforms the others.

Keywords: mammograms; breast mass; feature selection; genetic algorithm

Disclosure statement: No potential conflict of interest was reported by the author(s).

Funding: No funding was used to complete this project.

Notes on contributors: Dr. Vaira Suganthi G has 20 years of teaching experience; her areas of interest include image processing and machine learning. Dr. Sutha J has more than 25 years of teaching experience; her areas of interest include image processing and machine learning. Dr. Parvathy M has more than 20 years of teaching experience; her areas of interest include image processing, data mining, and machine learning. Ms. Muthamil Selvi N has 1 year of teaching experience; her area of interest is machine learning.
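A GA for feature selection typically encodes each candidate feature subset as a bit-mask and evolves a population through selection, crossover, and mutation. A minimal stdlib sketch (the `fitness` callback, e.g. cross-validated classifier accuracy, is supplied by the caller; all hyperparameters here are illustrative, not the paper's):

```python
import random

def ga_feature_select(fitness, n_features, pop_size=20, generations=40,
                      cx_rate=0.8, mut_rate=0.05, seed=0):
    """Tiny genetic algorithm: each chromosome is a bit-mask over features,
    scored by fitness(mask) -> float (higher is better)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(pop_size)]

    def tournament():
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            if rng.random() < cx_rate:              # one-point crossover
                cut = rng.randrange(1, n_features)
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            # bit-flip mutation
            child = [1 - g if rng.random() < mut_rate else g for g in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)
```

The returned mask says which extracted mammogram features to keep before training kNN, multiSVM, or Naive Bayes.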
Citations: 0
Hybrid generative model for grading the severity of diabetic retinopathy images
Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization Pub Date : 2023-10-15 DOI: 10.1080/21681163.2023.2266048
R. Bhuvaneswari, M. Diviya, M. Subramanian, Ramya Maranan, R Josphineleela
Abstract: One of the common eye conditions affecting patients with diabetes is diabetic retinopathy (DR), characterised by progressive impairment of the blood vessels as the glucose level in the blood increases. Grading remains challenging because of intra-class variation and imbalanced data distributions in retinal images. Traditional machine learning techniques use hand-engineered features to classify affected retinal images; since convolutional neural networks achieve better image classification accuracy on many medical images, this work uses CNN-based feature extraction. The extracted features are used to build a Gaussian mixture model (GMM) for each class, mapping the CNN features into log-likelihood vector spaces. Because a GMM can be realised as a mixture of parametric and nonparametric density models, it is flexible in capturing different data distributions and offers probabilistic outputs, interpretability, efficient parameter estimation, and robustness to outliers; the proposed model uses it to obtain a smooth approximation of the underlying feature distribution for training. These vector spaces are then used to train an SVM classifier. Experimental results illustrate the efficacy of the proposed model, with accuracies of 86.3% and 89.1%, respectively.

Keywords: retinal images; CNN feature extraction; support vector machine; Gaussian mixture model

Disclosure statement: No potential conflict of interest was reported by the authors.

Notes on contributors: R. Bhuvaneswari (Member, IEEE) received her Ph.D. degree from Anna University. She is currently an Assistant Professor at the Amrita School of Computing, Amrita Vishwa Vidyapeetham, Chennai, India, with 18 years of teaching experience in engineering. She has authored many publications in international journals and conferences and co-authored a book on computer graphics; her research interests include machine learning and deep learning for image processing applications. M. Diviya received the M.E. degree from Anna University and is currently pursuing a Ph.D. at the Vellore Institute of Technology, Chennai. She is an Assistant Professor at the Amrita School of Computing, Amrita Vishwa Vidyapeetham, Chennai, India, with 7 years of teaching experience; she has authored many publications in international journals, conferences, and book chapters, and her research interests include machine learning and deep learning for image and text processing applications. Subramanian M received a BE degree in Mechanical Engineering in 2008 and ME degrees in computer-aided design and engineering design in 2011 and 2013, respectively. He is pursuing his PhD degree at Anna University, Chennai, Tamilnadu, India, in the field of material…
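The mapping from CNN features to per-class log-likelihood vectors can be sketched with one diagonal Gaussian per class, a deliberate 1-component simplification of the paper's GMMs; the resulting vectors are what the downstream SVM would be trained on:

```python
import math

def fit_diag_gaussian(vectors):
    """Per-dimension mean and variance (a 1-component stand-in for a GMM)."""
    n, d = len(vectors), len(vectors[0])
    mean = [sum(v[j] for v in vectors) / n for j in range(d)]
    var = [max(sum((v[j] - mean[j]) ** 2 for v in vectors) / n, 1e-6)
           for j in range(d)]
    return mean, var

def log_likelihood(x, params):
    """Log-density of x under a diagonal Gaussian."""
    mean, var = params
    return sum(-0.5 * (math.log(2 * math.pi * var[j])
                       + (x[j] - mean[j]) ** 2 / var[j])
               for j in range(len(x)))

def to_loglik_features(x, class_params):
    """Map a CNN feature vector to its per-class log-likelihood vector."""
    return [log_likelihood(x, p) for p in class_params]
```

A real implementation would fit multi-component mixtures (e.g. by EM) per severity grade, but the feature mapping has the same shape: one log-likelihood per class.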
Citations: 0
DiabPrednet: development of attention-based long short-term memory-based diabetes prediction model with optimal weighted feature fusion mechanism
Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization Pub Date : 2023-10-11 DOI: 10.1080/21681163.2023.2258995
S. Nagendiran, S. Rohini, P. Jagadeesan, S. Shankari, R. Harini
Abstract: Machine learning automatically learns from experience and can produce more precise diabetes predictions, but large, inclusive, high-quality datasets are needed to train such networks. In this research, attention-based approaches are designed for predicting diabetes in affected individuals. The collected diabetes data is first cleaned to obtain noise-free data for the prediction task. Feature set 1 is extracted with an autoencoder and feature set 2 with a 1-dimensional convolutional neural network (1D-CNN). These two feature sets are fused adaptively through weighted feature fusion, where the weights of the selected features are optimised by an Enhanced Path Finder Algorithm (EPFA) for more accurate results. The weighted fused features feed the prediction phase, in which an Attention-based Long Short-Term Memory (ALSTM) network, with its architecture also optimised by the improved PFA, predicts diabetes. In the result analysis, the designed method attains a 95% accuracy and a 92% precision rate. Finally, the proposed method is compared against existing prediction methods to showcase its performance.

Keywords: diabetes prediction; autoencoder; 1-dimensional convolutional neural network; attention-based long short-term memory; enhanced path finder algorithm

Disclosure statement: No potential conflict of interest was reported by the author(s).
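The weighted-fusion idea can be sketched as a convex combination of the two feature sets, with the weight tuned against a validation score; plain random search stands in here for the Enhanced Path Finder Algorithm, whose update rules the abstract does not give:

```python
import random

def weighted_fuse(feats_a, feats_b, w):
    """Convex-weighted fusion of two equal-length feature vectors:
    w scales the autoencoder features, (1 - w) the 1D-CNN features."""
    return [w * a + (1 - w) * b for a, b in zip(feats_a, feats_b)]

def optimise_weight(score_fn, iters=500, seed=0):
    """Stand-in for the paper's EPFA: random search over the fusion
    weight w in [0, 1], keeping the best validation score."""
    rng = random.Random(seed)
    best_w, best_s = 0.5, score_fn(0.5)
    for _ in range(iters):
        w = rng.random()
        s = score_fn(w)
        if s > best_s:
            best_w, best_s = w, s
    return best_w
```

In the paper, `score_fn` would wrap training and validating the ALSTM on the fused features; any metaheuristic (PFA, EPFA, PSO) slots into the same interface.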
Citations: 0
Impact of a generalised SVG-based large-scale super-resolution algorithm on the design of light-weight medical image segmentation DNNs
Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization Pub Date : 2023-10-11 DOI: 10.1080/21681163.2023.2266008
Mina Esfandiarkhani, Amir Hossein Foruzan
Abstract: Setting up a complex CNN requires a powerful platform, several hours of run-time, and a lot of training data. Here, we propose a generalised lightweight solution that exploits super-resolution and Scalable Vector Graphics (SVG) and uses a small-scale UNet as the baseline framework to segment different organs in MR and CT data. We selected the UNet because many researchers use it as a baseline, modify it in their proposals, and perform ablation studies to show the effectiveness of the proposed modification. First, we downsample the input 2D CT slices by bicubic interpolation. Starting from the architecture of the conventional UNet, we reduce the size of the network's input and the number of layers and filters to construct a lightweight UNet. The network segments the low-resolution images and produces the mask of an organ. We then upscale the boundary of the output mask with the SVG technique to obtain the final border. This design reduces the number of parameters and the run-time by a factor of two. We segmented several tissues to demonstrate the method's stability across organ types. The experiments proved the feasibility of setting up complex deep neural networks on conventional platforms.

Keywords: lightweight deep neural networks; scalable vector graphics; generalised segmentation frameworks; medical image segmentation

Disclosure statement: No potential conflict of interest was reported by the author(s).

Notes on contributors: Mina Esfandiarkhani received a B.Sc. degree from the Azad University of Qazvin in 2013 and an M.Sc. degree in Biomedical Engineering from the Shahed University of Tehran in 2016. She is currently pursuing a Ph.D. degree in the Biomedical Engineering faculty of Shahed University; her research interests include machine learning, computer vision, medical image processing, and artificial intelligence. Amir Hossein Foruzan received his B.S. in Telecommunication Engineering from the Sharif University of Technology, and his M.S. and Ph.D. in Biomedical Engineering from Tehran University. Since 2011, he has been a faculty member of Shahed University; his research interest is medical image processing.
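Two pieces of this pipeline are easy to sketch: downsampling the input slice, and upscaling the predicted boundary as vector geometry rather than as a raster mask. A numpy sketch (block-mean in place of the paper's bicubic interpolation; bare polygon vertices in place of full SVG paths):

```python
import numpy as np

def downsample2x(img):
    """2x2 block-mean downsampling of a 2D slice
    (a simple stand-in for bicubic interpolation)."""
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upscale_contour(points, factor=2.0):
    """The vector-graphics idea: the low-res mask boundary is a polygon,
    and polygon vertices rescale exactly instead of being re-rasterised."""
    return [(x * factor, y * factor) for x, y in points]
```

The lightweight UNet sees the half-resolution slice, and only its output contour is scaled back up, which is why the parameter count and run-time roughly halve.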
Citations: 0
Analysis of generalizability on predicting COVID-19 from chest X-ray images using pre-trained deep models
Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization Pub Date : 2023-10-11 DOI: 10.1080/21681163.2023.2264408
Natalia de Sousa Freire, Pedro Paulo de Souza Leão, Leonardo Albuquerque Tiago, Alberto de Almeida Campos Gonçalves, Rafael Albuquerque Pinto, Eulanda Miranda dos Santos, Eduardo Souto
Abstract: Machine learning methods have been extensively employed to predict COVID-19 from chest X-ray images in numerous studies. However, to be truly valuable, a machine learning model must exhibit robustness and provide reliable predictions for diverse populations beyond those represented in its training data. Unfortunately, the assessment of model generalisability is frequently overlooked in the current literature. In this study, we investigate the generalisability of three classification models (ResNet50v2, MobileNetv2, and Swin Transformer) for predicting COVID-19 from chest X-ray images. We adopt three concurrent evaluation approaches: an internal-and-external validation procedure, lung region cropping, and image enhancement. The results show that the combined approaches allow deep models to achieve similar internal and external generalisation capability.

Keywords: COVID-19; X-ray; machine learning

Disclosure statement: No potential conflict of interest was reported by the author(s).

Notes:
1. https://github.com/dirtmaxim/lungs-finder
2. https://keras.io/examples/vision/swin_transformers/
3. https://www.kaggle.com/c/rsna-pneumonia-detection-challenge
4. https://github.com/agchung/Actualmed-COVID-chestxray-dataset
5. https://github.com/agchung/Figure1-COVID-chestxray-dataset

Funding: The present work is the result of the Research and Development (R&D) project 001/2020, signed with the Federal University of Amazonas and FAEPI, Brazil, funded by Samsung using resources from the Informatics Law for the Western Amazon (Federal Law no. 8.387/1991); its disclosure is in accordance with article 39 of Decree No. 10.521/2020.

Notes on contributors: Natalia de Sousa Freire is a Software Engineering student at the Federal University of Amazonas (UFAM); her main research interests include machine learning and computer vision. Pedro Paulo de Souza Leão obtained his Bachelor's degree in Software Engineering from the Federal University of Amazonas (Brazil) in 2023; his main research interest is machine learning. Leonardo de Albuquerque Tiago is pursuing a Bachelor's degree in Software Engineering at the Federal University of Amazonas; his main research interests are machine learning and software testing. Alberto de Almeida Campos Gonçalves received his B.S. degree in Computer Science from the Federal University of Amazonas in 2022; his research interests include machine learning and computer vision. Rafael Albuquerque Pinto received his B.S. degree in Computer Science from the Federal University of Roraima (UFRR) in 2017 and his M.Sc. degree in Informatics from the Federal University of Amazonas (UFAM) in 2022. He is currently pursuing a Ph.D. degree in Informatics at UFAM, focusing his research on biosignals using machine learning tech…
Cited: 0
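The abstract above describes an internal-and-external validation procedure: a model is trained on one cohort, then scored both on a held-out split of that same cohort (internal) and on an entirely separate cohort (external) to expose generalisation gaps. The sketch below illustrates that protocol only in outline; it is not the paper's pipeline — synthetic feature vectors stand in for chest X-ray inputs, the `shift` parameter is a hypothetical stand-in for population differences, and a logistic regression replaces the deep models.

```python
# Minimal sketch of internal-and-external validation, assuming synthetic
# two-class cohorts in place of the paper's chest X-ray datasets.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_cohort(n, shift=0.0):
    """Synthetic cohort; `shift` crudely models a population difference."""
    X = rng.normal(shift, 1.0, size=(n, 16))
    y = (X[:, 0] + rng.normal(0, 0.5, size=n) > shift).astype(int)
    return X, y

# Internal evaluation: train/test split drawn from the same population.
X_int, y_int = make_cohort(400)
X_tr, X_te, y_tr, y_te = train_test_split(
    X_int, y_int, test_size=0.3, random_state=0)

# External evaluation: same task, different population (distribution shift).
X_ext, y_ext = make_cohort(200, shift=0.4)

clf = LogisticRegression().fit(X_tr, y_tr)
internal_acc = accuracy_score(y_te, clf.predict(X_te))
external_acc = accuracy_score(y_ext, clf.predict(X_ext))
print(f"internal accuracy: {internal_acc:.2f}")
print(f"external accuracy: {external_acc:.2f}")
```

A large gap between the two accuracies is the signal the study looks for; the paper's contribution is that lung cropping and image enhancement shrink that gap for the deep models.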
Detection and prediction of diabetes using effective biomarkers
Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization Pub Date : 2023-10-05 DOI: 10.1080/21681163.2023.2264937
Mohammad Ehsan Farnoodian, Mohammad Karimi Moridani, Hanieh Mokhber
{"title":"Detection and prediction of diabetes using effective biomarkers","authors":"Mohammad Ehsan Farnoodian, Mohammad Karimi Moridani, Hanieh Mokhber","doi":"10.1080/21681163.2023.2264937","DOIUrl":"https://doi.org/10.1080/21681163.2023.2264937","url":null,"abstract":"ABSTRACTDiabetes is a prevalent and costly condition, with early diagnosis pivotal in mitigating ‎its progression and complications. The diagnostic process often contends with data ‎ambiguity and decision uncertainty, adding complexity to achieving definitive ‎outcomes. This study addresses the diabetes diagnostic challenge through data mining ‎and machine learning techniques. It involves training various machine learning ‎algorithms and conducting statistical analysis on a dataset comprising 520 patients, ‎encompassing both normal and diabetic cases, to discern influential features.‎ Incorporating 17 features as classifier inputs, this research evaluates the diagnostic ‎performance using four reputable techniques: support vector machine (SVM), random ‎forest (RF), multi-layer perceptron (MLP), and k-nearest neighbor (kNN). The outcomes ‎underscore the SVM model's superior performance, boasting accuracy, specificity, and ‎sensitivity values of 98.78±1.96%, 99.28±1.63%, and 97.32±2.45%, ‎respectively, across 50 iterations. The findings establish SVM as the preferred method ‎for diabetes diagnosis.‎ This study highlights the efficacy of data mining and machine learning models in ‎diabetes diagnosis. 
While these methods exhibit respectable predictive accuracy, their ‎integration with a physician's assessment promises even better patient outcomes.‎KEYWORDS: Data miningdiabetesSVMdetectionprediction Abbreviations ANN=Artificial Neural NetworkAUC=Area under CurveCDC=Centers for Disease ControlCPCSSN=Canadian Primary Care Sentinel Surveillance NetworkDT=Decision TreeFN=False NegativeFP=False PositivekNN=k Nearest NeighborLDA=Linear Discrimination AnalysisLR=Logistic RegressionML=Machine LearningMLP=Multi-Layer PerceptronNB=Naive BayesianPIDD=Pima Indians Diabetes DatasetRF=Random ForestROC=Receiver Operating CharacteristicSVM=Support Vector MachineTN=True NegativeTP=True PositiveUKPDS=UK Prospective Diabetes StudyDisclosure statementNo potential conflict of interest was reported by the author(s)Authors’ contributionsAll authors evenly contributed to the whole work. All authors read and approved the final manuscript.Availability of data and materialsThe data used in this paper is cited throughout the paper.Ethical approvalThis article does not contain any studies with human participants performed by any of the authors.Additional informationFundingNo source of funding for this work.Notes on contributorsMohammad Ehsan FarnoodianMohammad Ehsan Farnoodian received a B.S. degree in biomedical engineering-‎‎bioelectric from Tehran Medical Science, Islamic Azad University, Tehran, Iran, ‎and earned his M.S. degree in biomedical engineering-bioelectric from Science and ‎Research branch, Islamic Azad University, Tehran, Iran, in 2023. He is passionately ‎dedicated to the examination and interpretation of biomedical data, particularly in ‎the context of disease prediction and detection. 
His academic pursuits involve in-‎depth exploration of biomedical data analysi","PeriodicalId":51800,"journal":{"name":"Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135482832","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited: 0
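The diabetes abstract above compares four classifiers (SVM, RF, MLP, kNN) over repeated random splits, reporting accuracy, sensitivity, and specificity derived from the confusion matrix. The sketch below reproduces only the shape of that experiment: the 520-sample, 17-feature dataset is synthetic (the paper's patient data is not included in this listing), the iteration count is reduced from the paper's 50 to keep the sketch fast, and all hyperparameters are illustrative defaults rather than the authors' tuned settings.

```python
# Hedged sketch of the four-classifier comparison, on synthetic data with the
# same nominal dimensions (520 samples, 17 features) as the study's dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=520, n_features=17, random_state=0)

models = {
    "SVM": SVC(),
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
    "MLP": MLPClassifier(max_iter=500, random_state=0),
    "kNN": KNeighborsClassifier(n_neighbors=5),
}

results = {}
for name, model in models.items():
    accs, sens, specs = [], [], []
    for it in range(5):  # the paper averages over 50 iterations
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.3, random_state=it, stratify=y)
        y_pred = model.fit(X_tr, y_tr).predict(X_te)
        tn, fp, fn, tp = confusion_matrix(y_te, y_pred).ravel()
        accs.append((tp + tn) / (tp + tn + fp + fn))
        sens.append(tp / (tp + fn))    # sensitivity = TP / (TP + FN)
        specs.append(tn / (tn + fp))   # specificity = TN / (TN + FP)
    results[name] = (np.mean(accs), np.mean(sens), np.mean(specs))
    print(f"{name}: acc={results[name][0]:.3f} "
          f"sens={results[name][1]:.3f} spec={results[name][2]:.3f}")
```

Reporting mean ± standard deviation over repeated stratified splits, as the paper does, guards against a single lucky partition inflating the headline accuracy.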