{"title":"Genesis of the T-wave through various modes of ventricular recovery patterns using the equivalent dipole layer model","authors":"Manon Kloosterman , Iris van der Schaaf , Machteld J. Boonstra , Thom F. Oostendorp , Veronique M.F. Meijborg , Ruben Coronel , Peter Loh , Peter M. van Dam","doi":"10.1016/j.compbiomed.2025.110016","DOIUrl":"10.1016/j.compbiomed.2025.110016","url":null,"abstract":"<div><h3>Background</h3><div>The equivalent dipole layer (EDL) relates local endocardial and epicardial transmembrane potentials to body surface potentials and can therefore be used to gain insight into cardiac activation and recovery. To use the EDL-source model for the inverse problem of electrocardiography, initial estimates for local activation times (LAT) and recovery times (LRT) are required because of its non-linear relation with body surface potentials.</div></div><div><h3>Objective</h3><div>To develop an AT-independent initial RT estimate in the EDL-source model.</div></div><div><h3>Methods</h3><div>Body surface mapping (BSM) and cardiac imaging were performed in 15 subjects. LAT and LRT were estimated using the EDL-source model. Various ventricular recovery patterns were tested to investigate the relation between recovery patterns and normal T-waves, including LAT-dependent-recovery and RT differences along transmural, interventricular, anterior-posterior and apico-basal axes. A new algorithm was developed based on the backwards modeling of the T-wave (<em>BackRep</em>) to identify the latest area of recovery. 
Correlation coefficient (CC) and relative difference (RD) between the recorded and computed T-waves were reported.</div></div><div><h3>Results</h3><div><em>BackRep</em> (CC = 0.89 [IQR:0.83–0.90]; RD = 0.63 [IQR:0.49–0.69]), outperformed the anatomical axes based recovery patterns (CC = 0.29 [IQR:0.21–0.46] – 0.79 [IQR:0.78–0.83]; RD = 1.02 [IQR:0.98–1.18] – 0.61 [IQR:0.57–0.68]) and LAT-based recovery pattern (CC = 0.63 [IQR:0.60–0.73]; RD = 4.35 [IQR:2.74–9.05]). Of the RT differences along the anatomical axes, the apico-basal recovery pattern showed the best match between recorded and computed T-waves. A significant apex-to-base RT difference was also found in the <em>BackRep</em> recovery maps.</div></div><div><h3>Conclusion</h3><div><em>BackRep</em> provides a reliable AT-independent initial RT estimate and supports the presence of an apex-to-base RT difference in normal T-wave morphology.</div></div>","PeriodicalId":10578,"journal":{"name":"Computers in biology and medicine","volume":"189 ","pages":"Article 110016"},"PeriodicalIF":7.0,"publicationDate":"2025-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143637709","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
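The agreement metrics this abstract reports — correlation coefficient (CC) and relative difference (RD) between recorded and computed T-waves — can be sketched in plain Python. This is illustrative only: the RD normalization used here (residual norm over recorded-signal norm) is a common convention and our assumption, not necessarily the paper's exact formula.

```python
import math

def pearson_cc(x, y):
    # Pearson correlation coefficient between two sampled waveforms.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def relative_difference(recorded, computed):
    # Assumed definition: Euclidean norm of the residual divided by the
    # norm of the recorded signal (the paper may normalize differently).
    residual = math.sqrt(sum((r - c) ** 2 for r, c in zip(recorded, computed)))
    return residual / math.sqrt(sum(r ** 2 for r in recorded))
```

A perfectly shape-matched but amplitude-scaled T-wave gives CC = 1 while RD stays nonzero, which is why the paper reports both.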
{"title":"Deep learning by Vision Transformer to classify bacterial and fungal keratitis using different types of anterior segment images","authors":"Yeo Kyoung Won , Choong Han Kim , Jooyoung Jeon , Jiho Cha , Dong Hui Lim","doi":"10.1016/j.compbiomed.2025.109976","DOIUrl":"10.1016/j.compbiomed.2025.109976","url":null,"abstract":"<div><h3>Purpose</h3><div>To develop three novel Vision Transformer (ViT) frameworks for the specific diagnosis of bacterial and fungal keratitis using different types of anterior segment images and compare their performances.</div></div><div><h3>Design</h3><div>Retrospective study.</div></div><div><h3>Methods</h3><div>A ViT was used to classify bacterial and fungal keratitis. We integrated one or more ViTs by adding a vector or by using self-attention to combine different types of anterior segment images (broad-beam, slit-beam, and blue-light). We compared the area under the receiver operating characteristic curve (AUROC) and area under the precision-recall curve (AUPRC) of the models. Cross-validation was performed thrice, and there was no overlap between the validation sets. The training/validation set was divided in an 8:2 ratio based on the number of individuals.</div></div><div><h3>Results</h3><div>A total of 283 broad-beam, 610 slit-beam, and 342 blue-light images were obtained from 79 patients. 62 (78 %) patients were assigned for training and 17 (22 %) for validation. The AUROC of ViT with broad-beam images was 0.72. The top AUROC score (0.93) was attained by combining the outputs from two ViT models utilizing self-attention, incorporating both broad-beam and slit-beam images. 
Similarly, the highest AUPRC score (0.93) was reached by fusing the outputs from three ViTs with self-attention, involving broad-beam, slit-beam, and blue-light images.</div></div><div><h3>Conclusions</h3><div>Despite the limited dataset, we validated that a ViT with self-attention can learn from different types of images to improve recognition accuracy in diagnosing bacterial and fungal keratitis. By combining two or more types of anterior segment images, a ViT with self-attention meaningfully enhances the diagnostic performance for bacterial and fungal keratitis.</div></div>","PeriodicalId":10578,"journal":{"name":"Computers in biology and medicine","volume":"190 ","pages":"Article 109976"},"PeriodicalIF":7.0,"publicationDate":"2025-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143654753","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
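The fusion step described above — combining the outputs of two or more ViT branches via self-attention — can be sketched abstractly. The function names and the scalar dot-product attention below are ours; the actual models use learned projections:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def fuse_with_attention(embeddings, query):
    # Score each branch embedding against a shared query vector via a
    # dot product, normalize with softmax, and return the weighted sum.
    # A scalar-attention stand-in for the learned self-attention over
    # the ViT branch outputs.
    scores = [sum(q * e for q, e in zip(query, emb)) for emb in embeddings]
    weights = softmax(scores)
    dim = len(embeddings[0])
    fused = [sum(w * emb[i] for w, emb in zip(weights, embeddings))
             for i in range(dim)]
    return fused, weights

# Hypothetical embeddings from a broad-beam branch and a slit-beam branch.
broad_beam = [0.2, 0.9, 0.1]
slit_beam = [0.8, 0.1, 0.3]
fused, weights = fuse_with_attention([broad_beam, slit_beam],
                                     query=[1.0, 0.0, 0.0])
```

The attention weights let the fused representation lean on whichever imaging modality carries more signal for a given case.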
{"title":"Asymmetric Convolution-based GAN Framework for Low-Dose CT Image Denoising","authors":"Naragoni Saidulu, Priya Ranjan Muduli","doi":"10.1016/j.compbiomed.2025.109965","DOIUrl":"10.1016/j.compbiomed.2025.109965","url":null,"abstract":"<div><div>Noise reduction is essential to improve the diagnostic quality of low-dose CT (LDCT) images. In this regard, data-driven denoising methods based on generative adversarial networks (GAN) have shown promising results. However, custom designs with 2D convolution may not preserve the correlation between local and global pixels, which results in the loss of high-frequency (edges/boundaries of lesions) anatomical details. A recent state-of-the-art method demonstrates that using primitive GAN-based methods may introduce structural (shape) distortion. To address this issue, we develop a novel asymmetric convolution-based generator network (ACGNet), constructed from one-dimensional (1D) asymmetric convolutions and a dynamic attention module (DAM). The 1D asymmetric convolutions (1 × 3 & 3 × 1) intensify the representation power of square convolution kernels (3 × 3) in the horizontal and vertical directions. Consequently, we integrate the highlighted low-level CT voxel details with high-level CT-scan features via the proposed DAM. As a result, ACGNet efficiently preserves the local and global pixel relations in denoised LDCT images. Furthermore, we propose a novel neural structure preserving loss (NSPL) through which ACGNet learns the neighborhood structure of CT images, preventing structural (shape) distortion. In addition, ACGNet can reconstruct CT images with human-perceived quality via gradients back-propagated from the feature-based NSPL. Finally, we include a differential content loss in network optimization to restore high-frequency lesion boundaries. 
The proposed method outperforms many state-of-the-art methods on two publicly accessible datasets: the Mayo 2016 dataset (PSNR: 35.2015 dB, SSIM: 0.9560), and Low-dose CT image and projection dataset (PSNR: 35.2825 dB, SSIM: 0.9566).</div></div>","PeriodicalId":10578,"journal":{"name":"Computers in biology and medicine","volume":"190 ","pages":"Article 109965"},"PeriodicalIF":7.0,"publicationDate":"2025-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143654751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
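The claim that 1 × 3 and 3 × 1 asymmetric convolutions strengthen a 3 × 3 kernel in the horizontal and vertical directions rests on the linearity of convolution: three parallel branches collapse into a single 3 × 3 kernel with a reinforced centre row and column, a property exploited by asymmetric-convolution designs. A minimal pure-Python sketch, with all function names ours:

```python
def conv2d_valid(img, ker):
    # Plain 2-D cross-correlation with 'valid' padding.
    kh, kw = len(ker), len(ker[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            row.append(sum(ker[a][b] * img[i + a][j + b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

def pad_h(k1x3):
    # Embed a 1x3 kernel into the middle row of a 3x3 kernel.
    z = [0.0, 0.0, 0.0]
    return [z[:], list(k1x3[0]), z[:]]

def pad_v(k3x1):
    # Embed a 3x1 kernel into the middle column of a 3x3 kernel.
    return [[0.0, k3x1[a][0], 0.0] for a in range(3)]

def fuse(k3x3, k1x3, k3x1):
    # Linearity of convolution: the three parallel branches collapse
    # into one 3x3 kernel with a reinforced centre row/column.
    ph, pv = pad_h(k1x3), pad_v(k3x1)
    return [[k3x3[a][b] + ph[a][b] + pv[a][b] for b in range(3)]
            for a in range(3)]

# Demo: summing the three branch outputs equals one fused-kernel pass.
img = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
k3 = [[0, 1, 0], [1, -4, 1], [0, 1, 0]]   # square kernel
kh = [[1, 0, -1]]                          # 1x3 horizontal kernel
kv = [[1], [0], [-1]]                      # 3x1 vertical kernel
branch_sum = [[sum(o[i][j] for o in (conv2d_valid(img, k3),
                                     conv2d_valid(img, pad_h(kh)),
                                     conv2d_valid(img, pad_v(kv))))
               for j in range(2)] for i in range(2)]
fused_out = conv2d_valid(img, fuse(k3, kh, kv))
```

This equivalence is why the extra 1D branches add horizontal/vertical representation power at no inference cost once fused.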
{"title":"Neuro_DeFused-Net: A novel multi-scale 2DCNN architecture assisted diagnostic model for Parkinson's disease diagnosis using deep feature-level fusion of multi-site multi-modality neuroimaging data","authors":"Sachin Kumar, Sourabh Shastri, Vibhakar Mansotra","doi":"10.1016/j.compbiomed.2025.110029","DOIUrl":"10.1016/j.compbiomed.2025.110029","url":null,"abstract":"<div><h3>Background</h3><div>Neurological disorders, particularly Parkinson's Disease (PD), are serious and progressive conditions that significantly impact patients' motor functions and overall quality of life. Accurate and timely diagnosis is crucial but remains challenging. Understanding the changes in the brain linked to PD requires using neuroimaging modalities like magnetic resonance imaging (MRI). Artificial intelligence (AI), particularly deep learning (DL) methods, can potentially improve the precision of diagnosis.</div></div><div><h3>Method</h3><div>In the current study, we present a novel approach that integrates T1-weighted structural MRI and resting-state functional MRI using multi-site, multi-modality neuroimaging data. To maximize the richness of the data, our approach integrates deep feature-level fusion across these modalities. We propose a custom multi-scale 2D Convolutional Neural Network (CNN) architecture that captures features at different spatial scales, enhancing the model's capacity to learn PD-related complex patterns.</div></div><div><h3>Results</h3><div>With an accuracy of 97.12 %, sensitivity of 97.26 %, F1-Score of 97.63 %, Area Under the Curve (AUC) of 0.99, mean average precision (mAP) of 99.53 %, and Dice Coefficient of 0.97, the proposed Neuro_DeFused-Net diagnostic model performs exceptionally well. 
These results highlight the model's robust ability to distinguish PD patients from Controls (Normal), even across a variety of datasets and neuroimaging modalities.</div></div><div><h3>Conclusions</h3><div>Our findings demonstrate the transformational ability of AI-driven models to facilitate the early diagnosis of PD. The proposed Neuro_DeFused-Net model enables the rapid detection of health markers through fast analysis of complicated neuroimaging data. Thus, timely intervention and individualized treatment strategies lead to improved patient outcomes and quality of life.</div></div>","PeriodicalId":10578,"journal":{"name":"Computers in biology and medicine","volume":"190 ","pages":"Article 110029"},"PeriodicalIF":7.0,"publicationDate":"2025-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143654648","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Impact of workflow variability on image-based intracranial aneurysm hemodynamics","authors":"Samuel Voß , Uli Niemann , Sylvia Saalfeld , Gábor Janiga , Philipp Berg","doi":"10.1016/j.compbiomed.2025.110018","DOIUrl":"10.1016/j.compbiomed.2025.110018","url":null,"abstract":"<div><div>The interplay between intracranial aneurysm progression and hemodynamics motivates the application of image-based blood flow quantification, providing potential for the identification of high-risk aneurysms, treatment planning, and implant optimization. However, uncertainties arise throughout the interdisciplinary process, from medical imaging to parameter evaluation.</div><div>This study systematically analyzes uncertainty globally, at individual workflow steps and for potential interactions. Eight factors affecting hemodynamic simulation accuracy — image reconstruction, lumen segmentation, surface smoothing, rheological modeling, inlet/outlet boundary condition, ostium/parent vessel definition — are varied for four representative patient-specific intracranial aneurysms. A total of 1024 transient simulations are evaluated considering twelve hemodynamic parameters to assess marginal and interaction effects.</div><div>Global uncertainty analysis reveals median absolute deviations of 20.8–25.9 % for maximum velocity (Vmax), 6.8–19.2 % for inflow concentration index (ICI), 10.8–40.8 % for normalized wall shear stress (WSSnorm) and 2.8–48.9 % for low shear area (LSA). Isolated variation demonstrates the highest median deviations for the reconstruction algorithm (Vmax: 4.0–7.0 %, ICI: 6.8–18.9 %, WSSnorm: 13.3–25.1 %, LSA: 2.4–16.0 %), inlet (Vmax: 41.5–52.4 %, ICI: 1.4–8.6 %, WSSnorm: 14.6–28.5 %, LSA: 5.5–93.5 %) and outlet boundary condition (Vmax: 2.0–36.5 %, ICI: 0.6–39.9 %, WSSnorm: 2.4–83.2 %, LSA: 1.9–53.5 %). Lowest median deviations are found for rheological modeling and surface smoothing. 
Only minor interaction effects are observed between the reconstruction algorithm and inlet definition, as well as between inlet and outlet definitions.</div><div>This study identifies pivotal variables essential for consistent hemodynamic quantification of intracranial aneurysms. Minimal interaction effects validate the isolated analysis of influencing factors in the majority of cases.</div></div>","PeriodicalId":10578,"journal":{"name":"Computers in biology and medicine","volume":"190 ","pages":"Article 110018"},"PeriodicalIF":7.0,"publicationDate":"2025-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143654749","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
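The deviation statistic this abstract reports can be reproduced in miniature. Our reading — a percent deviation of each workflow variant from a reference configuration, summarized by the median — is an assumption about the paper's exact definition, and the velocity values below are hypothetical:

```python
from statistics import median

def median_abs_pct_deviation(values, reference):
    # Percent deviation of each workflow variant from the reference
    # configuration, summarized by the median (robust to outliers).
    return median(abs(v - reference) / reference * 100.0 for v in values)

# Hypothetical maximum-velocity results (m/s) from simulations that
# vary one workflow factor while holding the others fixed.
vmax_variants = [1.10, 0.95, 1.30, 1.02, 0.88]
vmax_reference = 1.00
dev = median_abs_pct_deviation(vmax_variants, vmax_reference)
```

Using the median rather than the mean keeps a single pathological variant (here the 30 % outlier) from dominating the reported uncertainty.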
{"title":"Time-series atlases of the rat brain after middle cerebral artery occlusion using FDG-PET images","authors":"Chenrui Li , Wuxian He , Xuechen Zhang , Hongtu Tang , Jia Li , Xiaoyan Shen , Huafeng Liu , Weichuan Yu","doi":"10.1016/j.compbiomed.2025.109977","DOIUrl":"10.1016/j.compbiomed.2025.109977","url":null,"abstract":"<div><div>The middle cerebral artery occlusion (MCAO) procedure is widely used in ischemic stroke research. When using functional 18F-fluorodeoxyglucose positron emission tomography (FDG-PET) images to study ischemic stroke, the difficulty in determining the region of interest (ROI) comes from two aspects: the large variations due to differences in uptake and reaction time and the consistency of different intensity normalization methods among subjects.</div><div>Using the rat as a model animal, we propose time-series atlases of ischemic stroke after MCAO based on the PET images to annotate changes in ROIs. Concretely, we spatially align serial scans with a built PET template, use histograms of orientated gradient (HOG) features to detect lesion boundaries, and combine them with results from an intensity-based detection method to construct probability maps at different time points with the Bernoulli mixture model (BMM). Simulated PET images with known ground truth and triphenyl tetrazolium chloride (TTC) staining slices validate the correctness of the time-series atlases. 
We demonstrate that these atlases could provide references when tracking the spatial–temporal dynamic development of lesions in rat brains.</div></div>","PeriodicalId":10578,"journal":{"name":"Computers in biology and medicine","volume":"190 ","pages":"Article 109977"},"PeriodicalIF":7.0,"publicationDate":"2025-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143654649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
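A probability map over binary lesion detections, as built in the atlas pipeline above, can be sketched with the per-voxel Bernoulli maximum-likelihood estimate. This is a deliberate simplification: the paper fits a full Bernoulli mixture model (BMM), whereas the function below (name ours) just takes the voxel-wise fraction of positive masks:

```python
def lesion_probability_map(binary_masks):
    # Per-voxel maximum-likelihood Bernoulli parameter: the fraction of
    # subjects whose detected lesion covers that voxel. A stand-in for
    # the paper's Bernoulli mixture model (BMM).
    n = len(binary_masks)
    rows, cols = len(binary_masks[0]), len(binary_masks[0][0])
    return [[sum(m[i][j] for m in binary_masks) / n for j in range(cols)]
            for i in range(rows)]

# Three toy 2x3 lesion masks (e.g. three rats at one time point).
masks = [
    [[1, 1, 0], [0, 1, 0]],
    [[1, 0, 0], [0, 1, 1]],
    [[1, 1, 0], [0, 0, 1]],
]
pmap = lesion_probability_map(masks)
```

Repeating this per time point yields the time-series of probability maps; the mixture model additionally separates distinct lesion sub-populations, which the plain average cannot.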
{"title":"CACTUS: An open dataset and framework for automated Cardiac Assessment and Classification of Ultrasound images using deep transfer learning","authors":"Hanae Elmekki , Ahmed Alagha , Hani Sami , Amanda Spilkin , Antonela Mariel Zanuttini , Ehsan Zakeri , Jamal Bentahar , Lyes Kadem , Wen-Fang Xie , Philippe Pibarot , Rabeb Mizouni , Hadi Otrok , Shakti Singh , Azzam Mourad","doi":"10.1016/j.compbiomed.2025.110003","DOIUrl":"10.1016/j.compbiomed.2025.110003","url":null,"abstract":"<div><div>Cardiac ultrasound (US) scanning is one of the most commonly used techniques in cardiology to diagnose the health of the heart and its proper functioning. During a typical US scan, medical professionals take several images of the heart to be classified based on the cardiac views they contain, with a focus on high-quality images. However, this task is time consuming and error prone. Therefore, it is necessary to consider ways to automate these tasks and assist medical professionals in classifying and assessing cardiac US images. Machine learning (ML) techniques are regarded as a prominent solution due to their success in the development of numerous applications aimed at enhancing the medical field, including addressing the shortage of echography technicians. However, the limited availability of medical data presents a significant barrier to the application of ML in the field of cardiology, particularly regarding US images of the heart. This paper addresses this challenge by introducing the first open graded dataset for Cardiac Assessment and ClassificaTion of UltraSound (CACTUS), which is available online. This dataset contains images obtained from scanning a CAE Blue Phantom and representing various heart views and different quality levels, exceeding the conventional cardiac views typically found in literature. Additionally, the paper introduces a Deep Learning (DL) framework consisting of two main components. 
The first component is responsible for classifying cardiac US images based on the heart view using a Convolutional Neural Network (CNN) architecture. The second component uses the concept of Transfer Learning (TL) to utilize knowledge from the first component and fine-tune it to create a model for grading and assessing cardiac images. The framework demonstrates high performance in both classification and grading, achieving up to 99.43% accuracy and as low as 0.3067 error, respectively. To showcase its robustness, the framework is further fine-tuned using new images representing additional cardiac views and also compared to several other state-of-the-art architectures. The framework’s outcomes and its performance in handling real-time scans were also assessed using a questionnaire answered by cardiac experts.</div></div>","PeriodicalId":10578,"journal":{"name":"Computers in biology and medicine","volume":"190 ","pages":"Article 110003"},"PeriodicalIF":7.0,"publicationDate":"2025-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143654650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multiple kidney stones prediction with efficient RT-DETR model","authors":"Ponduri Vasanthi , Lingamallu Naga Srinivasu , Ventrapragada Teju , Kambhampati Venkata Sowmya , Anca Stan , Valentin Sita , Liviu Miclea , Ovidiu Stan","doi":"10.1016/j.compbiomed.2025.110023","DOIUrl":"10.1016/j.compbiomed.2025.110023","url":null,"abstract":"<div><h3>Background and objective</h3><div>Kidney stone detection (KSD) is a critical aspect of medical imaging that necessitates precise identification and classification through computed tomography (CT) scans. The precise identification of the specific type of kidney stone is imperative for the administration of effective treatment. Although contemporary models demonstrate proficiency in classification tasks, they often face challenges when multiple stones are present within a single CT slice and frequently suffer from slow prediction times. This study aims to address these challenges by introducing the RT-DETR model, a novel approach designed for efficient and accurate detection of multiple kidney stones. The code for the proposed model is publicly available at <span><span>https://github.com/vasanthi457/Kidney-Stone-Detection-using-RTDETR.git</span><svg><path></path></svg></span>.</div></div><div><h3>Methods</h3><div>The RT-DETR model utilizes an innovative hybrid encoder that processes multi-scale features through Attention-based Intra-Scale Feature Interaction (AIFI) and a Cross-Scale Feature-fusion Module (CSFM). Furthermore, an Intersection over Union (IoU)-aware query selection mechanism is incorporated to improve the accuracy of detecting heterogeneous kidney stones. 
The model's performance was evaluated using a dataset of CT images annotated with kidney stone locations.</div></div><div><h3>Results</h3><div>The proposed RT-DETR model achieved a precision (P) of 74.3 %, a recall (R) of 91 %, a mean Average Precision (mAP) of 73.3 %, an F1-Score of 73 %, an accuracy of 82.65 %, and a detection time of 1.043 s in detecting small kidney stones. These results surpass those of existing models like Optimized Fusion (OF), Kronecker Convolution (KC), Exemplar Darknet19 (ED19), Deep Learning (DL), Ensemble Learning (EL), YOLOv5, and YOLOv8, which had lower metric values. Moreover, statistical tests confirmed that the RT-DETR model performs significantly differently from the other models in kidney stone prediction.</div></div><div><h3>Conclusions</h3><div>The RT-DETR model represents a significant advancement in kidney stone detection, offering both efficiency and accuracy. By effectively managing multi-scale features and enhancing detection with IoU-aware queries, the RT-DETR model improves diagnostic capabilities and treatment strategies for patients with kidney stones. This innovation has the potential to streamline clinical workflows and enhance patient outcomes through more precise and timely diagnosis.</div></div>","PeriodicalId":10578,"journal":{"name":"Computers in biology and medicine","volume":"190 ","pages":"Article 110023"},"PeriodicalIF":7.0,"publicationDate":"2025-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143654750","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
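The IoU computation underpinning the "IoU-aware query selection" mentioned above is standard for axis-aligned boxes; the ranking helper below is a toy stand-in for the learned mechanism (both function names are ours):

```python
def iou(box_a, box_b):
    # Boxes are (x1, y1, x2, y2) with x1 < x2 and y1 < y2.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def rank_queries(candidates, gt_box):
    # Score each (confidence, box) candidate by confidence * IoU with a
    # ground-truth box, highest first -- a toy stand-in for RT-DETR's
    # learned IoU-aware query selection.
    return sorted(candidates, key=lambda c: c[0] * iou(c[1], gt_box),
                  reverse=True)

# A confident but badly-placed box loses to a well-placed one.
gt = (0, 0, 2, 2)
candidates = [(0.9, (5, 5, 7, 7)), (0.6, (0, 0, 2, 2))]
ranked = rank_queries(candidates, gt)
```

Weighting classification confidence by localization quality is exactly what keeps high-confidence but poorly-localized queries from dominating when several small stones sit in one slice.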
{"title":"Process management in diabetes treatment by blending technique","authors":"Yunus Hazar, Ömer Faruk Ertuğrul","doi":"10.1016/j.compbiomed.2025.110034","DOIUrl":"10.1016/j.compbiomed.2025.110034","url":null,"abstract":"<div><div>Diabetes is a condition marked by persistent metabolic issues and elevated blood glucose levels, which can damage several organs, including the eyes, heart, kidneys and nervous system. Effective management of this disease is vital to mitigate long-term complications. This research uses advanced AI and ML methods, based on data from the E-Nabız personal health record system, to predict blood glucose levels in people with diabetes and identify factors that affect these levels. The study is primarily aimed at monitoring and managing diabetes and investigates whether the condition of diabetic individuals improves. Within this framework, 108 features and 86,115 records sourced from E-Nabız, including lab results, medical history and medication records, were examined to determine key indicators of diabetes management. Features were selected by intersecting the best 20 features identified by nine techniques: SFM, MI, RFE, CHI2, ANOVA, KW, CATB, XGB and LGBM. Selected features were evaluated using a blending technique, with CATB, XGB and LGBM as first-level models and ETC as the meta-model. The blending approach produced strong performance, achieving 92.52 % precision, 92.51 % recall, 92.51 % F1-score and 92.50 % accuracy in the final score. This approach leverages the strengths of different classification models, reducing weaknesses, increasing reliability and improving overall performance by better representing various features of the dataset. While the literature generally focuses on single models or traditional ensemble methods, this work presents a more advanced and effective combination strategy. It also highlights the important role that certain factors such as age, medications and cholesterol levels play in diabetes assessment. 
This study contributes to the literature on both theoretical and practical levels by increasing the applicability of AI in clinical practice and health management. These findings could help healthcare professionals better monitor patients' conditions, develop more personalized approaches, and ensure a positive patient response to treatment.</div></div>","PeriodicalId":10578,"journal":{"name":"Computers in biology and medicine","volume":"190 ","pages":"Article 110034"},"PeriodicalIF":7.0,"publicationDate":"2025-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143654689","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
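The blending scheme described above — first-level models whose outputs feed a meta-model — can be sketched in miniature. The paper trains an Extra Trees meta-model (ETC) on the outputs of CATB, XGB and LGBM; the accuracy-weighted vote below is our simplified stand-in for that learned meta-model, and all data values are hypothetical:

```python
def accuracy(preds, labels):
    # Fraction of holdout predictions that match the true labels.
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def blend(base_holdout_preds, holdout_labels, base_test_preds):
    # Blending in miniature: weight each first-level model by its
    # holdout accuracy, then take a weighted majority vote on the test
    # set. (A learned meta-model such as ETC replaces this vote in the
    # paper's pipeline.)
    weights = [accuracy(p, holdout_labels) for p in base_holdout_preds]
    fused = []
    for i in range(len(base_test_preds[0])):
        tally = {}
        for w, preds in zip(weights, base_test_preds):
            tally[preds[i]] = tally.get(preds[i], 0.0) + w
        fused.append(max(tally, key=tally.get))
    return fused, weights

# Hypothetical holdout labels, per-model holdout predictions (three
# CATB/XGB/LGBM-like base learners), and their test-set predictions.
holdout_labels = [1, 0, 1, 1]
holdout_preds = [[1, 0, 1, 1], [0, 0, 1, 0], [1, 0, 0, 1]]
test_preds = [[1, 0], [0, 0], [0, 1]]
fused, weights = blend(holdout_preds, holdout_labels, test_preds)
```

The key design point blending shares with stacking is that the meta-level is fit on holdout predictions, never on the base models' training data, which prevents the meta-model from learning the base models' overfitting.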
{"title":"Complex wound analysis using AI","authors":"Connor J. Robinson , Bruce Dickie , Claudia Lindner , Jeremy Herrera , Lewis Dingle , Adam J. Reid , Jason K.F. Wong , Paul Hiebert , Timothy F. Cootes , Svitlana Kurinna","doi":"10.1016/j.compbiomed.2025.109945","DOIUrl":"10.1016/j.compbiomed.2025.109945","url":null,"abstract":"<div><div>Impaired wound healing is a significant clinical challenge. Standard wound analysis approaches are macroscopic, with limited histological assessments that rely on visual inspection of haematoxylin and eosin (H&E)-stained sections of biopsies. The analysis is time-consuming, requires a specialist trained to recognise various wound features, and therefore is often omitted in practice. We present an automated deep-learning (DL) approach capable of objectively and comprehensively analysing images of H&E-stained wound sections. Our model has a deep neural network (DNN) architecture, optimised for segmentation of characteristic wound features. We employed our model for the first-time analysis of human complex wounds. Histologically, human wounds are extremely variable, which presented a challenge when segmenting the different tissue classes. To validate our approach, we used mouse wound biopsy images across four timepoints of healing and employed the same DNN architecture for training and analysis in this context (89 % mean test set accuracy). We revised our approach for human complex wounds, analysing the biopsies at a cellular level, where our model performance improved (97 % mean test set accuracy). 
Together, our approach allows: (i) comprehensive analysis of human wound biopsy images; (ii) in-depth analysis of key features of mouse wound healing with accurate morphometric analysis; and (iii) analysis and quantification of immune cell infiltration, to aid clinical diagnosis of human complex wounds.</div></div>","PeriodicalId":10578,"journal":{"name":"Computers in biology and medicine","volume":"190 ","pages":"Article 109945"},"PeriodicalIF":7.0,"publicationDate":"2025-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143654752","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}