Multi-objective RVEA optimization of a closed-cell segmented stent retriever for balancing biomechanical forces
Shuo Wu, Xianming Wang, Bo Wang, Tianxiao Zhang, Shengnan Lyu
Computer Methods and Programs in Biomedicine, vol. 268, article 108816 (published 2025-05-12). DOI: 10.1016/j.cmpb.2025.108816

Abstract: Mechanical thrombectomy for acute ischemic stroke (AIS) faces significant challenges in complex vascular geometries, where existing stent retrievers often struggle to capture thrombus effectively. This study introduces an enhanced closed-cell segmented flexible (ECSF) stent retriever designed to address these challenges and improve thrombus capture, particularly in intricate vascular environments. Fabricated from superelastic Nitinol, the ECSF stent underwent a multi-objective optimization process using the RVEA algorithm to balance key performance metrics, including radial resistive force, hoop force, chronic outward force, and volume, while ensuring the strain remained within Nitinol's superelastic limit. Finite element analysis demonstrated the ECSF stent's superior mechanical performance compared to a commercial stent. In vitro experiments further validated its effectiveness in capturing and removing thrombus in complex vessel conditions, highlighting its potential as a promising solution for AIS treatment, even under suboptimal deployment conditions.
UltrasOM: A Mamba-based network for 3D freehand ultrasound reconstruction using optical flow
Rui Sun, Chuanba Liu, Wenshuo Wang, Yimin Song, Tao Sun
Computer Methods and Programs in Biomedicine, vol. 268, article 108843 (published 2025-05-10). DOI: 10.1016/j.cmpb.2025.108843

Background: Three-dimensional (3D) ultrasound (US) reconstruction is of significant value in clinical diagnosis, characterized by its safety, portability, low cost, and high real-time capabilities. 3D freehand ultrasound reconstruction aims to eliminate the need for tracking devices, relying solely on image data to infer the spatial relationships between frames. However, inherent jitter during handheld scanning introduces significant inaccuracies, making current methods ineffective at precisely predicting the spatial motions of ultrasound image frames. This leads to substantial cumulative errors over long-sequence modeling, resulting in deformations or artifacts in the reconstructed volume. To address these challenges, we propose UltrasOM, a 3D ultrasound reconstruction network designed for spatial relative motion estimation.

Methods: First, we designed a video embedding module that integrates optical flow dynamics with the original static information to enhance motion-change features between frames. Next, we developed a Mamba-based spatiotemporal attention module, utilizing multi-layer stacked Space-Time Blocks to effectively capture global spatiotemporal correlations within video frame sequences. Finally, we incorporated a correlation loss and a motion speed loss to prevent overfitting to scanning speed and pose, enhancing the model's generalization capability.

Results: Experimental results on a dataset of 200 forearm cases, comprising 58,011 frames, demonstrated that the proposed method achieved a final drift rate (FDR) of 10.24 %, a frame-to-frame distance error (DE) of 7.34 mm, a symmetric Hausdorff distance error (HD) of 10.81 mm, and a mean angular error (MEA) of 2.05°, outperforming state-of-the-art methods by 13.24 %, 15.11 %, 3.57 %, and 6.32 %, respectively.

Conclusion: By integrating optical flow features and deeply exploring contextual spatiotemporal dependencies, the proposed network can directly predict the relative motions between multiple frames of ultrasound images without the need for tracking, surpassing the accuracy of existing methods.
{"title":"Multi-positive contrastive learning-based cross-attention model for T cell receptor–antigen binding prediction","authors":"Yi Shuai , Pengcheng Shen , Xianrui Zhang","doi":"10.1016/j.cmpb.2025.108797","DOIUrl":"10.1016/j.cmpb.2025.108797","url":null,"abstract":"<div><h3>Background and Objective:</h3><div>T cells play a vital role in the immune system by recognizing and eliminating infected or cancerous cells, thus driving adaptive immune responses. Their activation is triggered by the binding of T cell receptors (TCRs) to epitopes presented on Major Histocompatibility Complex (MHC) molecules. However, experimentally identifying antigens that could be recognizable by T cells and possess immunogenic properties is resource-intensive, with most candidates proving non-immunogenic, underscoring the need for computational tools to predict peptide-MHC (pMHC) and TCR binding. Despite extensive efforts, accurately predicting TCR-antigen binding pairs remains challenging due to the vast diversity of TCRs.</div></div><div><h3>Methods:</h3><div>In this study, we propose a Contrastive Cross-attention model for TCR (ConTCR) and pMHC binding prediction. Firstly, the pMHC and TCR sequences are transformed into high-level embedding by pretrained encoders as feature representations. Then, we employ the multi-modal cross-attention to combine the features between pMHC sequences and TCR sequences. Next, based on the contrastive learning strategy, we pretrained the backbone of ConTCR to boost the model’s feature extraction ability for pMHC and TCR sequences. Finally, the model is fine-tuned for classification between positive and negative samples.</div></div><div><h3>Results:</h3><div>Based on this advanced strategy, our proposed model could effectively capture the critical information on TCR-pMHC interactions, and the model is visualized by the attention score heatmap for interpretability. ConTCR demonstrates strong generalization in predicting binding specificity for unseen epitopes and diverse TCR repertoires. On independent non-zero-shot test sets, the model achieved AUC-ROC scores of 0.849 and 0.950; on zero-shot test sets, it obtained AUC-ROC scores of 0.830 and 0.938.</div></div><div><h3>Conclusion:</h3><div>Our framework offers a promising solution for improving pMHC-TCR binding prediction and model interpretability. By leveraging the ConTCR model and pMHC-TCR features, we achieve more precise precision than recently advanced models. Overall, ConTCR is a robust tool for predicting pMHC-TCR binding and holds significant promise to advance TCR-based immunotherapies as a valuable artificial intelligence tool. The codes and data used in this study are available at this <span><span>website</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":10624,"journal":{"name":"Computer methods and programs in biomedicine","volume":"268 ","pages":"Article 108797"},"PeriodicalIF":4.9,"publicationDate":"2025-05-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144069061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Prediction of iliac limb occlusion after endovascular aneurysm repair for abdominal aortic aneurysm by anatomical and near-wall hemodynamic characteristics combining numerical simulation and in vitro experiment
Xuelan Zhang, Yue Che, Chunxiao Li, Xiran Cao, Xiaoxiao Wang, Yidan Zhang, Gaoyang Li, Jing Zhu, Mingyao Luo, Chang Shu
Computer Methods and Programs in Biomedicine, vol. 268, article 108845 (published 2025-05-10). DOI: 10.1016/j.cmpb.2025.108845

Background and Objective: Iliac limb occlusion (ILO) is a serious complication of endovascular aneurysm repair (EVAR) for abdominal aortic aneurysm (AAA), and most cases require timely reintervention. This study aims to explore the pathogenesis, risk factors and prediction models of ILO by combining anatomical and near-wall hemodynamic characteristics.

Methods: 75 AAA patients treated with EVAR (occlusion 25; patency 50) were included. Pre-EVAR and early post-EVAR anatomical parameters (proximal neck angulation, radius, curvature, torsion and tortuosity) and near-wall hemodynamic parameters (time-averaged wall shear stress [TAWSS], oscillatory shear index [OSI] and relative residence time [RRT]) were quantified by numerical simulation validated through in vitro experiment. The causation of ILO was investigated statistically from three perspectives: differences between patent and occluded patients, the effect of EVAR, and asymmetries between occluded and patent iliac arteries (IAs). A new parameter, the difference ratio of EVAR (DRE) or of IAs (DRI), was proposed to evaluate the degree of change caused by EVAR and the asymmetry between bilateral IAs.

Results: Occluded patients had higher TAWSS (p = 0.004) and larger torsion of the common IA (p = 0.033) at pre-EVAR than patent patients. At early post-EVAR, OSI and RRT of the future occluded IA were significantly higher than those of the patent IA. The difference ratio of pre-EVAR and early post-EVAR RRT (DRE of RRT) on the occluded IA was also higher than that on the patent IA (p = 0.025). A prediction model for ILO (area under the curve = 0.924) was developed combining anatomical and near-wall hemodynamic variables, in which the DRI of pre-EVAR OSI had the highest odds ratio [OR] of 5.45 (1.77‒16.74, p = 0.003), and the pre-EVAR radius of the external IA had the lowest OR of 0.06 (0.01‒0.45, p = 0.007).

Conclusions: High TAWSS and large torsion at pre-EVAR, and an excessive increase in RRT caused by EVAR (DRE), might induce ILO. Higher asymmetry of pre-EVAR OSI between bilateral IAs (DRI) and a smaller radius of the pre-EVAR external IA were significantly associated with increased ILO risk. These findings can provide theoretical guidance to predict and reduce the risk of ILO.
VesselKnife – software for the analysis of tubular structures in biomedical images
Piotr M. Szczypiński, Artur Klepaczko, Robert Olbrycht
Computer Methods and Programs in Biomedicine, vol. 268, article 108817 (published 2025-05-09). DOI: 10.1016/j.cmpb.2025.108817

Background and Objective: Accurate segmentation and modeling of blood vessels are critical for understanding vascular anatomy and pathology. VesselKnife is specialized software developed to address challenges in radius estimation and 3D reconstruction, particularly for thin vessel segments and complex geometries. This study evaluates the software's accuracy and versatility in medical imaging applications.

Methods: VesselKnife incorporates advanced techniques based on multiscale vesselness functions and principal component analysis to estimate vessel radii and characterize lumen orientation. The algorithms were validated using synthetic 3D images of tubular structures with varying radii and noise levels, as well as real medical imaging data from MRI and CT. Quantitative performance metrics, including relative error, precision, and robustness to noise, were assessed.

Results: The software achieved high accuracy in synthetic data experiments, with radius estimation errors ranging from 2.3 % to 5.7 % for small vessels and up to 2.9 % for larger vessels. It demonstrated robustness to noise and provided reliable segmentation for thin vessel segments. In real imaging scenarios, VesselKnife was effective in modeling cerebral and renal vasculature. Additionally, the tool was applied to confocal microscopy and scaffold modeling, illustrating its adaptability to diverse imaging modalities.

Conclusions: VesselKnife addresses critical gaps in vascular modeling with precise radius estimation and robust performance across varying imaging conditions. The release of a synthetic 3D image database promotes reproducibility and further research. VesselKnife is a reliable tool for researchers and practitioners in biomedical imaging, with potential applications extending beyond vascular studies.
Machine learning-based approaches for distinguishing viral and bacterial pneumonia in paediatrics: A scoping review
Declan Rickard, Muhammad Ashad Kabir, Nusrat Homaira
Computer Methods and Programs in Biomedicine, vol. 268, article 108802 (published 2025-05-08). DOI: 10.1016/j.cmpb.2025.108802

Background and Objective: Pneumonia is the leading cause of hospitalisation and mortality among children under five, particularly in low-resource settings. Accurate differentiation between viral and bacterial pneumonia is essential for guiding appropriate treatment, yet it remains challenging due to overlapping clinical and radiographic features. Advances in machine learning (ML), particularly deep learning (DL), have shown promise in classifying pneumonia using chest X-ray (CXR) images. This scoping review summarises the evidence on ML techniques for classifying viral and bacterial pneumonia using CXR images in paediatric patients.

Methods: This scoping review was conducted following the Joanna Briggs Institute methodology and the PRISMA-ScR guidelines. A comprehensive search was performed in PubMed, Embase, and Scopus to identify studies involving children (0–18 years) with pneumonia diagnosed through CXR, using ML models for binary or multiclass classification. Data extraction included ML models, dataset characteristics, and performance metrics.

Results: A total of 35 studies, published between 2018 and 2025, were included in this review. Of these, 31 studies used the publicly available Kermany dataset, raising concerns about overfitting and limited generalisability to broader, real-world clinical populations. Most studies (n = 33) used convolutional neural networks (CNNs) for pneumonia classification. While many models demonstrated promising performance, significant variability was observed due to differences in methodologies, dataset sizes, and validation strategies, complicating direct comparisons. For binary classification (viral vs bacterial pneumonia), a median accuracy of 92.3 % (range: 80.8 % to 97.9 %) was reported. For multiclass classification (healthy, viral pneumonia, and bacterial pneumonia), the median accuracy was 91.8 % (range: 76.8 % to 99.7 %).

Conclusions: Current evidence is constrained by a predominant reliance on a single dataset and by variability in methodologies, which limit the generalisability and clinical applicability of findings. To address these limitations, future research should focus on developing diverse and representative datasets while adhering to standardised reporting guidelines. Such efforts are essential to improve the reliability, reproducibility, and translational potential of machine learning models in clinical settings.
{"title":"Uncertainty quantification for simulating coronary artery hemodynamics in aneurysms caused by kawasaki disease","authors":"Kieun Choi , Jinyoung Seo , Jongmin Seo","doi":"10.1016/j.cmpb.2025.108834","DOIUrl":"10.1016/j.cmpb.2025.108834","url":null,"abstract":"<div><h3>Background and Objective</h3><div>This study applies an Uncertainty Quantification (UQ) framework to assess the reliability of cardiovascular simulation about coronary artery aneurysms (CAAs) caused by Kawasaki Disease (KD) for advancing clinical decision-making. The objective is to evaluate the impact of uncertainties in hemodynamic metrics, including Wall Shear Stress (WSS), Residence Time (RT), and Fractional Flow Reserve (FFR).</div></div><div><h3>Methods</h3><div>Three patient-specific aorto-coronary anatomic models were used to perform computational fluid dynamics (CFD) simulations. A reduced-order sub-modeling approach was utilized to reduce computational costs. Uncertainties were introduced to input parameters: cardiac output, inflow waveform, in-plane velocity distribution, and intramyocardial pressure. Time-varying signals were perturbed using the Karhunen–Loève expansion. 100 samples per each patient were obtained, assuming standard distributions for input parameters. Sensitivity analysis was conducted to determine the contribution of each parameter to output variability.</div></div><div><h3>Results</h3><div>A 20 % uncertainty in cardiac output and a perturbed inflow waveform with a 7 % process variance caused variability in WSS and RT of 8 % to 35 %. Sensitivity analysis revealed that cardiac output had the most significant impact, contributing over 52 % to output variability, while the inflow waveform contributed 20-30 %. The in-plane velocity distribution influenced WSS and RT by around 10 % but showed varying contributions to FFR —3 % to 27 %. Intramyocardial pressure had a negligible effect.</div></div><div><h3>Conclusions</h3><div>This study is the first to apply UQ to KD-related CAA simulations, driven by clinical needs, with extensive investigations into the uncertain input parameters. The findings highlight cardiac output as the key factor in hemodynamic variability. It emphasizes the need for precise clinical data to enhance simulation-based predictions, particularly in managing CAAs in KD patients.</div></div>","PeriodicalId":10624,"journal":{"name":"Computer methods and programs in biomedicine","volume":"268 ","pages":"Article 108834"},"PeriodicalIF":4.9,"publicationDate":"2025-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143941529","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A myocardial reorientation method based on feature point detection for quantitative analysis of PET myocardial perfusion imaging
Fei Shang, Li Huo, Tan Gong, Peipei Wang, Ximin Shi, Xiaoying Tang, Shuai Liu
Computer Methods and Programs in Biomedicine, vol. 268, article 108837 (published 2025-05-08). DOI: 10.1016/j.cmpb.2025.108837

Objective: Reorienting cardiac positron emission tomography (PET) images to the transaxial plane is essential for cardiac PET image analysis. This study aims to design a convolutional neural network (CNN) for automatic reorientation and to evaluate its generalizability.

Methods: An artificial intelligence (AI) method integrating U-Net and the differentiable spatial to numerical transform module (DSNT-U) was proposed to automatically position three feature points (P_apex, P_base, and P_RV), with these three points manually located by an experienced radiologist serving as the reference standard (RS). A second radiologist performed manual localization for reproducibility evaluation. The DSNT-U, initially trained and tested on a [11C]acetate dataset (training/testing: 40/17), was further compared with a CNN-spatial transformer network (CNN-STN). The network, fine-tuned with 4 subjects, was tested on a [13N]ammonia dataset (n = 30). The performance of the DSNT-U was evaluated in terms of coordinates, volume, and quantitative indexes (pharmacokinetic parameters and total perfusion deficit).

Results: The proposed DSNT-U successfully achieved automatic myocardial reorientation for both the [11C]acetate and [13N]ammonia datasets. For the former dataset, the intraclass correlation coefficients (ICCs) between the coordinates predicted by the DSNT-U and the RS exceeded 0.876. The average normalized mean squared error (NMSE) between the short-axis (SA) images obtained through DSNT-U-based reorientation and the reference SA images was 0.051 ± 0.043. For pharmacokinetic parameters, the R² between the DSNT-U and the RS was larger than 0.968. Compared with the CNN-STN, the DSNT-U demonstrated a higher ICC between the estimated rigid transformation parameters and the RS. After fine-tuning on the [13N]ammonia dataset, the average NMSE between the SA images reoriented by the DSNT-U and the reference SA images was 0.056 ± 0.046. The ICC between the total perfusion deficit (TPD) values computed from DSNT-U-derived images and the reference values was 0.981. Furthermore, no significant differences were observed in the performance of the DSNT-U prediction among subjects of different genders or with varying myocardial perfusion defect (MPD) statuses.

Conclusions: The proposed DSNT-U can accurately position P_apex, P_base, and P_RV on the [11C]acetate dataset. After fine-tuning, the positioning model can be applied to the [13N]ammonia perfusion dataset, demonstrating good generalization performance. The method adapts to data from subjects of different genders, with or without MPD, and to different tracers, showing the potential to replace manual operations.
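The DSNT component referenced in the Methods has a well-known formulation (a differentiable expectation over a spatial softmax); a minimal 2D PyTorch version is sketched below for intuition. The paper works with 3D volumes and its own network, so the shapes and normalisation here are assumptions.

```python
import torch
import torch.nn.functional as F

def dsnt(heatmaps):
    """Differentiable spatial-to-numerical transform (2D, normalised coords in [-1, 1]).

    heatmaps : (B, K, H, W) unnormalised heatmaps, one per landmark (e.g. apex, base, RV).
    Returns (B, K, 2) expected (x, y) coordinates, differentiable w.r.t. the heatmaps.
    """
    b, k, h, w = heatmaps.shape
    probs = F.softmax(heatmaps.reshape(b, k, -1), dim=-1).reshape(b, k, h, w)
    ys = torch.linspace(-1.0, 1.0, h, device=heatmaps.device)
    xs = torch.linspace(-1.0, 1.0, w, device=heatmaps.device)
    x = (probs.sum(dim=2) * xs).sum(dim=-1)   # marginalise rows, then take E[x]
    y = (probs.sum(dim=3) * ys).sum(dim=-1)   # marginalise columns, then take E[y]
    return torch.stack([x, y], dim=-1)

# three landmark heatmaps predicted by some U-Net-like backbone
hm = torch.randn(2, 3, 64, 64, requires_grad=True)
coords = dsnt(hm)
print(coords.shape)            # torch.Size([2, 3, 2])
coords.sum().backward()        # gradients flow back to the heatmaps
```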
{"title":"AI-driven glomerular morphology quantification: a novel pipeline for assessing basement membrane thickness and podocyte foot process effacement in kidney diseases","authors":"Michifumi Yamashita , Natalia Piaseczna , Akira Takahashi , Daisuke Kiyozawa , Narihito Tatsumoto , Shohei Kaneko , Natalia Zurek , Arkadiusz Gertych","doi":"10.1016/j.cmpb.2025.108842","DOIUrl":"10.1016/j.cmpb.2025.108842","url":null,"abstract":"<div><h3>Background and Objective</h3><div>Measuring the thickness of the glomerular basement membrane (GBM) and assessing the percentage of podocyte foot process effacement (%PFPE) are important for diagnosing non-neoplastic kidney diseases. However, when performed manually by nephropathologists using electron microscopy (EM) images, these assessments are hindered by the lack of universally standardized guidelines, leading to technical challenges. We have developed a novel deep learning (DL)-based pipeline which has the potential to reduce human error and enhance the consistency and efficiency of GBMs and %PFPE quantifications.</div></div><div><h3>Methods</h3><div>This study utilized 196 EM images from kidney biopsies (representing 21 different kidney diseases from 83 subjects) which were manually annotated by consensus of 3 nephrologists and 2 nephropathologist providing ground truth (GT) masks of GBMs, podocytes, red blood cells and other glomerular ultrastructures. Of these, 165 images were used to develop two DL models (DeepLabV3+ and U-Net architectures) for EM image segmentation. Subsequently, the models were evaluated on the remaining 31 images and compared for segmentation accuracy, and the predicted GBM and podocyte masks were analyzed by algorithms in the pipeline which automatically measured the corrected harmonic mean of GBM thickness (cmGBM) and estimated the %PFPE. The automated measurements were statistically compared to the corresponding cmGBM measured and %PFPE estimated using the consensus GBM and podocyte GT masks. The goal was to identify differences between measurements provided by these three methods. Statistical evaluations were carried out using the intraclass correlation coefficient (ICC), and the Bland-Altman plots estimating the bias and limits of agreement (LoAs) between the GT and DL mask-based measurements.</div></div><div><h3>Results</h3><div>In the 31 test set images, the DeepLabV3+ model achieved a global accuracy (gACC) of 92.8 % and a weighted intersection over union (wIoU) of 0.869, outperforming the U-Net model, which recorded a gACC of 88.9 % and a wIoU of 0.800. For GBM thickness measurements, the cmGBM derived from DeepLabV3+ masks exhibited excellent agreement with GT-masks based measurements (ICC = 0.991, <em>p</em> < 0.001), whereas the U-Net model showed good agreement (ICC = 0.881, <em>p</em> < 0.001). The %PFPE estimates obtained using the DL-generated podocyte masks were highly consistent with those based on GT, with ICC values of 0.926 and 0.928 for DeepLabV3+ and U-Net, respectively. The Bland-Altman plots revealed a positive bias in the cmGBM and %PFPE obtained from the masks generated by the DeepLabV3+ model, and negative bias in the cmGBM and %PFPE obtained from the masks generated by the U-Net model. 
However, the DeepLabV3+ masks provided narrower LoA ranges than the U-Net masks for measuring cmGBM.</div></div><div><h3>Conclusions</h3><div>This study highlights the potential of AI to add","PeriodicalId":10624,"journal":{"name":"Computer methods and programs in biomedicine","volume":"268 ","pages":"Article 108842"},"PeriodicalIF":4.9,"publicationDate":"2025-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143931695","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
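The agreement statistics reported in the Results follow standard definitions; below is a small NumPy helper for the Bland-Altman bias and 95 % limits of agreement between GT-based and DL-mask-based measurements (e.g. paired cmGBM values). The numbers in the example are synthetic, not the study's data.

```python
import numpy as np

def bland_altman(reference, measured):
    """Bland-Altman bias and 95% limits of agreement for paired measurements."""
    reference, measured = np.asarray(reference, float), np.asarray(measured, float)
    diff = measured - reference
    bias = diff.mean()
    sd = diff.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% limits of agreement
    return bias, loa

# synthetic paired GBM-thickness measurements [nm]: GT masks vs DL masks
rng = np.random.default_rng(1)
gt = rng.normal(350.0, 60.0, size=31)
dl = gt + rng.normal(5.0, 12.0, size=31)         # small positive bias, as reported for DeepLabV3+
bias, (lo, hi) = bland_altman(gt, dl)
print(round(bias, 1), round(lo, 1), round(hi, 1))
```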
RPF-Net: A multimodal model for the postoperative UISS risk stratification of non-metastatic ccRCC based on CT and whole-slide images
Jiahui Mao, Wuchao Li, Xinhuan Sun, Bangkang Fu, Junjie He, Chongzhe Yan, Jianguo Zhu, Zhuxue Zhang, Jiahui Mao, Zhangxin Hong, Qi Tang, Zhen Liu, Pinhao Li, Yan Zhang, Rongpin Wang
Computer Methods and Programs in Biomedicine, vol. 268, article 108836 (published 2025-05-06). DOI: 10.1016/j.cmpb.2025.108836

Background and Objectives: Postoperative non-metastatic clear cell renal cell carcinoma (nccRCC) patients face the risk of tumor recurrence and metastasis. However, prognosis assessment for nccRCC remains time-consuming and subjective. In the current diagnostic landscape, computed tomography (CT) images provide macro-scale anatomical information, and whole-slide images (WSIs) offer micro-scale details that are inaccessible to CT imaging. To address this gap, the study proposes a multimodal approach that leverages both CT and WSI data to develop an automated model for postoperative risk stratification in nccRCC.

Methods: This study proposes a multimodal model, the Radiology-Pathology Fusion Network (RPF-Net), which employs self-attention, graph-attention, and dynamic attention fusion mechanisms to integrate CT images and WSIs and to classify nccRCC patients into low-risk and intermediate-high-risk groups per the University of California, Los Angeles, Integrated Staging System (UISS) criteria. The proposed model comprises three steps. First, ResNet-50 and 3D ResNet-50 are used as feature extractors to extract representative feature maps from the WSIs and CT images, respectively. Second, a dual-branch module is designed to extract global and local features of the WSIs. Finally, a multilayer dynamic attention fusion (MDAF) module is developed to facilitate cross-modal feature interaction and predict the risk stratification results.

Results: The area under the curve (AUC), accuracy, precision, and F1 score of the RPF-Net on the internal validation set were 0.949 ± 0.013, 0.894 ± 0.019, 0.895 ± 0.020, and 0.894 ± 0.019, respectively. Furthermore, the RPF-Net showed robust generalization, achieving an AUC of 0.901 on the external validation set and 0.924 on the public dataset.

Conclusions: The RPF-Net models the diagnostic process over multimodal data and shows strong generalization and excellent performance. It may serve as a tool to facilitate clinical risk stratification and management of postoperative nccRCC patients.