Two-stage color fundus image registration via Keypoint Refinement and Confidence-Guided Estimation
Feihong Yan, Yubin Xu, Yiran Kong, Weihang Zhang, Huiqi Li
Computerized Medical Imaging and Graphics, vol. 123, Article 102554 (published 2025-04-19). DOI: 10.1016/j.compmedimag.2025.102554

Abstract: Color fundus images are widely used for diagnosing diseases such as glaucoma, cataract, and diabetic retinopathy. Registration of color fundus images is crucial for assessing changes in fundus appearance to determine disease progression. In this paper, a novel two-stage framework is proposed for end-to-end color fundus image registration that requires no training or annotation. In the first stage, pre-trained SuperPoint and SuperGlue networks are used to obtain matched pairs, which are then refined based on their slopes. In the second stage, Confidence-Guided Transformation Matrix Estimation (CGTME) is proposed to estimate the final perspective transformation matrix. Specifically, a variant of the 4-point algorithm, the confidence-guided (CG) 4-point algorithm, is designed to adjust the contribution of matched points to the perspective transformation estimate based on SuperGlue confidence, and the matched points with high confidence are then selected for the final estimation of the transformation matrix. Experimental results show that the proposed algorithm effectively improves registration performance.
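
The confidence-guided estimation step maps naturally onto a weighted direct linear transform (DLT). Below is a minimal sketch of that idea: each SuperGlue match contributes two DLT equations scaled by its confidence, and low-confidence matches are dropped first. The weighting scheme and the 0.5 threshold are illustrative assumptions, not the authors' exact CG 4-point algorithm.

```python
# Hedged sketch: confidence-weighted homography estimation (assumed details).
import numpy as np

def weighted_homography(src, dst, conf, conf_thresh=0.5):
    """src, dst: (N, 2) matched keypoints; conf: (N,) match confidences.
    Requires at least 4 surviving matches."""
    keep = conf >= conf_thresh                 # keep high-confidence matches
    src, dst, conf = src[keep], dst[keep], conf[keep]

    rows = []
    for (x, y), (u, v), w in zip(src, dst, conf):
        # Two DLT rows per correspondence, scaled so that uncertain matches
        # pull less on the least-squares solution.
        rows.append(w * np.array([-x, -y, -1, 0, 0, 0, u * x, u * y, u]))
        rows.append(w * np.array([0, 0, 0, -x, -y, -1, v * x, v * y, v]))
    A = np.stack(rows)

    # The homography is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]
```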

Tailored self-supervised pretraining improves brain MRI diagnostic models
Xinhao Huang, Zihao Wang, Weichen Zhou, Kexin Yang, Kaihua Wen, Haiguang Liu, Shoujin Huang, Mengye Lyu
Computerized Medical Imaging and Graphics, vol. 123, Article 102560 (published 2025-04-17). DOI: 10.1016/j.compmedimag.2025.102560

Abstract: Self-supervised learning has shown potential for enhancing deep learning methods, yet its application to brain magnetic resonance imaging (MRI) analysis remains underexplored. This study leverages large-scale, unlabeled public brain MRI datasets to improve the performance of deep learning models on downstream tasks for the development of clinical decision support systems. To enhance training efficiency, data filtering methods based on image entropy and slice position were developed, condensing a combined dataset of approximately 2 million images from fastMRI-brain, OASIS-3, IXI, and BraTS21 into a focused set of 250,000 images enriched with brain features. The Momentum Contrast (MoCo) v3 algorithm was then employed to learn these image features, resulting in robustly pretrained models specifically tailored to brain MRI. The pretrained models were subsequently evaluated on tumor classification, lesion detection, hippocampal segmentation, and image reconstruction tasks. The results demonstrate that brain MRI-oriented pretraining outperformed both ImageNet pretraining and pretraining on larger multi-organ, multi-modality medical datasets, achieving a ∼2.8% increase in 4-class tumor classification accuracy, a ∼0.9% improvement in tumor detection mean average precision, a ∼3.6% gain in adult hippocampal segmentation Dice score, and a ∼0.1 PSNR improvement in reconstruction at 2-fold acceleration. This study underscores the potential of self-supervised learning for brain MRI using large-scale, tailored datasets derived from public sources.
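
The entropy-based filtering is straightforward to sketch: slices whose intensity histograms carry little information (air, background, extreme slice positions) are discarded before pretraining. The bin count and threshold below are illustrative assumptions; the paper's exact criteria, including its slice-position rules, are not reproduced here.

```python
# Hedged sketch: entropy-based slice filtering (assumed bins and threshold).
import numpy as np

def slice_entropy(img, bins=64):
    """Shannon entropy (bits) of a 2D slice's intensity histogram."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]                     # drop empty bins before taking logs
    return float(-(p * np.log2(p)).sum())

def filter_slices(volume, entropy_thresh=3.0):
    """Keep axial slices whose entropy suggests substantial brain content."""
    return [s for s in volume if slice_entropy(s) >= entropy_thresh]
```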

A novel population-characteristic weighted sparse model for accurate respiratory motion prediction in CT-guided lung cancer interventions
Guo-Ren Xia, Tengfei Wang, Jun Xu, Xiaoyang Li, Hongzhi Wang, Stephen T.C. Wong, Hai Li
Computerized Medical Imaging and Graphics, vol. 123, Article 102557 (published 2025-04-17). DOI: 10.1016/j.compmedimag.2025.102557

Abstract: Accurate tracking of lung nodule movement is a critical challenge for image-guided interventions. Current approaches typically rely on respiratory motion modeling to optimize diagnosis and treatment. Population-based motion models predict lung movement in real time by extracting common features of lung motion from group-level imaging data, but they usually overlook individual differences. Conversely, patient-specific models require patient-specific four-dimensional computed tomography (4D CT), which increases radiation exposure. This study introduces a novel Population-Characteristic Weighted Sparse (PCWS) model that combines population-level motion characteristics with patient-specific data to accurately predict lung movement, eliminating the need for 4D CT acquisition. Sparse manifold clustering is employed to identify a subpopulation exhibiting motion patterns similar to those of the target patient; the patient's respiratory motion field is then approximated by a sparse linear combination of motion data from this subpopulation. Experimental results demonstrate that the PCWS model achieves an average lung estimation error of 0.20 ± 0.15 mm and outperforms three other advanced models in prediction accuracy, effectively combining the strengths of population and patient-specific models. To evaluate reproducibility, two additional datasets from different clinical centers were used; the results confirmed the model's accuracy and repeatability across various evaluation criteria. Future research will focus on applying the PCWS model to image-guided percutaneous lung biopsy and radiation therapy, aiming to enhance procedural precision and clinical outcomes.
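
The "sparse linear combination" step is essentially a sparse regression over a dictionary of subpopulation motion fields. A minimal sketch under assumed details follows: the solver (scikit-learn's Lasso), the alpha value, and the split into observed versus full measurements are illustrative choices, not the paper's implementation.

```python
# Hedged sketch: predict a patient's motion field as a sparse combination
# of subpopulation motion fields (solver and alpha are assumptions).
import numpy as np
from sklearn.linear_model import Lasso

def predict_motion(observed, dict_observed, dict_full, alpha=0.01):
    """
    observed      : (m,) partial motion measurements for the target patient.
    dict_observed : (m, k) the same measurements for k similar subjects.
    dict_full     : (d, k) full motion fields for those subjects.
    Returns the (d,) predicted full motion field for the target patient.
    """
    lasso = Lasso(alpha=alpha, fit_intercept=False)
    lasso.fit(dict_observed, observed)   # sparse weights over subjects
    return dict_full @ lasso.coef_       # combine their full fields
```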

Self-supervised network predicting neoadjuvant chemoradiotherapy response to locally advanced rectal cancer patients
Qian Chen, Jun Dang, Yuanyuan Wang, Longhao Li, Hongjian Gao, Qingshu Li, Tao Zhang, Xiangzhi Bai
Computerized Medical Imaging and Graphics, vol. 123, Article 102552 (published 2025-04-14). DOI: 10.1016/j.compmedimag.2025.102552

Abstract: Radiographic imaging is a non-invasive technique of considerable importance for evaluating tumor treatment response. However, redundancy in CT data and the scarcity of labeled data make it challenging to accurately assess the response of locally advanced rectal cancer (LARC) patients to neoadjuvant chemoradiotherapy (nCRT) using current imaging indicators. In this study, we propose a novel learning framework to automatically predict the response of LARC patients to nCRT. Specifically, we develop a deep learning network called the Expand Intensive Attention Network (EIA-Net), which enhances feature extraction through cascaded 3D convolutions and coordinate attention. Instance-oriented collaborative self-supervised learning (IOC-SSL) is proposed to leverage unlabeled data for training, reducing reliance on labeled data. On a dataset of 1,575 volumes, comprising a self-supervised subset of 1,394 volumes and a supervised subset of 195 volumes, the proposed method achieves an AUC of 0.8562. Survival analysis reveals that patients predicted as pathological complete response (pCR) by EIA-Net exhibit better overall survival (OS) than non-pCR LARC patients. A retrospective study demonstrates that imaging-based pCR prediction for patients with low rectal cancer can assist clinicians in deciding whether the Miles operation is necessary, thereby improving the likelihood of anal preservation, with an AUC of 0.8222. These results underscore the potential of the method to enhance clinical decision-making, offering a promising tool for personalized treatment and improved patient outcomes in LARC management.

Uncertainty-aware segmentation quality prediction via deep learning Bayesian Modeling: Comprehensive evaluation and interpretation on skin cancer and liver segmentation
Sikha O.K., Meritxell Riera-Marín, Adrian Galdran, Javier García López, Júlia Rodríguez-Comas, Gemma Piella, Miguel A. González Ballester
Computerized Medical Imaging and Graphics, vol. 123, Article 102547 (published 2025-04-13). DOI: 10.1016/j.compmedimag.2025.102547

Abstract: Image segmentation is a critical step in computational biomedical image analysis, typically evaluated using metrics like the Dice coefficient during training and validation. However, in clinical settings without manual annotations, assessing segmentation quality becomes challenging, and models lacking reliability indicators face adoption barriers. To address this gap, we propose a novel framework for predicting segmentation quality without requiring ground truth annotations during test time. Our approach introduces two complementary frameworks: one leveraging predicted segmentation and uncertainty maps, and another integrating the original input image, uncertainty maps, and predicted segmentation maps. We present Bayesian adaptations of two benchmark segmentation models, SwinUNet and Feature Pyramid Network with ResNet50, using Monte Carlo Dropout, Ensemble, and Test Time Augmentation to quantify uncertainty. We evaluate four uncertainty estimates (confidence map, entropy, mutual information, and expected pairwise Kullback–Leibler divergence) on 2D skin lesion and 3D liver segmentation datasets, analyzing their correlation with segmentation quality metrics. Our framework achieves an R² score of 93.25 and a Pearson correlation of 96.58 on the HAM10000 dataset, outperforming previous segmentation quality assessment methods. For 3D liver segmentation, Test Time Augmentation with entropy achieves an R² score of 85.03 and a Pearson correlation of 65.02, demonstrating cross-modality robustness. Additionally, we propose an aggregation strategy that combines multiple uncertainty estimates into a single score per image, offering a more robust and comprehensive assessment of segmentation quality than evaluating each measure independently. The proposed uncertainty-aware segmentation quality prediction network is interpreted using gradient-based methods such as Grad-CAM and feature embedding analysis through UMAP. These techniques provide insights into the model's behavior and reliability, helping to assess the impact of incorporating uncertainty into the segmentation quality prediction pipeline. The code is available at: https://github.com/sikha2552/Uncertainty-Aware-Segmentation-Quality-Prediction-Bayesian-Modeling-with-Comprehensive-Evaluation-.
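
The uncertainty maps at the heart of this pipeline are easy to illustrate for the Monte Carlo Dropout case: sample several stochastic forward passes, then derive a confidence map, an entropy map, and a mutual-information map from the samples. A minimal sketch for a binary segmentation network follows; the sample count and the use of sigmoid outputs are assumptions for illustration.

```python
# Hedged sketch: MC Dropout uncertainty maps for binary segmentation.
import torch

def mc_dropout_uncertainty(model, image, n_samples=20):
    model.eval()
    # Re-enable dropout layers only, keeping normalization in eval mode.
    for m in model.modules():
        if isinstance(m, (torch.nn.Dropout, torch.nn.Dropout2d, torch.nn.Dropout3d)):
            m.train()

    with torch.no_grad():
        probs = torch.stack([torch.sigmoid(model(image)) for _ in range(n_samples)])

    mean_p = probs.mean(0)                               # confidence map
    eps = 1e-8
    def bin_entropy(p):
        return -(p * (p + eps).log() + (1 - p) * (1 - p + eps).log())
    entropy = bin_entropy(mean_p)                        # total uncertainty
    mutual_info = entropy - bin_entropy(probs).mean(0)   # epistemic part
    return mean_p, entropy, mutual_info
```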

HALSR-Net: Improving CNN Segmentation of Cardiac Left Ventricle MRI with Hybrid Attention and Latent Space Reconstruction
Mohamed Fakhfakh, Laurent Sarry, Patrick Clarysse
Computerized Medical Imaging and Graphics, vol. 123, Article 102546 (published 2025-04-12). DOI: 10.1016/j.compmedimag.2025.102546

Abstract: Accurate cardiac MRI segmentation is vital for detailed cardiac analysis, yet the manual process is labor-intensive and prone to variability. Despite advancements in MRI technology, there remains a significant need for automated methods that can reliably and efficiently segment cardiac structures. This paper introduces HALSR-Net, a novel multi-level segmentation architecture designed to improve the accuracy and reproducibility of cardiac segmentation from cine-MRI acquisitions, focusing on the left ventricle (LV). The methodology comprises two main phases: first, extraction of the region of interest (ROI) using a regression model that accurately predicts the location of a bounding box around the LV; second, semantic segmentation using the HALSR-Net architecture. This architecture incorporates a Hybrid Attention Pooling Module (HAPM) that merges attention and pooling mechanisms to enhance feature extraction and capture contextual information. Additionally, a reconstruction module leverages latent space features to further improve segmentation accuracy. Experiments on an in-house clinical dataset and two public datasets (ACDC and LVQuan19) demonstrate that HALSR-Net outperforms state-of-the-art architectures, achieving up to 98% accuracy and F1-score for segmentation of the LV cavity and myocardium. The proposed approach effectively addresses the limitations of existing methods, offering a more accurate and robust solution for cardiac MRI segmentation, and is thus likely to improve cardiac function analysis and patient care.
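
The abstract does not spell out the internals of the Hybrid Attention Pooling Module, so the following is a purely speculative sketch of one way to "merge attention and pooling mechanisms": gate channels by global context before downsampling. Treat it as an illustration of the idea, not the authors' HAPM.

```python
# Speculative sketch only: a channel-attention gate followed by pooling.
import torch
import torch.nn as nn

class HybridAttentionPooling(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        # Squeeze-and-excitation-style gate over channels (assumed design).
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.pool = nn.AvgPool2d(2)

    def forward(self, x):
        x = x * self.channel_gate(x)   # re-weight channels by global context
        return self.pool(x)            # then downsample, keeping that context
```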

CASCADE-FSL: Few-shot learning for collateral evaluation in ischemic stroke
Mumu Aktar, Donatella Tampieri, Yiming Xiao, Hassan Rivaz, Marta Kersten-Oertel
Computerized Medical Imaging and Graphics, vol. 123, Article 102550 (published 2025-04-11). DOI: 10.1016/j.compmedimag.2025.102550

Abstract: Assessing collateral circulation is essential in determining the best treatment for ischemic stroke patients: good collaterals open up treatment options such as thrombectomy, whereas poor collaterals can adversely affect treatment by leading to excess bleeding and eventually death. To reduce inter- and intra-rater variability and save time in radiologist assessments, computer-aided methods, mainly based on deep neural networks, have gained popularity. The current literature demonstrates effectiveness when using balanced and extensive datasets in deep learning; however, such datasets are scarce for stroke, and the number of samples for poor collateral cases is often limited compared to those for good collaterals. We propose a novel approach called CASCADE-FSL to distinguish poor collaterals effectively. Using a small, unbalanced dataset, we employ a few-shot learning approach with a 2D ResNet-50 backbone, designating good and intermediate cases as two normal classes and identifying poor collaterals as anomalies relative to them. Our approach achieves an overall accuracy, sensitivity, and specificity of 0.88, 0.88, and 0.89, respectively, demonstrating its effectiveness in addressing the imbalanced dataset challenge and accurately identifying poor collateral circulation cases.
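
One way to realize "poor collaterals as anomalies relative to the normal classes" is prototype-distance scoring on ResNet-50 embeddings, sketched below. The prototype construction, distance metric, and any decision threshold are assumptions; the paper's few-shot training procedure is not reproduced here.

```python
# Hedged sketch: anomaly scoring against normal-class prototypes.
import torch
from torchvision.models import resnet50, ResNet50_Weights

backbone = resnet50(weights=ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # expose 2048-d pooled features
backbone.eval()

@torch.no_grad()
def embed(images):
    """images: (B, 3, H, W), already preprocessed for ResNet-50."""
    return backbone(images)

def anomaly_scores(test_imgs, good_imgs, intermediate_imgs):
    # One prototype per "normal" class: good and intermediate collaterals.
    protos = torch.stack([embed(good_imgs).mean(0),
                          embed(intermediate_imgs).mean(0)])
    z = embed(test_imgs)
    # Distance to the nearest normal prototype; large => likely poor.
    return torch.cdist(z, protos).min(dim=1).values
```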

A comparison of an integrated and image-only deep learning model for predicting the disappearance of indeterminate pulmonary nodules
Jingxuan Wang, Jiali Cai, Wei Tang, Ivan Dudurych, Marcel van Tuinen, Rozemarijn Vliegenthart, Peter van Ooijen
Computerized Medical Imaging and Graphics, vol. 123, Article 102553 (published 2025-04-11). DOI: 10.1016/j.compmedimag.2025.102553

Abstract:
Background: Indeterminate pulmonary nodules (IPNs) require follow-up CT to assess potential growth; however, benign nodules may disappear. Accurately predicting whether IPNs will resolve is a challenge for radiologists. We therefore aim to predict the disappearance of IPNs using deep learning (DL) methods.
Material and methods: This retrospective study utilized data from the Dutch-Belgian Randomized Lung Cancer Screening Trial (NELSON) and the Imaging in Lifelines (ImaLife) cohort. Participants underwent follow-up CT to determine the evolution of baseline IPNs. The NELSON data were used for model training; external validation was performed on ImaLife. We developed integrated DL-based models that incorporated CT images and demographic data (age, sex, smoking status, and pack-years). We compared the performance of the integrated methods with that of models limited to CT images only, calculating sensitivity, specificity, and area under the receiver operating characteristic curve (AUC). From a clinical perspective, ensuring high specificity is critical, as it minimizes false predictions of non-resolving nodules that should be monitored for evolution on follow-up CT. Feature importance was calculated using SHapley Additive exPlanations (SHAP) values.
Results: The training dataset included 840 IPNs (134 resolving) in 672 participants; the external validation dataset included 111 IPNs (46 resolving) in 65 participants. On the external validation set, the performance of the integrated model (sensitivity, 0.50; 95% CI, 0.35–0.65; specificity, 0.91; 95% CI, 0.80–0.96; AUC, 0.82; 95% CI, 0.74–0.90) was comparable to that of the model trained solely on CT images (sensitivity, 0.41; 95% CI, 0.27–0.57; specificity, 0.89; 95% CI, 0.78–0.95; AUC, 0.78; 95% CI, 0.69–0.86; P = 0.39). The top 10 most important features were all image-related.
Conclusion: Deep learning-based models can predict the disappearance of IPNs with high specificity. Integrated models using CT scans and clinical data performed comparably to those using only CT images.
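
A common way to build such an integrated model, and a plausible reading of the description above, is to concatenate a CNN embedding of the nodule patch with the four demographic covariates before a small classification head. The architecture below is an assumption for illustration, not the paper's exact design.

```python
# Hedged sketch: image + tabular fusion for nodule-resolution prediction.
import torch
import torch.nn as nn

class IntegratedNoduleModel(nn.Module):
    def __init__(self, image_encoder, img_dim=512, n_tabular=4):
        super().__init__()
        self.encoder = image_encoder          # any CNN mapping image -> (B, img_dim)
        self.head = nn.Sequential(
            nn.Linear(img_dim + n_tabular, 64),
            nn.ReLU(),
            nn.Linear(64, 1),                 # logit: resolving vs. persisting
        )

    def forward(self, image, tabular):
        # tabular: (B, 4) = age, sex, smoking status, pack-years.
        z = self.encoder(image)
        return self.head(torch.cat([z, tabular], dim=1))
```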

Semi-supervised temporal attention network for lung 4D CT ventilation estimation
Peng Xue, Jingyang Zhang, Lei Ma, Yixuan Li, Huizhong Ji, Tonglong Ren, Zhanming Hu, Meirong Ren, Zhili Zhang, Enqing Dong
Computerized Medical Imaging and Graphics, vol. 123, Article 102551 (published 2025-04-10). DOI: 10.1016/j.compmedimag.2025.102551

Abstract: Computed tomography (CT)-derived ventilation estimation, also known as CT ventilation imaging (CTVI), is emerging as a potentially crucial tool for designing functional avoidance radiotherapy treatment plans and evaluating therapy response. However, most conventional CTVI methods depend heavily on deformation fields from image registration to track volume variations, making them susceptible to registration errors and limiting estimation accuracy. In addition, existing deep learning-based CTVI methods typically require large amounts of labeled data and cannot fully exploit the temporal characteristics of 4D CT images. To address these issues, we propose a semi-supervised temporal attention (S²TA) network for lung 4D CT ventilation estimation. Specifically, the semi-supervised learning framework involves a teacher model that generates pseudo-labels from unlabeled 4D CT images to train a student model, which takes both labeled and unlabeled 4D CT images as input. The teacher model is updated as the moving average of the instantly trained student, preventing it from being abruptly impacted by incorrect pseudo-labels. Furthermore, to fully exploit the temporal information of 4D CT images, a temporal attention architecture is designed to effectively capture the temporal relationships across the multiple phases of a 4D CT image sequence. Extensive experiments on three publicly available thoracic 4D CT datasets show that the proposed method achieves higher estimation accuracy than state-of-the-art methods and could potentially be used for lung functional avoidance radiotherapy and treatment response modeling.
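
The teacher update described above ("the moving average of the instantly trained student") is the standard exponential-moving-average rule from mean-teacher training. A minimal sketch follows; the momentum value is an illustrative assumption.

```python
# Hedged sketch: EMA teacher update for mean-teacher-style training.
import torch

@torch.no_grad()
def ema_update(teacher, student, momentum=0.99):
    """Move each teacher weight a small step toward the student's weight,
    so noisy pseudo-label gradients cannot jolt the teacher abruptly."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(momentum).add_(s_param, alpha=1.0 - momentum)
```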

A software for quantitative measurement of vessel parameters in fundus images
Xiaolong Zhu, Wenjian Li, Weihang Zhang, Jing Liu, Yue Qi, Qiuju Deng, Huiqi Li
Computerized Medical Imaging and Graphics, vol. 123, Article 102548 (published 2025-04-09). DOI: 10.1016/j.compmedimag.2025.102548

Abstract: The retinal vasculature is a unique structure that allows non-invasive observation of the microcirculatory system. Its pathological features and abnormal structural alterations are associated with cardiovascular and systemic diseases; abnormalities in the caliber, histology, and geometry of retinal vessels are particularly indicative of such diseases. However, the complex distribution and subtle characteristics of the vasculature have hindered the measurement of vessel parameters. To this end, we designed new software, the Retinal Vessel Parameters Quantitative Measurement Software (RVPQMS), to quantitatively measure the features of retinal vessels. RVPQMS provides vessel segmentation, landmark localization, vessel tracking, vessel identification, and parameter measurement, enabling comprehensive measurement of vessel parameters in both a standard zone and the whole image. To ensure accuracy, the algorithms integrated in the software were validated on both private and public datasets, and experimental results demonstrate excellent performance in vessel segmentation, tracking, and identification. RVPQMS offers thorough, quantitative measurement of retinal vessel parameters, facilitating the study of vessel features in cardiovascular and systemic disease.