Computerized Medical Imaging and Graphics: Latest Articles

Establishment of an intelligent analysis system for clinical image features of melanonychia based on deep learning image segmentation
IF 5.4 · Q2 (Medicine)
Computerized Medical Imaging and Graphics, Vol. 123, Article 102543 · Pub Date: 2025-05-06 · DOI: 10.1016/j.compmedimag.2025.102543
WengIoi Mio, Ruiyue Chen, Jiayan Lv, Sien Mai, Yanqing Chen, Mengwen He, Xin Zhang, Han Ma

Melanonychia, a condition that can be indicative of malignant melanoma, presents a significant challenge for early diagnosis because traditional diagnostic methods such as nail biopsy and dermatoscopic imaging are invasive or equipment-dependent. This study introduces a non-invasive intelligent analysis and follow-up system for melanonychia based on smartphone imagery, harnessing deep learning to facilitate early detection and monitoring. Through a cross-sectional study, the research group developed a comprehensive nail image dataset and a two-stage model comprising a YOLOv8-based nail detection system and a UNet-based image segmentation system. The integrated YOLOv8 and UNet model achieved high accuracy and reliability in detecting and segmenting melanonychia lesions, with performance metrics such as F1, Dice, specificity, and sensitivity significantly outperforming traditional methods and closely aligning with dermatoscopic assessments. This AI-based system offers a user-friendly, accessible tool for both clinicians and patients, enhances the ability to diagnose and monitor melanonychia, and holds the potential to improve early detection and treatment outcomes.

Cited: 0
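The evaluation metrics the abstract names (Dice, sensitivity, specificity) have standard definitions computable directly from binary masks. A minimal numpy sketch of those generic formulas (not the authors' evaluation code):

```python
import numpy as np

def segmentation_metrics(pred, target):
    """Dice, sensitivity, and specificity for binary lesion masks.

    pred, target: numpy arrays of the same shape, True/1 = lesion pixel.
    Standard textbook definitions, shown for illustration only.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    tp = np.logical_and(pred, target).sum()
    fp = np.logical_and(pred, ~target).sum()
    fn = np.logical_and(~pred, target).sum()
    tn = np.logical_and(~pred, ~target).sum()
    dice = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 1.0
    sensitivity = tp / (tp + fn) if (tp + fn) else 1.0
    specificity = tn / (tn + fp) if (tn + fp) else 1.0
    return dice, sensitivity, specificity
```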
A diffusion-stimulated CT-US registration model with self-supervised learning and synthetic-to-real domain adaptation
IF 5.4 · Q2 (Medicine)
Computerized Medical Imaging and Graphics, Vol. 123, Article 102562 · Pub Date: 2025-05-06 · DOI: 10.1016/j.compmedimag.2025.102562
Shangxuan Li, Biao Jia, Weiming Huang, Xiaobo Zhang, Wu Zhou, Cheng Wang, Gaojun Teng

In abdominal interventional procedures, achieving precise registration of 2D ultrasound (US) frames with 3D computed tomography (CT) scans presents a significant challenge. Traditional tracking methods often rely on high-precision sensors, which can be prohibitively expensive, and the clinical need for real-time registration with a broad capture range frequently exceeds the performance of standard image-based optimization techniques. Current automatic registration methods based on deep learning either depend heavily on manual annotations for training or struggle to bridge the gap between imaging domains. To address these challenges, we propose a novel diffusion-stimulated CT-US registration model. The model harnesses the physical diffusion properties of US to generate synthetic US images from preoperative CT data, and a synthetic-to-real domain adaptation strategy using a diffusion model mitigates the discrepancies between real and synthetic US images. A dual-stream self-supervised regression neural network, trained on these synthetic images, is then used to estimate the pose within the CT space. The approach was validated using US and CT scans of a dual-modality human abdominal phantom. The experiments confirm that our method can initialize the US image pose within an acceptable error range and subsequently refine it to a precise alignment, enabling real-time, tracker-independent, and robust rigid registration of CT and US images.

Cited: 0
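The paper's regression network ultimately produces a rigid (rotation + translation) pose. For intuition, the classical closed-form rigid fit from point correspondences (the Kabsch/Procrustes method) is sketched below; this is a standard stand-in to illustrate rigid alignment, not the authors' learned estimator:

```python
import numpy as np

def estimate_rigid_2d(src, dst):
    """Least-squares rigid alignment of 2D point sets (Kabsch method).

    src, dst: (N, 2) arrays of corresponding points, N >= 2 and
    non-degenerate. Returns R (2x2 rotation) and t (2,) such that
    dst ~= src @ R.T + t.
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                       # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflection
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```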
Enhanced glaucoma classification through advanced segmentation by integrating cup-to-disc ratio and neuro-retinal rim features
IF 5.4 · Q2 (Medicine)
Computerized Medical Imaging and Graphics, Vol. 123, Article 102559 · Pub Date: 2025-04-28 · DOI: 10.1016/j.compmedimag.2025.102559
Rabia Pannu, Muhammad Zubair, Muhammad Owais, Shoaib Hassan, Muhammad Umair, Syed Muhammad Usman, Mousa Ahmed Albashrawi, Irfan Hussain

Glaucoma is a progressive eye condition caused by high intraocular fluid pressure that damages the optic nerve, leading to gradual, irreversible vision loss, often without noticeable symptoms. Subtle signs such as mild eye redness, slightly blurred vision, and eye pain may go unnoticed, earning the disease the nickname "silent thief of sight." Its prevalence is rising with an aging population and increasing life expectancy. Most computer-aided diagnosis (CAD) systems rely on the cup-to-disc ratio (CDR) for glaucoma diagnosis. This study introduces a novel approach that integrates CDR with the neuro-retinal rim ratio (NRR), which quantifies rim thickness within the optic disc (OD). NRR enhances diagnostic accuracy by capturing additional optic nerve head changes, such as rim thinning and tissue loss, that are overlooked when CDR is used alone. A modified ResUNet architecture, combining residual learning with U-Net to capture spatial context for semantic segmentation, is used for OD and optic cup (OC) segmentation. For OC segmentation, the model achieved Dice coefficient (DC) scores of 0.942 and 0.872 and Intersection over Union (IoU) values of 0.891 and 0.773 on DRISHTI-GS and RIM-ONE, respectively. For OD segmentation, it achieved DC scores of 0.972 and 0.950 and IoU values of 0.945 and 0.940 on DRISHTI-GS and RIM-ONE, respectively. External evaluation on ORIGA and REFUGE confirmed the model's robustness and generalizability. CDR and NRR were calculated from the segmentation masks and used to train an SVM with a radial basis function kernel, classifying eyes as healthy or glaucomatous. The model achieved accuracies of 0.969 on DRISHTI-GS and 0.977 on RIM-ONE.

Cited: 0
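Deriving CDR and NRR features from the segmentation masks can be sketched as below. The exact conventions are assumptions for illustration: vertical-extent CDR and an area-based rim ratio are common choices, but the paper may define these differently.

```python
import numpy as np

def cdr_and_nrr(disc_mask, cup_mask):
    """Vertical cup-to-disc ratio and area-based neuro-retinal rim
    ratio from binary OD/OC masks (one plausible convention, not
    necessarily the paper's exact definitions).
    """
    disc = disc_mask.astype(bool)
    cup = cup_mask.astype(bool) & disc        # cup lies inside the disc
    rows_disc = np.where(disc.any(axis=1))[0]
    rows_cup = np.where(cup.any(axis=1))[0]
    if rows_cup.size == 0:                    # no cup detected
        return 0.0, 1.0
    vcdr = (rows_cup[-1] - rows_cup[0] + 1) / (rows_disc[-1] - rows_disc[0] + 1)
    nrr = (disc.sum() - cup.sum()) / disc.sum()   # rim fraction of the disc
    return float(vcdr), float(nrr)
```

The two-element feature vector [vcdr, nrr] per eye would then feed a classifier such as an RBF-kernel SVM, as the abstract describes.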
A novel diagnosis method utilizing MDBO-SVM and imaging genetics for Alzheimer's disease
IF 5.4 · Q2 (Medicine)
Computerized Medical Imaging and Graphics, Vol. 123, Article 102542 · Pub Date: 2025-04-27 · DOI: 10.1016/j.compmedimag.2025.102542
Yu Xin, Jinhua Sheng, Qiao Zhang, Yan Song, Luyun Wang, Ze Yang

Alzheimer's disease (AD) is the most common neurodegenerative disorder, yet its underlying mechanisms remain elusive. Early and accurate diagnosis is crucial for timely intervention and disease management. In this paper, a multi-strategy improved dung beetle optimizer (MDBO) is proposed to establish a new framework for AD diagnosis. The algorithm is distinctive in integrating the osprey optimization algorithm, Lévy flight, and an adaptive t-distribution, a combination that endows MDBO with superior global search capability and the ability to escape local optima. We also present a novel fitness function for integrating imaging genetics data. In experiments, MDBO demonstrated outstanding performance on the CEC2017 benchmark functions, proving its effectiveness on optimization problems. It was further used to classify individuals as AD, mild cognitive impairment (MCI), or normal controls (CN) using limited features. In the three-way classification of CN, MCI, and AD, the algorithm achieved an average accuracy of 81.7% and a best accuracy of 92%. Overall, the proposed MDBO algorithm provides a more comprehensive and efficient diagnostic tool, offering new possibilities for early intervention and disease progression control.

Cited: 0
ICA-SAMv7: Internal carotid artery segmentation with coarse to fine network
IF 5.4 · Q2 (Medicine)
Computerized Medical Imaging and Graphics, Vol. 123, Article 102555 · Pub Date: 2025-04-25 · DOI: 10.1016/j.compmedimag.2025.102555
Xiaotian Yan, Yuting Guo, Ziyi Pei, Xinyu Zhang, Jinghao Li, Zitao Zhou, Lifang Liang, Shuai Li, Peng Lun, Aimin Hao

Internal carotid artery (ICA) stenosis is a life-threatening occult disease. Using computed tomography angiography (CTA) to examine vascular lesions such as calcified and non-calcified plaques in carotid artery stenosis is a necessary clinical step in formulating the correct treatment plan. The Segment Anything Model (SAM) has shown promising performance in image segmentation tasks, but it performs poorly on carotid artery segmentation: the small size of calcifications and the overlap between the lumen and calcification lead to mislabeling and boundary fragmentation, as well as high training costs. To address these problems, we propose a two-stage carotid artery lesion segmentation method called ICA-SAMv7, which performs coarse and fine segmentation based on the YOLOv7 and SAM models. In the first stage (ICA-YOLOv7), we use YOLOv7 for coarse vessel recognition, introducing connectivity enhancement to improve accuracy and achieve precise localization of small carotid artery targets. In the second stage (ICA-SAM), we enhance SAM through data augmentation and an efficient parameter fine-tuning strategy, improving the segmentation accuracy of fine-grained vessel lesions while saving training costs. Ultimately, the accuracy of lesion segmentation under the SAM model increased from the original 48.62% to 83.69%. Extensive comparative experiments demonstrate the outstanding performance of our algorithm. Our code can be found at https://github.com/BessiePei/ICA-SAMv7.

Cited: 0
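The "connectivity enhancement" in the coarse stage suggests filtering detections by connected components, since real vessels form contiguous regions while spurious responses are small and isolated. A plain flood-fill sketch of that idea (the authors' exact rule is not specified, so the 4-connectivity and size threshold here are assumptions):

```python
from collections import deque
import numpy as np

def keep_large_components(mask, min_size):
    """Remove 4-connected components smaller than min_size pixels
    from a binary mask. Simple BFS flood fill, for illustration.
    """
    mask = mask.astype(bool)
    out = np.zeros_like(mask)
    seen = np.zeros_like(mask)
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                comp = [(i, j)]               # collect one component
                seen[i, j] = True
                q = deque([(i, j)])
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            comp.append((ny, nx))
                            q.append((ny, nx))
                if len(comp) >= min_size:     # keep only large regions
                    for y, x in comp:
                        out[y, x] = True
    return out
```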
Feasibility of ultrafast DCE-MRI for identifying benign and malignant breast lesions
IF 5.4 · Q2 (Medicine)
Computerized Medical Imaging and Graphics, Vol. 123, Article 102561 · Pub Date: 2025-04-25 · DOI: 10.1016/j.compmedimag.2025.102561
Junhui Huang, Yongsheng Ao, Lan Mu, Jierui Zhao, Hongliang Chen, Long Yang, Bingyu Yao, Shuheng Zhang, Shimin Yang, Greta S.P. Mok, Ke Zhang, Zhanli Hu, Ye Li, Dong Liang, Xin Liu, Hairong Zheng, Lihua Qiu, Na Zhang

Objectives: This study investigates whether ultrafast DCE-MRI acquired immediately after contrast injection can substitute for conventional DCE-MRI in diagnosing benign and malignant breast lesions.

Methods: A total of 86 female patients were included in this prospective study. Each patient underwent both ultrafast and conventional DCE-MRI before surgery. The Mann-Whitney U test was used to check for significant differences in DCE-MRI parameters between benign and malignant lesions (p < 0.05) for both protocols. The AUC of the ROC curve was used to assess each parameter's diagnostic performance, with critical values, sensitivity, and specificity determined at the maximum Youden index. The DeLong test and a support vector machine (SVM) were also used to compare the two protocols' performance in distinguishing benign from malignant lesions.

Results: A total of 99 lesion areas (21 benign and 78 malignant) were found in the 86 patients. With conventional DCE-MRI, only two semiquantitative parameters (wash-out and SER, p < 0.05) distinguished benign from malignant lesions, whereas with ultrafast DCE-MRI every semiquantitative parameter except clearance did so; ultrafast DCE-MRI thus provides more discriminative semiquantitative parameters than the conventional protocol. The ultrafast DCE-MRI parameters also achieved a greater AUC (0.8626) than the conventional parameters (0.7552) for distinguishing benign from malignant lesions.

Conclusions: Ultrafast DCE-MRI is effective in identifying benign and malignant breast lesions at the early stage of contrast injection; it is therefore feasible to use ultrafast DCE-MRI instead of conventional DCE-MRI to diagnose benign and malignant breast lesions.

Advances in knowledge: We evaluated the quantitative parameters of ultrafast DCE-MRI for distinguishing benign from malignant breast lesions, and used an SVM to assess the discrimination performance of both conventional and ultrafast DCE-MRI.

Cited: 0
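Choosing each parameter's critical value at the maximum Youden index, as the Methods describe, means scanning candidate cutoffs and maximizing sensitivity + specificity - 1. A generic implementation (not the study's statistics code):

```python
import numpy as np

def youden_threshold(scores, labels):
    """Cutoff maximizing the Youden index J = sensitivity + specificity - 1.

    scores: 1-D array of parameter values, higher = more suspicious.
    labels: 1-D array, 0 = benign, 1 = malignant.
    Returns (best_threshold, best_J).
    """
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, int)
    best_t, best_j = None, -np.inf
    for t in np.unique(scores):               # each observed value as cutoff
        pred = scores >= t
        sens = (pred & (labels == 1)).sum() / max((labels == 1).sum(), 1)
        spec = (~pred & (labels == 0)).sum() / max((labels == 0).sum(), 1)
        j = sens + spec - 1
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j
```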
Anatomy-guided slice-description interaction for multimodal brain disease diagnosis based on 3D image and radiological report
IF 5.4 · Q2 (Medicine)
Computerized Medical Imaging and Graphics, Vol. 123, Article 102556 · Pub Date: 2025-04-25 · DOI: 10.1016/j.compmedimag.2025.102556
Xin Gao, Meihui Zhang, Junjie Li, Shanbo Zhao, Zhizheng Zhuo, Liying Qu, Jinyuan Weng, Li Chai, Yunyun Duan, Chuyang Ye, Yaou Liu

Accurate brain disease diagnosis based on radiological images is desired in clinical practice, as it can facilitate early intervention and reduce the risk of damage. However, existing unimodal image-based models struggle to process high-dimensional 3D brain imaging data effectively. Multimodal diagnosis approaches based on medical images and the corresponding radiological reports have made promising progress with the development of vision-language models, but most handle 2D images and cannot be directly applied to brain disease diagnosis, which uses 3D images. In this work we therefore develop a multimodal brain disease diagnosis model that takes 3D brain images and their radiological reports as input. Motivated by the fact that radiologists scroll through image slices and write important findings into the report accordingly, we propose a slice-description cross-modality interaction mechanism to realize fine-grained multimodal data interaction. Moreover, since previous medical research has demonstrated a potential correlation between the anatomical location of anomalies and the diagnosis, we further exploit brain anatomical prior knowledge to improve the multimodal interaction: guided by the report description, the prior knowledge filters the image information by suppressing irrelevant regions and enhancing relevant slices. Our method was validated on two brain disease diagnosis tasks. The results indicate that our model outperforms competing unimodal and multimodal methods, yielding average accuracy improvements of 15.87% and 7.39% over the image-based and multimodal competitors, respectively.

Cited: 0
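The slice-description interaction with anatomical suppression can be caricatured as cross-attention from report-description tokens to slice features, with a relevance mask zeroing out irrelevant slices before the softmax. This is a heavily simplified numpy sketch of the general idea; the paper's actual module and weighting scheme are more involved.

```python
import numpy as np

def masked_cross_attention(desc_q, slice_k, slice_v, relevance):
    """Cross-attention from description queries to slice keys/values
    with an anatomical-prior relevance mask.

    desc_q: (Nd, d) description embeddings; slice_k, slice_v: (Ns, d)
    slice embeddings; relevance: (Ns,) with 0 = suppress slice.
    Returns attended features (Nd, d) and attention weights (Nd, Ns).
    """
    d = desc_q.shape[1]
    logits = desc_q @ slice_k.T / np.sqrt(d)                  # (Nd, Ns)
    logits = np.where(relevance[None, :] > 0, logits, -1e9)   # suppress
    w = np.exp(logits - logits.max(axis=1, keepdims=True))    # stable softmax
    w = w / w.sum(axis=1, keepdims=True)
    return w @ slice_v, w
```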
Causal recurrent intervention for cross-modal cardiac image segmentation
IF 5.4 · Q2 (Medicine)
Computerized Medical Imaging and Graphics, Vol. 123, Article 102549 · Pub Date: 2025-04-21 · DOI: 10.1016/j.compmedimag.2025.102549
Qixin Lin, Saidi Guo, Heye Zhang, Zhifan Gao

Cross-modal cardiac image segmentation is essential for cardiac disease analysis. In diagnosis, it enables clinicians to obtain more precise information about cardiac structure or function by leveraging specific imaging modalities; cardiovascular pathologies such as myocardial infarction and congenital heart defects require precise cross-modal characterization to guide clinical decisions. The growing adoption of cross-modal segmentation in clinical research underscores its technical value, yet annotating multi-slice cardiac images is time-consuming and labor-intensive, making it difficult to meet clinical and deep learning demands. To reduce the need for labels, cross-modal approaches can leverage general knowledge from multiple modalities, but implementing such a method remains challenging due to cross-domain confounding. This confounding arises from the intricate effects of modality and view alterations between images, including inconsistent high-dimensional features; it complicates the causality between the observation (image) and the prediction (label), thereby weakening the domain-invariant representation. Existing disentanglement methods struggle with this confounding because they depict the relationships between latent factors insufficiently. This paper proposes the causal recurrent intervention (CRI) method to overcome this challenge. CRI establishes a structural causal model that allows individual domains to maintain causal consistency through interventions, and integrates diverse high-dimensional variations into a single causal relationship by embedding image slices into a sequence. It then distinguishes stable from dynamic factors in the sequence, separates the stable factor into modal and view factors, establishes causal connections between them, and finally learns the dynamic factor and the view factor from the observation to obtain the label. Experimental results on cross-modal cardiac images from 1697 examples show that CRI delivers promising cross-modal cardiac image segmentation performance.

Cited: 0
Two-stage color fundus image registration via Keypoint Refinement and Confidence-Guided Estimation
IF 5.4 · Q2 (Medicine)
Computerized Medical Imaging and Graphics, Vol. 123, Article 102554 · Pub Date: 2025-04-19 · DOI: 10.1016/j.compmedimag.2025.102554
Feihong Yan, Yubin Xu, Yiran Kong, Weihang Zhang, Huiqi Li

Color fundus images are widely used for diagnosing diseases such as glaucoma, cataracts, and diabetic retinopathy. Registration of color fundus images is crucial for assessing changes in fundus appearance to determine disease progression. In this paper, a novel two-stage framework is proposed for end-to-end color fundus image registration without requiring any training or annotation. In the first stage, pre-trained SuperPoint and SuperGlue networks are used to obtain matching pairs, which are then refined based on their slopes. In the second stage, Confidence-Guided Transformation Matrix Estimation (CGTME) is proposed to estimate the final perspective transformation matrix. Specifically, a variant of the 4-point algorithm, the CG 4-point algorithm, adjusts the contribution of matched points to the estimated perspective transformation matrix according to SuperGlue's confidence, and the matched points with high confidence are selected for the final estimation. Experimental results show that the proposed algorithm improves registration performance effectively.

Cited: 0
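The first-stage slope refinement relies on a simple geometric fact: when two roughly aligned images sit side by side, lines connecting correct matches have nearly identical slopes, so matches whose slope deviates from the consensus are outliers. A sketch of that filter (the median-based consensus and tolerance value are assumptions; the paper does not specify its exact rule):

```python
import numpy as np

def refine_by_slope(pts_a, pts_b, tol=0.05):
    """Keep keypoint matches whose connecting-line slope is close to
    the median slope across all matches.

    pts_a, pts_b: (N, 2) arrays of matched (x, y) coordinates in the
    two images; image B is treated as placed to the right of image A.
    Returns a boolean keep-mask of length N.
    """
    offset = pts_a[:, 0].max() + 1.0          # shift image B rightward
    dx = (pts_b[:, 0] + offset) - pts_a[:, 0]
    dy = pts_b[:, 1] - pts_a[:, 1]
    slopes = dy / dx
    return np.abs(slopes - np.median(slopes)) <= tol
```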
Tailored self-supervised pretraining improves brain MRI diagnostic models
IF 5.4 · Q2 (Medicine)
Computerized Medical Imaging and Graphics, Vol. 123, Article 102560 · Pub Date: 2025-04-17 · DOI: 10.1016/j.compmedimag.2025.102560
Xinhao Huang, Zihao Wang, Weichen Zhou, Kexin Yang, Kaihua Wen, Haiguang Liu, Shoujin Huang, Mengye Lyu

Self-supervised learning has shown potential for enhancing deep learning methods, yet its application to brain magnetic resonance imaging (MRI) analysis remains underexplored. This study leverages large-scale, unlabeled public brain MRI datasets to improve the performance of deep learning models on various downstream tasks for clinical decision support systems. To enhance training efficiency, data filtering methods based on image entropy and slice position were developed, condensing a combined dataset of approximately 2 million images from fastMRI-brain, OASIS-3, IXI, and BraTS21 into a focused set of 250K images enriched with brain features. The Momentum Contrast (MoCo) v3 algorithm was then employed to learn these image features, yielding robustly pretrained models tailored to brain MRI. The pretrained models were evaluated on tumor classification, lesion detection, hippocampal segmentation, and image reconstruction tasks. The brain MRI-oriented pretraining outperformed both ImageNet pretraining and pretraining on larger multi-organ, multi-modality medical datasets, achieving a ~2.8% increase in 4-class tumor classification accuracy, a ~0.9% improvement in tumor detection mean average precision, a ~3.6% gain in adult hippocampal segmentation Dice score, and a ~0.1 PSNR improvement in reconstruction at 2-fold acceleration. This underscores the potential of self-supervised learning for brain MRI using large-scale, tailored datasets derived from public sources.

Cited: 0
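The entropy-based filtering step works because near-empty background slices have low-entropy intensity histograms while anatomy-rich slices have high-entropy ones. A minimal sketch of the per-slice score (the 64-bin histogram and any entropy cutoff are assumptions; the paper does not state its exact parameters):

```python
import numpy as np

def slice_entropy(img, bins=64):
    """Shannon entropy (bits) of a slice's intensity histogram.

    Low values indicate near-constant (background) slices that a
    filtering pipeline could discard before pretraining.
    """
    hist, _ = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                              # drop empty bins: 0*log 0 = 0
    return float(-(p * np.log2(p)).sum())
```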