Title: ACOUSLIC-AI challenge report: Fetal abdominal circumference measurement on blind-sweep ultrasound data from low-income countries
Authors: M. Sofia Sappia, Chris L. de Korte, Bram van Ginneken, Dean Ninalga, Satoshi Kondo, Satoshi Kasai, Kousuke Hirasawa, Tanya Akumu, Carlos Martín-Isla, Karim Lekadir, Victor M. Campello, Jorge Fabila, Anette Beverdam, Jeroen van Dillen, Chase Neff, Keelin Murphy
Medical Image Analysis 105 (2025), Article 103640; published 2025-06-10. DOI: 10.1016/j.media.2025.103640

Abstract: Fetal growth restriction, affecting up to 10% of pregnancies, is a critical factor contributing to perinatal mortality and morbidity. Ultrasound measurement of the fetal abdominal circumference (AC) is a key aspect of monitoring fetal growth. However, routine biometric obstetric ultrasound is limited in low-resource settings due to the high cost of sonography equipment and the scarcity of trained sonographers. To address this issue, we organized the ACOUSLIC-AI (Abdominal Circumference Operator-agnostic UltraSound measurement in Low-Income Countries) challenge to investigate the feasibility of automatically estimating fetal AC from blind-sweep ultrasound scans acquired by novice operators using low-cost devices. Training data, collected from three Public Health Units (PHUs) in Sierra Leone, are made publicly available. Private validation and test sets, containing data from two PHUs in Tanzania and a European hospital, are provided through the Grand-Challenge platform. All sets were annotated by experienced readers. Sixteen international teams participated in this challenge, with six teams submitting to the Final Test Phase. In this article, we present the results of the three top-performing AI models from the ACOUSLIC-AI challenge, which are publicly accessible. We evaluate their performance in fetal abdomen frame selection, segmentation, and abdominal circumference measurement, and compare them against clinical standards for fetal AC measurement. Clinical comparisons demonstrated that the limits of agreement (LoA) for A2 in fetal AC measurements are comparable to the interobserver LoA reported in the literature. The algorithms developed as part of the ACOUSLIC-AI challenge provide a benchmark for future algorithms on the selection and segmentation of fetal abdomen frames to further minimize fetal abdominal circumference measurement variability.
Title: DeepSPV: A deep learning pipeline for 3D spleen volume estimation from 2D ultrasound images
Authors: Zhen Yuan, David Stojanovski, Lei Li, Alberto Gomez, Haran Jogeesvaran, Esther Puyol-Antón, Baba Inusa, Andrew P. King
Medical Image Analysis 105 (2025), Article 103671; published 2025-06-10. DOI: 10.1016/j.media.2025.103671

Abstract: Splenomegaly, the enlargement of the spleen, is an important clinical indicator for various associated medical conditions, such as sickle cell disease (SCD). Spleen length measured from 2D ultrasound is the most widely used metric for characterising spleen size. However, it is still considered a surrogate measure, and spleen volume remains the gold standard for assessing spleen size. Accurate spleen volume measurement typically requires 3D imaging modalities, such as computed tomography or magnetic resonance imaging, but these are not widely available, especially in the Global South, which has a high prevalence of SCD. In this work, we introduce a deep learning pipeline, DeepSPV, for precise spleen volume estimation from single or dual 2D ultrasound images. The pipeline involves a segmentation network and a variational autoencoder for learning low-dimensional representations from the estimated segmentations. We investigate three approaches for spleen volume estimation, and our best model achieves 86.62%/92.5% mean relative volume accuracy (MRVA) under single-view/dual-view settings, surpassing the performance of human experts. In addition, the pipeline can provide confidence intervals for the volume estimates as well as offering benefits in terms of interpretability, which further support clinicians in decision-making when identifying splenomegaly. We evaluate the full pipeline using a highly realistic synthetic dataset generated by a diffusion model, achieving an overall MRVA of 83.0% from a single 2D ultrasound image. Our proposed DeepSPV is the first work to use deep learning to estimate 3D spleen volume from 2D ultrasound images and can be seamlessly integrated into the current clinical workflow for spleen assessment. We also make our synthetic spleen ultrasound dataset publicly available.
Title: Robust image representations with counterfactual contrastive learning
Authors: Mélanie Roschewitz, Fabio De Sousa Ribeiro, Tian Xia, Galvin Khara, Ben Glocker
Medical Image Analysis 105 (2025), Article 103668; published 2025-06-10. DOI: 10.1016/j.media.2025.103668

Abstract: Contrastive pretraining can substantially increase model generalisation and downstream performance. However, the quality of the learned representations is highly dependent on the data augmentation strategy applied to generate positive pairs. Positive contrastive pairs should preserve semantic meaning while discarding unwanted variations related to the data acquisition domain. Traditional contrastive pipelines attempt to simulate domain shifts through pre-defined generic image transformations. However, these do not always mimic realistic and relevant domain variations for medical imaging, such as scanner differences. To tackle this issue, we herein introduce counterfactual contrastive learning, a novel framework leveraging recent advances in causal image synthesis to create contrastive positive pairs that faithfully capture relevant domain variations. Our method, evaluated across five datasets encompassing both chest radiography and mammography data, for two established contrastive objectives (SimCLR and DINO-v2), outperforms standard contrastive learning in terms of robustness to acquisition shift. Notably, counterfactual contrastive learning achieves superior downstream performance on both in-distribution and external datasets, especially for images acquired with scanners under-represented in the training set. Further experiments show that the proposed framework extends beyond acquisition shifts, with models trained with counterfactual contrastive learning reducing subgroup disparities across biological sex.
Title: Evaluation of techniques for automated classification and artery quantification of the circle of Willis on TOF-MRA images: The CROWN challenge
Authors: Iris N. Vos, Ynte M. Ruigrok, Edwin Bennink, Mireille R.E. Velthuis, Barbara Paic, Maud E.H. Ophelders, Myrthe A.D. Buser, Bas H.M. van der Velden, Geng Chen, Matthieu Coupet, Félix Dumais, Adrian Galdran, Zhang Junyi, Wei Liu, Ting Ma, Madhu S. Nair, Mathieu Naudin, Preena K.P., Keerthi A.S. Pillai, Pengcheng Shi, Hugo J. Kuijf
Medical Image Analysis 105 (2025), Article 103650; published 2025-06-07. DOI: 10.1016/j.media.2025.103650

Abstract: Assessing risk factors for intracranial aneurysm (IA) development on images is crucial for early detection of high-risk cases. IAs often form at bifurcations within the circle of Willis (CoW), but manual assessment of these arteries is both time-consuming and susceptible to inconsistencies. Previous studies on imaging markers for IA development lack sufficient evidence for clinical implications, highlighting the need for automated methods to assess CoW morphology. No systematic approach currently exists to identify the best methodological strategies. To address this, we organized a scientific challenge to compare various techniques against a clinical reference standard. Participants were tasked with (1) automated classification of CoW anatomical variants and (2) automated prediction of CoW artery diameters and bifurcation angles. We provided 300 TOF-MRA scans for training and another 300 for testing, all manually annotated. Submissions were evaluated using balanced accuracy, mean absolute error, and Pearson correlation coefficient metrics. This paper provides a detailed analysis of the results from six participating teams. The findings show that various methods may be suitable for automated CoW assessment, but that these need further improvement to meet clinical standards. The challenge remains open for future submissions, offering a benchmark for new techniques.
{"title":"Causal inertia proximal Mamba network for magnetic resonance image reconstruction","authors":"Tong Hou , Hongqing Zhu , Bingcang Huang , Kai Chen , Zhong Zheng","doi":"10.1016/j.media.2025.103649","DOIUrl":"10.1016/j.media.2025.103649","url":null,"abstract":"<div><div>Accurate and rapid Magnetic Resonance Imaging (MRI) is critical for clinical diagnosis. However, different sampling strategies and datasets act as confounding factors, significantly impacting the quality of image reconstruction. While existing methods can capture correlations between data during the imaging process, they overlook the deeper associations rooted in causal relationships. To address this issue, this paper proposes a Causal Inertial Proximal Mamba Network (CIPM-Net) to achieve robust and efficient MRI reconstruction. Specifically, we present a causal inertial proximal iterative algorithm that eliminates biases caused by confounding factors using a causal model, improving the ability of the algorithm to identify spurious correlations. Furthermore, to achieve an effective balance between global perception and computational efficiency during the reconstruction process, the proposed algorithm is extended into a Mamba-based network. At the channel level, a Causal Channel Mamba (CCM) module is introduced to suppress irrelevant channel features, thereby enhancing the quality of the reconstructed images. For spatial-domain, a novel Causal Spatial Mamba (CSM) module is designed to adaptively assign varying weights to pixel points, optimizing the extraction of spatial information. Additionally, to account for causal relationships in the frequency domain, a Causal Frequency Mamba (CFM) module is introduced to capture complex pathological features. Extensive experiments with different acceleration factors demonstrate the superiority of the proposed method. The results show that, compared to the baseline, CIPM-Net achieves average improvements of 5.69 dB in PSNR and 0.058 in SSIM on the IXI dataset, and 7 dB in PSNR and 0.072 in SSIM on the clinical dataset.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"105 ","pages":"Article 103649"},"PeriodicalIF":10.7,"publicationDate":"2025-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144241434","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Disentangling representations of retinal images with generative models
Authors: Sarah Müller, Lisa M. Koch, Hendrik P.A. Lensch, Philipp Berens
Medical Image Analysis 105 (2025), Article 103628; published 2025-06-06. DOI: 10.1016/j.media.2025.103628

Abstract: Retinal fundus images play a crucial role in the early detection of eye diseases. However, the impact of technical factors on these images can pose challenges for reliable AI applications in ophthalmology. For example, large fundus cohorts are often confounded by factors such as camera type, bearing the risk of learning shortcuts rather than the causal relationships behind the image generation process. Here, we introduce a population model for retinal fundus images that effectively disentangles patient attributes from camera effects, enabling controllable and highly realistic image generation. To achieve this, we propose a disentanglement loss based on distance correlation. Through qualitative and quantitative analyses, we show that our models encode desired information in disentangled subspaces and enable controllable image generation based on the learned subspaces, demonstrating the effectiveness of our disentanglement loss. The project code is publicly available at https://github.com/berenslab/disentangling-retinal-images.
Title: A novel explainable AI framework for medical image classification integrating statistical, visual, and rule-based methods
Authors: Naeem Ullah, Florentina Guzmán-Aroca, Francisco Martínez-Álvarez, Ivanoe De Falco, Giovanna Sannino
Medical Image Analysis 105 (2025), Article 103665; published 2025-06-06. DOI: 10.1016/j.media.2025.103665

Abstract: Artificial intelligence and deep learning are powerful tools for extracting knowledge from large datasets, particularly in healthcare. However, their black-box nature raises interpretability concerns, especially in high-stakes applications. Existing eXplainable Artificial Intelligence methods often focus solely on visualization or rule-based explanations, limiting the depth and clarity of interpretability. This work proposes a novel explainable AI method specifically designed for medical image analysis, integrating statistical, visual, and rule-based explanations to improve transparency in deep learning models. Statistical features are derived from deep features extracted using a custom MobileNetV2 model. A two-step feature selection method (zero-based filtering with mutual importance selection) ranks and refines these features. Decision tree and RuleFit models are employed to classify data and extract human-readable rules. Additionally, a novel statistical feature map overlay visualization generates heatmap-like representations of three key statistical measures (mean, skewness, and entropy), providing both localized and quantifiable visual explanations of model decisions. The proposed method has been validated on five medical imaging datasets (COVID-19 radiography, ultrasound breast cancer, brain tumor magnetic resonance imaging, lung and colon cancer histopathological, and glaucoma images), with results confirmed by medical experts, demonstrating its effectiveness in enhancing interpretability for medical image classification tasks.
Title: End-to-end breast cancer radiotherapy planning via LMMs with consistency embedding
Authors: Kwanyoung Kim, Yujin Oh, Sangjoon Park, Hwa Kyung Byun, Joongyo Lee, Jin Sung Kim, Yong Bae Kim, Jong Chul Ye
Medical Image Analysis 105 (2025), Article 103646; published 2025-06-06. DOI: 10.1016/j.media.2025.103646

Abstract: Recent advances in AI foundation models have significant potential for lightening the clinical workload by mimicking the comprehensive and multi-faceted approaches used by medical professionals. In the field of radiation oncology, integrating multiple modalities is of great importance, so the opportunities for foundation models are abundant. Inspired by this, here we present RO-LMM, a multi-purpose, comprehensive large multimodal model (LMM) tailored for the field of radiation oncology. This model effectively manages a series of tasks within the clinical workflow, including clinical context summarization, radiotherapy strategy suggestion, and plan-guided target volume segmentation, by leveraging the capabilities of LMMs. In particular, to perform consecutive clinical tasks without error accumulation, we present a novel Consistency Embedding Fine-Tuning (CEFTune) technique, which boosts the LMM's robustness to noisy inputs while preserving the consistency of handling clean inputs. We further extend this concept to an LMM-driven segmentation framework, leading to a novel Consistency Embedding Segmentation (CESEG) technique. Experimental results, including multi-center validation, confirm that RO-LMM with CEFTune and CESEG achieves promising performance on multiple clinical tasks with generalization capabilities.
Title: Learning multi-modal representations by watching hundreds of surgical video lectures
Authors: Kun Yuan, Vinkle Srivastav, Tong Yu, Joël L. Lavanchy, Jacques Marescaux, Pietro Mascagni, Nassir Navab, Nicolas Padoy
Medical Image Analysis 105 (2025), Article 103644; published 2025-06-04. DOI: 10.1016/j.media.2025.103644

Abstract: Recent advancements in surgical computer vision applications have been driven by vision-only models, which do not explicitly integrate the rich semantics of language into their design. These methods rely on manually annotated surgical videos to predict a fixed set of object categories, limiting their generalizability to unseen surgical procedures and downstream tasks. In this work, we put forward the idea that the surgical video lectures available through open surgical e-learning platforms can provide effective vision and language supervisory signals for multi-modal representation learning without relying on manual annotations. We address the surgery-specific linguistic challenges present in surgical video lectures by employing multiple complementary automatic speech recognition systems to generate text transcriptions. We then present a novel method, SurgVLP (Surgical Vision Language Pre-training), for multi-modal representation learning. SurgVLP constructs a new contrastive learning objective to align video clip embeddings with the corresponding multiple text embeddings by bringing them together within a joint latent space. To effectively demonstrate the representational capability of the learned joint latent space, we introduce several vision-and-language surgical tasks and evaluate various vision-only tasks specific to surgery, e.g., surgical tool, phase, and triplet recognition. Extensive experiments across diverse surgical procedures and tasks demonstrate that the multi-modal representations learned by SurgVLP exhibit strong transferability and adaptability in surgical video analysis. Furthermore, our zero-shot evaluations highlight SurgVLP's potential as a general-purpose foundation model for surgical workflow analysis, reducing the reliance on extensive manual annotations for downstream tasks, and facilitating adaptation methods such as few-shot learning to build a scalable and data-efficient solution for various downstream surgical applications. The code is available at https://github.com/CAMMA-public/SurgVLP.
Title: Accurate and efficient cardiac digital twin from surface ECGs: Insights into identifiability of ventricular conduction system
Authors: Thomas Grandits, Karli Gillette, Gernot Plank, Simone Pezzuto
Medical Image Analysis 105 (2025), Article 103641; published 2025-06-03. DOI: 10.1016/j.media.2025.103641

Abstract: Digital twins for cardiac electrophysiology are an enabling technology for precision cardiology. Current forward models are advanced enough to simulate the cardiac electric activity under different pathophysiological conditions and accurately replicate clinical signals like torso electrocardiograms (ECGs). In this work, we address the challenge of matching subject-specific QRS complexes using anatomically accurate, physiologically grounded cardiac digital twins. By fitting the initial conditions of a cardiac propagation model, our non-invasive method predicts activation patterns during sinus rhythm. For the first time, we demonstrate that distinct activation maps can generate identical surface ECGs. To address this non-uniqueness, we introduce a physiological prior based on the distribution of Purkinje-muscle junctions. Additionally, we develop a digital twin ensemble for probabilistic inference of cardiac activation. Our approach marks a significant advancement in the calibration of cardiac digital twins and enhances their credibility for clinical application.