Medical SAM adapter: Adapting segment anything model for medical image segmentation
Junde Wu, Ziyue Wang, Mingxuan Hong, Wei Ji, Huazhu Fu, Yanwu Xu, Min Xu, Yueming Jin
Medical Image Analysis, vol. 102, Article 103547 (2025-03-19). DOI: 10.1016/j.media.2025.103547
Abstract: The Segment Anything Model (SAM) has recently gained popularity in the field of image segmentation due to its impressive capabilities in various segmentation tasks and its prompt-based interface. However, recent studies and individual experiments have shown that SAM underperforms in medical image segmentation because it lacks medical-specific knowledge. This raises the question of how to enhance SAM's segmentation capability for medical images. We propose the Medical SAM Adapter (Med-SA), one of the first methods to integrate SAM into medical image segmentation. Instead of fine-tuning the SAM model, Med-SA uses a lightweight yet effective adaptation technique to incorporate domain-specific medical knowledge into the segmentation model. We also propose Space-Depth Transpose (SD-Trans) to adapt 2D SAM to 3D medical images and the Hyper-Prompting Adapter (HyP-Adpt) to achieve prompt-conditioned adaptation. Comprehensive evaluation experiments on 17 medical image segmentation tasks across various modalities demonstrate the superior performance of Med-SA while updating only 2% of the SAM parameters (13M). Our code is released at https://github.com/KidsWithTokens/Medical-SAM-Adapter.
RED: Residual estimation diffusion for low-dose PET sinogram reconstruction
Xingyu Ai, Bin Huang, Fang Chen, Liu Shi, Binxuan Li, Shaoyu Wang, Qiegen Liu
Medical Image Analysis, vol. 102, Article 103558 (2025-03-18). DOI: 10.1016/j.media.2025.103558
Abstract: Recent advances in diffusion models have demonstrated exceptional performance in generative tasks across various fields. In positron emission tomography (PET), reducing the tracer dose leads to information loss in sinograms, and diffusion models can be used to reconstruct the missing information and improve imaging quality. Traditional diffusion models use Gaussian noise effectively for image reconstruction; in low-dose PET reconstruction, however, Gaussian noise can worsen the already sparse data by introducing artifacts and inconsistencies. To address this issue, we propose a diffusion model named residual estimation diffusion (RED). From the perspective of the diffusion mechanism, RED uses the residual between sinograms in place of Gaussian noise in the diffusion process and sets the low-dose and full-dose sinograms as the starting point and endpoint of reconstruction, respectively. This mechanism helps preserve the original information in the low-dose sinogram, thereby enhancing reconstruction reliability. From the perspective of data consistency, RED introduces a drift-correction strategy to reduce accumulated prediction errors during the reverse process; calibrating the intermediate results of the reverse iterations helps maintain data consistency and enhances the stability of the reconstruction process. In the experiments, RED achieved the best performance across all metrics. Specifically, PSNR improved by 2.75, 5.45, and 8.08 dB at dose reduction factors (DRF) of 4, 20, and 100, respectively, compared to traditional methods. The code is available at: https://github.com/yqx7150/RED.
Segment Any Tissue: One-shot reference guided training-free automatic point prompting for medical image segmentation
Xueyu Liu, Guangze Shi, Rui Wang, Yexin Lai, Jianan Zhang, Weixia Han, Min Lei, Ming Li, Xiaoshuang Zhou, Yongfei Wu, Chen Wang, Wen Zheng
Medical Image Analysis, vol. 102, Article 103550 (2025-03-18). DOI: 10.1016/j.media.2025.103550
Abstract: Medical image segmentation frequently encounters high annotation costs and challenges in task adaptation. While visual foundation models have shown promise in natural image segmentation, automatically generating high-quality prompts for class-agnostic segmentation of medical images remains a significant practical challenge. To address these challenges, we present Segment Any Tissue (SAT), an innovative, training-free framework designed to automatically prompt the class-agnostic visual foundation model for the segmentation of medical images with only a one-shot reference. SAT leverages the robust feature-matching capabilities of a pretrained foundation model to construct distance metrics in the feature space. By integrating these with distance metrics in the physical space, SAT establishes a dual-space cyclic prompt engineering approach for automatic prompt generation, optimization, and evaluation. Subsequently, SAT utilizes a class-agnostic foundation segmentation model with the generated prompt scheme to obtain segmentation results. Additionally, we extend the one-shot framework by incorporating multiple reference images to construct an ensemble SAT, further enhancing segmentation performance. SAT has been validated on six public and private medical segmentation tasks, capturing both macroscopic and microscopic perspectives across multiple dimensions. In the ablation experiments, automatic prompt selection enabled SAT to effectively handle tissues of various sizes, while also validating the effectiveness of each component. The comparative experiments show that SAT is comparable to, or even exceeds, some fully supervised methods. It also demonstrates superior performance compared to existing one-shot methods. In summary, SAT requires only a single pixel-level annotated reference image to perform tissue segmentation across various medical images in a training-free manner. This not only significantly reduces the annotation costs of applying foundational models to the medical field but also enhances task transferability, providing a foundation for the clinical application of intelligent medicine. Our source code is available at https://github.com/SnowRain510/Segment-Any-Tissue.
{"title":"TriDeNT : Triple deep network training for privileged knowledge distillation in histopathology","authors":"Lucas Farndale , Robert Insall , Ke Yuan","doi":"10.1016/j.media.2025.103479","DOIUrl":"10.1016/j.media.2025.103479","url":null,"abstract":"<div><div>Computational pathology models rarely utilise data that will not be available for inference. This means most models cannot learn from highly informative data such as additional immunohistochemical (IHC) stains and spatial transcriptomics. We present TriDeNT <figure><img></figure>, a novel self-supervised method for utilising privileged data that is not available during inference to improve performance. We demonstrate the efficacy of this method for a range of different paired data including immunohistochemistry, spatial transcriptomics and expert nuclei annotations. In all settings, TriDeNT <figure><img></figure> outperforms other state-of-the-art methods in downstream tasks, with observed improvements of up to 101%. Furthermore, we provide qualitative and quantitative measurements of the features learned by these models and how they differ from baselines. TriDeNT <figure><img></figure> offers a novel method to distil knowledge from scarce or costly data during training, to create significantly better models for routine inputs.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"102 ","pages":"Article 103479"},"PeriodicalIF":10.7,"publicationDate":"2025-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143748549","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Subtyping breast lesions via collective intelligence based long-tailed recognition in ultrasound
Ruobing Huang, Yinyu Ye, Ao Chang, Han Huang, Zijie Zheng, Long Tan, Guoxue Tang, Man Luo, Xiuwen Yi, Pan Liu, Jiayi Wu, Baoming Luo, Dong Ni
Medical Image Analysis, vol. 102, Article 103548 (2025-03-17). DOI: 10.1016/j.media.2025.103548
Abstract: Breast lesions display a wide spectrum of histological subtypes. Recognizing these subtypes is vital for optimizing patient care and facilitating tailored treatment strategies, compared to a simplistic binary classification of malignancy. However, this task relies on invasive biopsy tests, which carry inherent risks and can lead to over-diagnosis, unnecessary expenses, and pain for patients. To avoid this, we propose to infer lesion subtypes from ultrasound images directly. Meanwhile, the incidence rates of different subtypes exhibit a skewed long-tailed distribution that presents substantial challenges for effective recognition. Inspired by collective intelligence in clinical diagnosis for handling complex or rare cases, we propose a framework, CoDE, that amalgamates the diverse expertise of different backbones to bolster robustness across varying scenarios for automated lesion subtyping. It utilizes dual-level balanced individual supervision to fully exploit prior knowledge while considering class imbalance. It is also equipped with a batch-based online competitive distillation module to stimulate dynamic knowledge exchange. Experimental results demonstrate that the model surpassed the state-of-the-art approaches by more than 7.22% in F1-score on a challenging breast dataset with an imbalance ratio as high as 47.9:1.
LW-CTrans: A lightweight hybrid network of CNN and Transformer for 3D medical image segmentation
Hulin Kuang, Yahui Wang, Xianzhen Tan, Jialin Yang, Jiarui Sun, Jin Liu, Wu Qiu, Jingyang Zhang, Jiulou Zhang, Chunfeng Yang, Jianxin Wang, Yang Chen
Medical Image Analysis, vol. 102, Article 103545 (2025-03-17). DOI: 10.1016/j.media.2025.103545
Abstract: Recent models based on convolutional neural networks (CNNs) and Transformers have achieved promising performance for 3D medical image segmentation. However, these methods cannot segment small targets well, even with large numbers of parameters. We therefore design a novel lightweight hybrid network that combines the strengths of CNNs and Transformers (LW-CTrans) and can boost the global and local representation capability at different stages. Specifically, we first design a dynamic stem that can accommodate images of various resolutions. In the first stage of the hybrid encoder, to capture local features with fewer parameters, we propose a multi-path convolution (MPConv) block. In the middle stages of the hybrid encoder, to learn global and local features simultaneously, we propose a multi-view pooling based Transformer (MVPFormer), which projects the 3D feature map onto three 2D subspaces to deal with small objects, and use the MPConv block to enhance local representation learning. In the final stage, to capture mostly global features, only the proposed MVPFormer is used. Finally, to reduce the parameters of the decoder, we propose a multi-stage feature fusion module. Extensive experiments on three public datasets for three tasks, stroke lesion segmentation, pancreas cancer segmentation and brain tumor segmentation, show that the proposed LW-CTrans achieves Dice scores of 62.35±19.51%, 64.69±20.58% and 83.75±15.77% on the three datasets, respectively, outperforming 16 state-of-the-art methods, and its parameter counts (2.08M, 2.14M and 2.21M on the three datasets, respectively) are smaller than those of non-lightweight 3D methods and close to those of lightweight methods. Besides, LW-CTrans also achieves the best performance for small lesion segmentation.
{"title":"MGAug: Multimodal Geometric Augmentation in Latent Spaces of Image Deformations","authors":"Tonmoy Hossain, Miaomiao Zhang","doi":"10.1016/j.media.2025.103540","DOIUrl":"10.1016/j.media.2025.103540","url":null,"abstract":"<div><div>Geometric transformations have been widely used to augment the size of training images. Existing methods often assume a unimodal distribution of the underlying transformations between images, which limits their power when data with multimodal distributions occur. In this paper, we propose a novel model, <em>Multimodal Geometric Augmentation</em> (MGAug), that for the first time generates augmenting transformations in a multimodal latent space of geometric deformations. To achieve this, we first develop a deep network that embeds the learning of latent geometric spaces of diffeomorphic transformations (a.k.a. diffeomorphisms) in a variational autoencoder (VAE). A mixture of multivariate Gaussians is formulated in the tangent space of diffeomorphisms and serves as a prior to approximate the hidden distribution of image transformations. We then augment the original training dataset by deforming images using randomly sampled transformations from the learned multimodal latent space of VAE. To validate the efficiency of our model, we jointly learn the augmentation strategy with two distinct domain-specific tasks: multi-class classification on both synthetic 2D and real 3D brain MRIs, and segmentation on real 3D brain MRIs dataset. We also compare MGAug with state-of-the-art transformation-based image augmentation algorithms. Experimental results show that our proposed approach outperforms all baselines by significantly improved prediction accuracy. Our code is publicly available at <span><span>GitHub</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"102 ","pages":"Article 103540"},"PeriodicalIF":10.7,"publicationDate":"2025-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143674378","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fast and ultra-high shot diffusion MRI image reconstruction with self-adaptive Hankel subspace
Chen Qian, Mingyang Han, Liuhong Zhu, Zi Wang, Feiqiang Guan, Yucheng Guo, Dan Ruan, Yi Guo, Taishan Kang, Jianzhong Lin, Chengyan Wang, Merry Mani, Mathews Jacob, Meijin Lin, Di Guo, Xiaobo Qu, Jianjun Zhou
Medical Image Analysis, vol. 102, Article 103546 (2025-03-14). DOI: 10.1016/j.media.2025.103546
Abstract: Multi-shot interleaved echo planar imaging is widely employed for acquiring high-resolution and low-distortion diffusion weighted images (DWI). These DWI images, however, are easily affected by motion artifacts induced by inter-shot phase variations, which can be removed by enforcing the low-rankness of a huge 2D block Hankel matrix of the k-space. Successful applications have been demonstrated on 4 to 8 shot DWI, but failures have been observed at ultra-high shot numbers, e.g. 10 to 12 shots, limiting the extension to higher-resolution DWI. Moreover, the 2D Hankel matrix reconstruction is very time-consuming. Here, we propose to accelerate the reconstruction by decomposing this huge 2D matrix into small 1D lOw-raNk HAnkel (DONA) matrices, one for each k-space readout line. This extension encounters another problem: the low-rankness varies across the k-space. To address this issue, we propose to separate the signal subspaces of the 1D Hankel matrices into a strong part and an uncertain part. The former is pre-estimated from an initial image to reduce the degrees of freedom in reconstruction; the latter protects image details in reconstruction by avoiding the overshadowing of small singular values. This method is called DONA with self-adapTive subspacE estimation (DONATE). In vivo results show that DONATE can not only accomplish 4-shot reconstruction in 10 s but also reconstruct 12 shots with 10 times faster computation. Besides, DONATE shows superior performance on low-distortion spine DWI reconstruction and in subjective image quality evaluation by blind scoring from 4 radiologists.
{"title":"A deep learning approach to multi-fiber parameter estimation and uncertainty quantification in diffusion MRI","authors":"William Consagra , Lipeng Ning , Yogesh Rathi","doi":"10.1016/j.media.2025.103537","DOIUrl":"10.1016/j.media.2025.103537","url":null,"abstract":"<div><div>Diffusion MRI (dMRI) is the primary imaging modality used to study brain microstructure <em>in vivo</em>. Reliable and computationally efficient parameter inference for common dMRI biophysical models is a challenging inverse problem, due to factors such as variable dimensionalities (reflecting the unknown number of distinct white matter fiber populations in a voxel), low signal-to-noise ratios, and non-linear forward models. These challenges have led many existing methods to use biologically implausible simplified models to stabilize estimation, for instance, assuming shared microstructure across all fiber populations within a voxel. In this work, we introduce a novel sequential method for multi-fiber parameter inference that decomposes the task into a series of manageable subproblems. These subproblems are solved using deep neural networks tailored to problem-specific structure and symmetry, and trained via simulation. The resulting inference procedure is largely amortized, enabling scalable parameter estimation and uncertainty quantification across all model parameters. Simulation studies and real imaging data analysis using the Human Connectome Project (HCP) demonstrate the advantages of our method over standard alternatives. In the case of the standard model of diffusion, our results show that under HCP-like acquisition schemes, estimates for extra-cellular parallel diffusivity are highly uncertain, while those for the intra-cellular volume fraction can be estimated with relatively high precision.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"102 ","pages":"Article 103537"},"PeriodicalIF":10.7,"publicationDate":"2025-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143670260","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Self-knowledge distillation-empowered directional connectivity transformer for microbial keratitis biomarkers segmentation on slit-lamp photography
Ziyun Yang, Maria A. Woodward, Leslie M. Niziol, Mercy Pawar, N. Venkatesh Prajna, Anusha Krishnamoorthy, Yiqing Wang, Ming-Chen Lu, Suvitha Selvaraj, Sina Farsiu
Medical Image Analysis, vol. 102, Article 103533 (2025-03-13). DOI: 10.1016/j.media.2025.103533
Abstract: The lack of standardized, objective tools for measuring biomarker morphology poses a significant obstacle to managing Microbial Keratitis (MK). Previous studies have demonstrated that robust segmentation benefits MK diagnosis, management, and estimation of visual outcomes. However, despite exciting advances, current methods cannot accurately detect biomarker boundaries and differentiate the overlapped regions in challenging cases. In this work, we propose a novel self-knowledge distillation-empowered directional connectivity transformer, called SDCTrans. We utilize the directional connectivity modeling framework to improve biomarker boundary detection. The transformer backbone and the hierarchical self-knowledge distillation scheme in this framework enhance directional representation learning. We also propose an efficient segmentation head design to effectively segment overlapping regions. This is the first work that successfully incorporates directional connectivity modeling with a transformer. SDCTrans, trained and tested with a new large-scale MK dataset, accurately and robustly segments crucial biomarkers in three types of slit lamp biomicroscopy images. Through comprehensive experiments, we demonstrated the superiority of the proposed SDCTrans over current state-of-the-art models. We also show that our SDCTrans matches, if not outperforms, the performance of expert human graders in MK biomarker identification and visual acuity outcome estimation. Experiments on skin lesion images are also included as an illustrative example of SDCTrans' utility in other segmentation tasks. The new MK dataset and codes are available at https://github.com/Zyun-Y/SDCTrans.