Dual-modality visual feature flow for medical report generation
Quan Tang, Liming Xu, Yongheng Wang, Bochuan Zheng, Jiancheng Lv, Xianhua Zeng, Weisheng Li
Medical Image Analysis 101 (2024): 103413. https://doi.org/10.1016/j.media.2024.103413

Abstract: Medical report generation is a cross-modal task that produces professional descriptions of medical images in clinical language. Although existing methods have made progress, limitations remain: insufficient focus on lesion areas, omission of internal edge features, and difficulty in aligning cross-modal data. To address these issues, we propose Dual-Modality Visual Feature Flow (DMVF) for medical report generation. First, we introduce region-level features on top of grid-level features to strengthen the method's ability to identify lesions and key areas. Then, we enhance each of the two feature flows according to its attributes to prevent the loss of key information. Finally, a feature fusion module aligns the visual mappings from the different visual features with the report's textual embeddings to perform cross-modal learning. Extensive experiments on four benchmark datasets demonstrate that our approach outperforms state-of-the-art methods on both natural language generation and clinical efficacy metrics.
Toward automated detection of microbleeds with anatomical scale localization using deep learning
Jun-Ho Kim, Young Noh, Haejoon Lee, Seul Lee, Woo-Ram Kim, Koung Mi Kang, Eung Yeop Kim, Mohammed A. Al-masni, Dong-Hyun Kim
Medical Image Analysis (2024). https://doi.org/10.1016/j.media.2024.103415

Abstract: Cerebral microbleeds (CMBs) are chronic deposits of small blood products in the brain tissues that are explicitly related, depending on their anatomical location, to various cerebrovascular diseases, including cognitive decline, intracerebral hemorrhage, and cerebral infarction. Manual detection of CMBs is a time-consuming and error-prone process because of their sparse and tiny structural properties, and it is commonly confounded by CMB mimics such as calcifications and pial vessels, which cause a high false-positive rate (FPR). This paper proposes a novel 3D deep learning framework that not only detects CMBs but also identifies their anatomical location in the brain (i.e., lobar, deep, and infratentorial regions). For the detection task, we propose a single end-to-end model leveraging a 3D U-Net backbone with a Region Proposal Network (RPN). To significantly reduce false positives within this single model, we develop a new scheme containing a Feature Fusion Module (FFM), which detects small candidates using contextual information, and Hard Sample Prototype Learning (HSPL), which mines CMB mimics and generates an additional loss term, called concentration loss, using Convolutional Prototype Learning (CPL). For the anatomical localization task, we use a 3D U-Net segmentation network to segment anatomical structures of the brain; this identifies the region to which each CMB belongs and also eliminates some false positives from the detection task by leveraging anatomical information. We use susceptibility-weighted imaging (SWI) and phase images as 3D inputs to efficiently capture 3D information. The proposed RPN with FFM and HSPL outperforms the baseline RPN, achieving a sensitivity of 94.66% vs. 93.33% and an average number of false positives per subject (FPavg) of 0.86 vs. 14.73. Furthermore, the anatomical localization task enhances detection performance by reducing FPavg to 0.56 while maintaining the 94.66% sensitivity.
Comparative benchmarking of failure detection methods in medical image segmentation: Unveiling the role of confidence aggregation
Maximilian Zenk, David Zimmerer, Fabian Isensee, Jeremias Traub, Tobias Norajitra, Paul F. Jäger, Klaus Maier-Hein
Medical Image Analysis 101 (2024): 103392. https://doi.org/10.1016/j.media.2024.103392

Abstract: Semantic segmentation is an essential component of medical image analysis research, with recent deep learning algorithms offering out-of-the-box applicability across diverse datasets. Despite these advancements, segmentation failures remain a significant concern for real-world clinical applications, necessitating reliable detection mechanisms. This paper introduces a comprehensive benchmarking framework aimed at evaluating failure detection methodologies within medical image segmentation. Through our analysis, we identify the strengths and limitations of current failure detection metrics, advocating for risk-coverage analysis as a holistic evaluation approach. Utilizing a collective dataset comprising five public 3D medical image collections, we assess the efficacy of various failure detection strategies under realistic test-time distribution shifts. Our findings highlight the importance of pixel confidence aggregation, and we observe superior performance of the pairwise Dice score (Roy et al., 2019) between ensemble predictions, positioning it as a simple and robust baseline for failure detection in medical image segmentation. To promote ongoing research, we make the benchmarking framework available to the community.
Outlier detection in cardiac diffusion tensor imaging: Shot rejection or robust fitting?
Sam Coveney, Maryam Afzali, Lars Mueller, Irvin Teh, Arka Das, Erica Dall'Armellina, Filip Szczepankiewicz, Derek K. Jones, Jurgen E. Schneider
Medical Image Analysis 101 (2024): 103386. https://doi.org/10.1016/j.media.2024.103386

Abstract: Cardiac diffusion tensor imaging (cDTI) is highly prone to image corruption, yet robust-fitting methods are rarely used. Single voxel outlier detection (SVOD) can overlook corruptions that are visually obvious, perhaps causing reluctance to replace whole-image shot rejection (SR) despite its own deficiencies. SVOD's deficiencies may be relatively unimportant: corrupted signals that are not statistical outliers may not be detrimental. Multiple voxel outlier detection (MVOD), using a local myocardial neighbourhood, may overcome the shared deficiencies of SR and SVOD for cDTI while keeping the benefits of both. Here, robust fitting methods using M-estimators are derived for both non-linear least squares and weighted least squares fitting, and outlier detection is applied using (i) SVOD and (ii) SVOD plus MVOD. These methods, along with non-robust fitting with/without SR, are applied to cDTI datasets from healthy volunteers and hypertrophic cardiomyopathy patients. Robust fitting methods produce larger group differences with greater statistical significance for MD, FA, and E2A versus non-robust methods, with MVOD giving the largest group differences for MD and FA. Visual analysis demonstrates the superiority of robust-fitting methods over SR, especially when it is difficult to partition the images into good and bad sets. Synthetic experiments confirm that MVOD gives a lower root-mean-square error than SVOD.
{"title":"Self-supervised graph contrastive learning with diffusion augmentation for functional MRI analysis and brain disorder detection.","authors":"Xiaochuan Wang, Yuqi Fang, Qianqian Wang, Pew-Thian Yap, Hongtu Zhu, Mingxia Liu","doi":"10.1016/j.media.2024.103403","DOIUrl":"10.1016/j.media.2024.103403","url":null,"abstract":"<p><p>Resting-state functional magnetic resonance imaging (rs-fMRI) provides a non-invasive imaging technique to study patterns of brain activity, and is increasingly used to facilitate automated brain disorder analysis. Existing fMRI-based learning methods often rely on labeled data to construct learning models, while the data annotation process typically requires significant time and resource investment. Graph contrastive learning offers a promising solution to address the small labeled data issue, by augmenting fMRI time series for self-supervised learning. However, data augmentation strategies employed in these approaches may damage the original blood-oxygen-level-dependent (BOLD) signals, thus hindering subsequent fMRI feature extraction. In this paper, we propose a self-supervised graph contrastive learning framework with diffusion augmentation (GCDA) for functional MRI analysis. The GCDA consists of a pretext model and a task-specific model. In the pretext model, we first augment each brain functional connectivity network derived from fMRI through a graph diffusion augmentation (GDA) module, and then use two graph isomorphism networks with shared parameters to extract features in a self-supervised contrastive learning manner. The pretext model can be optimized without the need for labeled training data, while the GDA focuses on perturbing graph edges and nodes, thus preserving the integrity of original BOLD signals. The task-specific model involves fine-tuning the trained pretext model to adapt to downstream tasks. Experimental results on two rs-fMRI cohorts with a total of 1230 subjects demonstrate the effectiveness of our method compared with several state-of-the-arts.</p>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"101 ","pages":"103403"},"PeriodicalIF":10.7,"publicationDate":"2024-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142786172","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"COLLATOR: Consistent spatial–temporal longitudinal atlas construction via implicit neural representation","authors":"Lixuan Chen, Xuanyu Tian, Jiangjie Wu, Guoyan Lao, Yuyao Zhang, Hongjiang Wei","doi":"10.1016/j.media.2024.103396","DOIUrl":"https://doi.org/10.1016/j.media.2024.103396","url":null,"abstract":"Longitudinal brain atlases that present brain development trend along time, are essential tools for brain development studies. However, conventional methods construct these atlases by independently averaging brain images from different individuals at discrete time points. This approach could introduce temporal inconsistencies due to variations in ontogenetic trends among samples, potentially affecting accuracy of brain developmental characteristic analysis. In this paper, we propose an implicit neural representation (INR)-based framework to improve the temporal consistency in longitudinal atlases. We treat temporal inconsistency as a 4-dimensional (4D) image denoising task, where the data consists of 3D spatial information and 1D temporal progression. We formulate the longitudinal atlas as an implicit function of the spatial–temporal coordinates, allowing structural inconsistency over the time to be considered as 3D image noise along age. Inspired by recent self-supervised denoising methods (e.g. Noise2Noise), our approach learns the noise-free and temporally continuous implicit function from inconsistent longitudinal atlas data. Finally, the time-consistent longitudinal brain atlas can be reconstructed by evaluating the denoised 4D INR function at critical brain developing time points. We evaluate our approach on three longitudinal brain atlases of different MRI modalities, demonstrating that our method significantly improves temporal consistency while accurately preserving brain structures. Additionally, the continuous functions generated by our method enable the creation of 4D atlases with higher spatial and temporal resolution. Code: <ce:inter-ref xlink:href=\"https://github.com/maopaom/COLLATOR\" xlink:type=\"simple\">https://github.com/maopaom/COLLATOR</ce:inter-ref>.","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"1 1","pages":""},"PeriodicalIF":10.9,"publicationDate":"2024-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142789979","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ZygoPlanner: A three-stage graphics-based framework for optimal preoperative planning of zygomatic implant placement.","authors":"Haitao Li, Xingqi Fan, Baoxin Tao, Wenying Wang, Yiqun Wu, Xiaojun Chen","doi":"10.1016/j.media.2024.103401","DOIUrl":"https://doi.org/10.1016/j.media.2024.103401","url":null,"abstract":"<p><p>Zygomatic implant surgery is an essential treatment option of oral rehabilitation for patients with severe maxillary defect, and preoperative planning is an important approach to enhance the surgical outcomes. However, the current planning still heavily relies on manual interventions, which is labor-intensive, experience-dependent, and poorly reproducible. Therefore, we propose ZygoPlanner, a pioneering efficient preoperative planning framework for zygomatic implantation, which may be the first solution that seamlessly involves the positioning of zygomatic bones, the generation of alternative paths, and the computation of optimal implantation paths. To efficiently achieve robust planning, we developed a graphics-based interpretable method for zygomatic bone positioning leveraging the shape prior knowledge. Meanwhile, a surface-faithful point cloud filling algorithm that works for concave geometries was proposed to populate dense points within the zygomatic bones, facilitating generation of alternative paths. Finally, we innovatively realized a graphical representation of the medical bone-to-implant contact to obtain the optimal results under multiple constraints. Clinical experiments confirmed the superiority of our framework across different scenarios. The source code is available at https://github.com/Haitao-Lee/auto_zygomatic_implantation.</p>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"101 ","pages":"103401"},"PeriodicalIF":10.7,"publicationDate":"2024-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142818562","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Prediction of the upright articulated spine shape in the operating room using conditioned neural kernel fields
Sylvain Thibeault, Marjolaine Roy-Beaudry, Stefan Parent, Samuel Kadoury
Medical Image Analysis 100 (2024): 103400. https://doi.org/10.1016/j.media.2024.103400

Abstract: Anterior vertebral tethering (AVT) is a minimally invasive spine surgery technique that treats severe spine deformations while preserving lower back mobility. However, patient positioning and surgical strategy greatly influence postoperative results. Predicting the upright geometry of pediatric spines is needed to optimize patient positioning in the operating room (OR) and improve surgical outcomes, but it remains a complex task due to immature bone properties. We propose a framework used in the OR to predict the upright spine geometry at the first visit following surgery in idiopathic scoliosis patients. The approach first creates a 3D model of the spine while the patient is on the operating table; multiview Transformers that combine images from different viewpoints generate the intraoperative pose. The postoperative upright shape is then predicted on the fly using implicit neural fields, trained on geometries from different time points and conditioned on surgical parameters. A signed distance function for shape constellations handles the variability in spine appearance, capturing a disentangled latent domain of the articulation vectors, with separate encoding vectors representing articulation and shape parameters. A regularization criterion based on a pre-trained group-wise trajectory of spine transformations generates complete spine models. A training set of 652 patients with 3D models was used to train the model, which was tested on a distinct cohort of 83 surgical patients. The neural-kernel framework predicted upright 3D geometries with a mean 3D error of 1.3 ± 0.5 mm at landmark points and an IoU of 95.9% for vertebral shapes when compared with actual postoperative models, falling within the acceptable margin of error of 2 mm.
Multi-scale region selection network in deep features for full-field mammogram classification
Luhao Sun, Bowen Han, Wenzong Jiang, Weifeng Liu, Baodi Liu, Dapeng Tao, Zhiyong Yu, Chao Li
Medical Image Analysis 100 (2024): 103399. https://doi.org/10.1016/j.media.2024.103399

Abstract: Early diagnosis and treatment of breast cancer can effectively reduce mortality. Since mammography is one of the most common methods for early diagnosis of breast cancer, the classification of mammogram images is an important task for computer-aided diagnosis (CAD) systems. With the development of deep learning in CAD, deep convolutional neural networks have been shown to classify breast tumor patches with high quality, which has led most previous CNN-based full-field mammography classification methods to rely on region-of-interest (ROI) or segmentation annotations so that the model can locate and focus on small tumor regions. However, this dependence on ROIs greatly limits the development of CAD, because obtaining a large number of reliable ROI annotations is expensive and difficult. Some full-field mammography classification algorithms use multi-stage training or multiple feature extractors to remove the dependence on ROIs, which increases the computational cost of the model and introduces feature redundancy. To reduce the cost of model training and make full use of the feature extraction capability of CNNs, we propose a deep multi-scale region selection network (MRSN) in deep features, trained end to end to classify full-field mammograms without ROI or segmentation annotations. Inspired by multiple-instance learning and patch classifiers, MRSN filters the feature information and retains only the features of the tumor region, bringing the performance of the full-field image classifier closer to that of a patch classifier. MRSN first scores different regions at different scales to obtain the locations of tumor regions, then selects a few high-scoring regions as feature representations of the entire image, allowing the model to focus on the tumor region. Experiments on two public datasets and one private dataset show that the proposed MRSN achieves state-of-the-art performance.
NCCT-to-CECT synthesis with contrast-enhanced knowledge and anatomical perception for multi-organ segmentation in non-contrast CT images
Liming Zhong, Ruolin Xiao, Hai Shu, Kaiyi Zheng, Xinming Li, Yuankui Wu, Jianhua Ma, Qianjin Feng, Wei Yang
Medical Image Analysis 100 (2024): 103397. https://doi.org/10.1016/j.media.2024.103397

Abstract: Contrast-enhanced computed tomography (CECT) is routinely used for delineating organs-at-risk (OARs) in radiation therapy planning, and the delineated OARs must be transferred from CECT to non-contrast CT (NCCT) for dose calculation. However, the use of iodinated contrast agents (CA) in CECT and the dose-calculation errors caused by spatial misalignment between NCCT and CECT images pose risks of adverse side effects. A promising solution is to synthesize CECT images from NCCT scans, which can improve the visibility of organs and abnormalities for more effective multi-organ segmentation in NCCT images. However, existing methods neglect the differences between tissues induced by CA and lack the ability to synthesize the details of organ edges and blood vessels. To address these issues, we propose a contrast-enhanced knowledge and anatomical perception network (CKAP-Net) for NCCT-to-CECT synthesis. CKAP-Net leverages a contrast-enhanced knowledge learning network to capture both similarities and dissimilarities in domain characteristics attributable to CA; specifically, a CA-based perceptual loss function is introduced to enhance the synthesis of CA details. Furthermore, we design a multi-scale anatomical perception Transformer that utilizes multi-scale anatomical information from NCCT images, enabling the precise synthesis of tissue details. CKAP-Net is evaluated on a multi-center abdominal NCCT-CECT dataset, a head and neck NCCT-CECT dataset, and an NCMRI-CEMRI dataset. It achieves an MAE of 25.96 ± 2.64, an SSIM of 0.855 ± 0.017, and a PSNR of 32.60 ± 0.02 for CECT synthesis, and a DSC of 81.21 ± 4.44 for segmentation on the internal dataset. Extensive experiments demonstrate that CKAP-Net outperforms state-of-the-art CA synthesis methods and generalizes better across different datasets.