Medical Image Analysis: Latest Articles

Abductive multi-instance multi-label learning for periodontal disease classification with prior domain knowledge
IF 10.7 · CAS Q1 (Medicine)
Medical Image Analysis · Pub Date: 2025-01-07 · DOI: 10.1016/j.media.2024.103452
Authors: Zi-Yuan Wu, Wei Guo, Wei Zhou, Han-Jia Ye, Yuan Jiang, Houxuan Li, Zhi-Hua Zhou
Abstract: Machine learning is widely used in dentistry nowadays, offering efficient solutions for diagnosing dental diseases such as periodontitis and gingivitis. Most existing methods for diagnosing periodontal diseases follow a two-stage process: they first detect and classify potential Regions of Interest (ROIs) and then determine the labels of the whole images. However, unlike the recognition of natural images, the diagnosis of periodontal diseases relies heavily on pinpointing the specific affected regions, which requires professional expertise that is not fully captured by existing models. To bridge this gap, we propose a novel ABductive Multi-Instance Multi-Label learning (AB-MIML) approach. In our approach, we treat entire intraoral images as "bags" and local patches as "instances". By improving current multi-instance multi-label methods, AB-MIML seeks to establish a comprehensive many-to-many relationship to model the intricate correspondence among images, patches, and their labels. Moreover, to harness prior domain knowledge, AB-MIML converts the expertise of doctors and the structural information of images into a knowledge base and performs abductive reasoning to assist the classification and diagnosis process. Experiments confirm the superior performance of the proposed method in diagnosing periodontal diseases compared to state-of-the-art approaches across various metrics. The method also proves valuable in identifying critical areas correlated with the diagnosis process, aligning closely with determinations made by human doctors.
Citations: 0
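For context, the bag/instance framing above can be illustrated with a minimal, generic multi-instance multi-label baseline in PyTorch: an image is a bag of patch features, and instance logits are max-pooled into multi-label bag logits. This is a sketch of the general MIML setup only, not the abductive AB-MIML method; the class name, feature dimension, and dummy data are illustrative.

```python
# Generic MIML baseline: an intraoral image is a "bag" of patch "instances";
# a bag is positive for a disease label if any instance is (max-pooling).
# Sketch only, NOT the AB-MIML method from the paper.
import torch
import torch.nn as nn

class MIMLMaxPool(nn.Module):
    def __init__(self, feat_dim: int, num_labels: int):
        super().__init__()
        self.instance_scorer = nn.Linear(feat_dim, num_labels)

    def forward(self, bag: torch.Tensor) -> torch.Tensor:
        # bag: (num_instances, feat_dim) patch features from one image
        instance_logits = self.instance_scorer(bag)   # (num_instances, num_labels)
        bag_logits, _ = instance_logits.max(dim=0)    # max-pool over instances
        return bag_logits                             # (num_labels,)

model = MIMLMaxPool(feat_dim=512, num_labels=4)
bag = torch.randn(36, 512)                 # 36 patches, 512-d features (dummy)
loss = nn.BCEWithLogitsLoss()(model(bag), torch.tensor([1., 0., 1., 0.]))
loss.backward()
```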
Semi-supervised medical image segmentation via weak-to-strong perturbation consistency and edge-aware contrastive representation
IF 10.7 · CAS Q1 (Medicine)
Medical Image Analysis · Pub Date: 2025-01-06 · DOI: 10.1016/j.media.2024.103450
Authors: Yang Yang, Guoying Sun, Tong Zhang, Ruixuan Wang, Jingyong Su
Abstract: Although supervised learning has demonstrated impressive accuracy in medical image segmentation, its reliance on large labeled datasets poses a challenge due to the effort and expertise required for data acquisition. Semi-supervised learning has emerged as a potential solution. However, it tends to yield satisfactory segmentation performance in the central region of the foreground but struggles in the edge region. In this paper, we propose a framework that effectively leverages unlabeled data to improve segmentation performance, especially in edge regions. Our framework includes two novel designs. First, we introduce a weak-to-strong perturbation strategy with a corresponding feature-perturbed consistency loss to efficiently utilize unlabeled data and guide the framework in learning reliable regions. Second, we propose an edge-aware contrastive loss that utilizes uncertainty to select positive pairs, thereby learning discriminative pixel-level features in the edge regions from unlabeled data. In this way, the model minimizes the discrepancy of multiple predictions and improves representation ability, aiming at strong performance in both primary and edge regions. We conducted a comparative analysis of the segmentation results on the publicly available BraTS2020 dataset, the LA dataset, and the 2017 ACDC dataset. Through extensive quantitative and visualization experiments under three standard semi-supervised settings, we demonstrate the effectiveness of our approach and set a new state of the art for semi-supervised medical image segmentation. Our code is released publicly at https://github.com/youngyzzZ/SSL-w2sPC.
Citations: 0
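The weak-to-strong perturbation idea above follows a FixMatch-style recipe: predictions on a weakly perturbed view serve as pseudo-labels for a strongly perturbed view, masked by confidence. Below is a minimal PyTorch sketch of that general recipe, not the paper's exact feature-perturbed consistency loss; the confidence threshold and tensor shapes are assumptions.

```python
# Weak-to-strong consistency for semi-supervised segmentation (generic sketch).
import torch
import torch.nn.functional as F

def weak_to_strong_consistency(model, x_weak, x_strong, conf_thresh=0.95):
    # Pseudo-label from the weakly perturbed view (no gradient flows here).
    with torch.no_grad():
        p_weak = torch.softmax(model(x_weak), dim=1)   # (B, C, H, W) probs
        conf, pseudo = p_weak.max(dim=1)               # (B, H, W)
    # Supervise the strongly perturbed view only where the model is confident.
    logits_strong = model(x_strong)                    # (B, C, H, W) logits
    loss = F.cross_entropy(logits_strong, pseudo, reduction="none")  # (B, H, W)
    mask = (conf >= conf_thresh).float()
    return (loss * mask).sum() / mask.sum().clamp(min=1.0)
```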
Domain-specific information preservation for Alzheimer's disease diagnosis with incomplete multi-modality neuroimages
IF 10.7 · CAS Q1 (Medicine)
Medical Image Analysis · Pub Date: 2025-01-06 · DOI: 10.1016/j.media.2024.103448
Authors: Haozhe Xu, Jian Wang, Qianjin Feng, Yu Zhang, Zhenyuan Ning
Abstract: Although multi-modality neuroimages have advanced the early diagnosis of Alzheimer's Disease (AD), the missing-modality issue still poses a unique challenge in clinical practice. Recent studies have tried to impute the missing data so as to utilize all available subjects for training robust multi-modality models. However, these studies may overlook the modality-specific information inherent in multi-modality data; that is, different modalities possess distinct imaging characteristics and focus on different aspects of the disease. In this paper, we propose a domain-specific information preservation (DSIP) framework, consisting of a modality imputation stage and a status identification stage, for AD diagnosis with incomplete multi-modality neuroimages. In the first stage, a specificity-induced generative adversarial network (SIGAN) is developed to bridge the modality gap and capture modality-specific details for imputing high-quality neuroimages. In the second stage, a specificity-promoted diagnosis network (SPDN) is designed to promote inter-modality feature interaction and classifier robustness for identifying disease status accurately. Extensive experiments demonstrate that the proposed method significantly outperforms state-of-the-art methods in both the modality imputation and status identification tasks.
Citations: 0
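The SIGAN imputation stage belongs to the family of paired image-to-image translation GANs. As a rough illustration of the kind of objective such an imputer optimises, here is a generic pix2pix-style generator loss (adversarial term plus L1 reconstruction); it is not the authors' specificity-induced loss, and `G`, `D`, and `lambda_l1` are placeholders.

```python
# Generic paired translation objective for modality imputation:
# fool the discriminator while staying close to the real target image.
import torch
import torch.nn.functional as F

def generator_loss(D, G, src, tgt, lambda_l1=100.0):
    fake = G(src)                 # e.g. impute one modality from another
    logits = D(fake)              # discriminator score for the imputed image
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    rec = F.l1_loss(fake, tgt)    # pixel-level reconstruction fidelity
    return adv + lambda_l1 * rec
```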
Automated ultrasonography of hepatocellular carcinoma using discrete wavelet transform based deep-learning neural network
IF 10.7 · CAS Q1 (Medicine)
Medical Image Analysis · Pub Date: 2025-01-04 · DOI: 10.1016/j.media.2025.103453
Authors: Se-Yeol Rhyou, Jae-Chern Yoo
Abstract: This study introduces HCC-Net, a novel wavelet-based approach for the accurate diagnosis of hepatocellular carcinoma (HCC) from abdominal ultrasound (US) images using artificial neural networks. HCC-Net integrates the discrete wavelet transform (DWT) to decompose US images into four sub-band images, a lesion detector for hierarchical lesion localization, and a pattern-augmented classifier for generating pattern-enhanced lesion images and performing the subsequent classification. Lesion detection uses a hierarchical coarse-to-fine approach to minimize missed lesions: CoarseNet performs initial lesion localization, while FineNet identifies any lesions that were missed. In the classification phase, the wavelet components of detected lesions are synthesized to create pattern-augmented images that enhance feature distinction, resulting in highly accurate classifications. These augmented images are classified into 'Normal', 'Benign', or 'Malignant' categories according to their morphologic features on sonography. The experimental results demonstrate the effectiveness of the proposed coarse-to-fine detection framework and pattern-augmented classifier in lesion detection and classification. We achieved an accuracy of 96.2%, a sensitivity of 97.6%, and a specificity of 98.1% on the Samsung Medical Center dataset, indicating HCC-Net's potential as a reliable tool for liver cancer screening.
Citations: 0
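The DWT decomposition step is standard and easy to reproduce: a single-level 2-D wavelet transform splits an image into one approximation and three detail sub-bands, the four sub-band images mentioned above. A small sketch using the PyWavelets package; the Haar wavelet and random input are arbitrary stand-ins for a real US frame.

```python
# Single-level 2-D discrete wavelet transform: one approximation sub-band
# plus horizontal/vertical/diagonal detail sub-bands. Requires PyWavelets.
import numpy as np
import pywt

img = np.random.rand(256, 256)              # stand-in for an ultrasound frame
cA, (cH, cV, cD) = pywt.dwt2(img, "haar")   # each sub-band is 128x128 here
print(cA.shape, cH.shape, cV.shape, cD.shape)
```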
Strategies for generating synthetic computed tomography-like imaging from radiographs: A scoping review
IF 10.7 · CAS Q1 (Medicine)
Medical Image Analysis · Pub Date: 2025-01-04 · DOI: 10.1016/j.media.2025.103454
Authors: Daniel De Wilde, Olivier Zanier, Raffaele Da Mutten, Michael Jin, Luca Regli, Carlo Serra, Victor E. Staartjes
Abstract:
Background: Advancements in tomographic medical imaging have revolutionized diagnostics and treatment monitoring by offering detailed 3D visualization of internal structures. Despite the significant value of computed tomography (CT), challenges such as high radiation dosage and cost barriers limit its accessibility, especially in low- and middle-income countries. Recognizing the potential of radiographic imaging for reconstructing CT images, this scoping review explores the emerging field of synthesizing 3D CT-like images from 2D radiographs by examining current methodologies.
Methods: A scoping review was carried out following PRISMA-SR guidelines. Eligible articles were full-text articles published up to September 9, 2024, studying methodologies for the synthesis of 3D CT images from 2D biplanar or four-projection x-ray images, sourced from PubMed MEDLINE, Embase, and arXiv.
Results: 76 studies were included. The majority (50.8%, n = 30) were published between 2010 and 2020 (38.2%, n = 29) and from 2020 onwards (36.8%, n = 28), with European (40.8%, n = 31), North American (26.3%, n = 20), and Asian (32.9%, n = 25) institutions being the primary contributors. Anatomical regions varied, with 17.1% (n = 13) of studies not using clinical data. Studies focused on the chest (25%, n = 19), spine and vertebrae (17.1%, n = 13), coronary arteries (10.5%, n = 8), and cranial structures (10.5%, n = 8), among other anatomical regions. Convolutional neural networks (19.7%, n = 15), generative adversarial networks (21.1%, n = 16), and statistical shape models (15.8%, n = 12) emerged as the most applied methodologies. A limited number of the included studies explored conditional diffusion models, iterative reconstruction algorithms, statistical shape models, and digital tomosynthesis.
Conclusion: This scoping review summarizes current strategies and challenges in synthetic imaging generation. The development of 3D CT-like imaging from 2D radiographs could reduce radiation risk while simultaneously addressing the financial and logistical obstacles that impede global access to CT imaging. Despite initial promising results, the field faces challenges from varied methodologies and a frequent lack of proper validation, requiring further research to define synthetic imaging's clinical role.
Citations: 0
Unlocking the diagnostic potential of electrocardiograms through information transfer from cardiac magnetic resonance imaging
IF 10.7 · CAS Q1 (Medicine)
Medical Image Analysis · Pub Date: 2025-01-04 · DOI: 10.1016/j.media.2024.103451
Authors: Özgün Turgut, Philip Müller, Paul Hager, Suprosanna Shit, Sophie Starck, Martin J. Menten, Eimo Martens, Daniel Rueckert
Abstract: Cardiovascular diseases (CVD) can be diagnosed using various diagnostic modalities. The electrocardiogram (ECG) is a cost-effective and widely available diagnostic aid that provides functional information about the heart. However, its ability to classify and spatially localise CVD is limited. In contrast, cardiac magnetic resonance (CMR) imaging provides detailed structural information about the heart and thus enables evidence-based diagnosis of CVD, but long scan times and high costs limit its use in clinical routine. In this work, we present a deep learning strategy for cost-effective and comprehensive cardiac screening solely from ECG. Our approach combines multimodal contrastive learning with masked data modelling to transfer domain-specific information from CMR imaging to ECG representations. In extensive experiments using data from 40,044 UK Biobank subjects, we demonstrate the utility and generalisability of our method for subject-specific risk prediction of CVD and the prediction of cardiac phenotypes using only ECG data. Specifically, our novel multimodal pre-training paradigm improves performance by up to 12.19% for risk prediction and 27.59% for phenotype prediction. In a qualitative analysis, we demonstrate that our learned ECG representations incorporate information from CMR image regions of interest. Our entire pipeline is publicly available at https://github.com/oetu/MMCL-ECG-CMR.
Citations: 0
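The multimodal contrastive component described above is commonly implemented as a symmetric InfoNCE (CLIP-style) objective between paired embeddings of the same subject. A minimal sketch under that assumption; the paper additionally uses masked data modelling, which is not shown, and the temperature value is illustrative.

```python
# Symmetric InfoNCE for aligning paired ECG and CMR embeddings: matching
# pairs sit on the diagonal of the similarity matrix and act as targets.
import torch
import torch.nn.functional as F

def clip_loss(ecg_emb, cmr_emb, temperature=0.07):
    ecg = F.normalize(ecg_emb, dim=1)          # (B, D)
    cmr = F.normalize(cmr_emb, dim=1)          # (B, D)
    logits = ecg @ cmr.t() / temperature       # (B, B) cosine similarities
    targets = torch.arange(len(logits), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```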
Self-interactive learning: Fusion and evolution of multi-scale histomorphology features for molecular traits prediction in computational pathology
IF 10.7 · CAS Q1 (Medicine)
Medical Image Analysis · Pub Date: 2025-01-03 · DOI: 10.1016/j.media.2024.103437
Authors: Yang Hu, Korsuk Sirinukunwattana, Bin Li, Kezia Gaitskell, Enric Domingo, Willem Bonnaffé, Marta Wojciechowska, Ruby Wood, Nasullah Khalid Alham, Stefano Malacrino, Dan J Woodcock, Clare Verrill, Ahmed Ahmed, Jens Rittscher
Abstract: Predicting disease-related molecular traits from histomorphology brings great opportunities for precision medicine. Despite the rich information present in histopathological images, extracting fine-grained molecular features from standard whole slide images (WSI) is non-trivial. The task is further complicated by the lack of annotations for subtyping and by contextual histomorphological features that may span multiple scales. This work proposes a novel multiple-instance learning (MIL) framework capable of WSI-based cancer morpho-molecular subtyping by fusing features at different scales. Our method, Inter-MIL, follows a weakly-supervised scheme. It enables training of the patch-level encoder for WSI in a task-aware optimisation procedure, a step normally not modelled in most existing MIL-based WSI analysis frameworks. We demonstrate that optimising the patch-level encoder is crucial to achieving high-quality fine-grained and tissue-level subtyping results and offers a significant improvement over task-agnostic encoders. Our approach deploys a pseudo-label propagation strategy to update the patch encoder iteratively, allowing discriminative subtype features to be learned. This mechanism also enables extracting fine-grained attention within image tiles (the small patches), a task largely ignored in most existing weakly-supervised frameworks. With Inter-MIL, we carried out four challenging cancer molecular subtyping tasks in the context of ovarian, colorectal, lung, and breast cancer. Extensive evaluation results show that Inter-MIL is a robust framework for cancer morpho-molecular subtyping, with superior performance compared to several recently proposed methods in small-dataset scenarios where fewer than 100 training slides are available. The iterative optimisation mechanism of Inter-MIL significantly improves the quality of the image features learned by the patch embedder and generally directs the attention map to areas that better align with experts' interpretation, leading to the identification of more reliable histopathology biomarkers. Moreover, an external validation cohort is used to verify the robustness of Inter-MIL on molecular trait prediction.
Citations: 0
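The pseudo-label propagation strategy can be pictured as pushing the slide label down to the most-attended patches, which then form a pseudo-labelled set for re-training the patch encoder. Below is a schematic sketch of that core step only, not Inter-MIL's exact procedure; the random tensors and `top_k` value are stand-ins for real WSI patch data and attention scores.

```python
# Core step of pseudo-label propagation in an attention-MIL loop: select the
# top-attended patches of a slide and assign them the slide-level label.
import torch

def propagate_pseudo_labels(patch_feats, attn_scores, slide_label, top_k=8):
    # patch_feats: (N, D) patch embeddings; attn_scores: (N,) attention weights
    top = attn_scores.topk(min(top_k, len(attn_scores))).indices
    return patch_feats[top], torch.full((len(top),), slide_label)

feats = torch.randn(500, 256)        # 500 patches from one slide (dummy)
attn = torch.rand(500)               # attention weights from the MIL head (dummy)
sel_feats, pseudo = propagate_pseudo_labels(feats, attn, slide_label=1)
# sel_feats/pseudo would join the encoder's fine-tuning set; iterating
# encoder re-training and MIL re-fitting refines both components.
```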
SegRap2023: A benchmark of organs-at-risk and gross tumor volume Segmentation for Radiotherapy Planning of Nasopharyngeal Carcinoma
IF 10.7 · CAS Q1 (Medicine)
Medical Image Analysis · Pub Date: 2025-01-02 · DOI: 10.1016/j.media.2024.103447
Authors: Xiangde Luo, Jia Fu, Yunxin Zhong, Shuolin Liu, Bing Han, Mehdi Astaraki, Simone Bendazzoli, Iuliana Toma-Dasu, Yiwen Ye, Ziyang Chen, Yong Xia, Yanzhou Su, Jin Ye, Junjun He, Zhaohu Xing, Hongqiu Wang, Lei Zhu, Kaixiang Yang, Xin Fang, Zhiwei Wang, Shaoting Zhang
Abstract: Radiation therapy is a primary and effective treatment strategy for NasoPharyngeal Carcinoma (NPC). The precise delineation of Gross Tumor Volumes (GTVs) and Organs-At-Risk (OARs) is crucial in radiation treatment, directly impacting patient prognosis. Although deep learning has achieved remarkable performance on various medical image segmentation tasks, its performance on OARs and GTVs of NPC is still limited, and high-quality benchmark datasets for this task are highly desirable for model development and evaluation. To alleviate this problem, the SegRap2023 challenge was organized in conjunction with MICCAI 2023 and presented a large-scale benchmark for OAR and GTV segmentation with 400 Computed Tomography (CT) scans from 200 NPC patients, each with a pair of pre-aligned non-contrast and contrast-enhanced CT scans. The challenge aimed to segment 45 OARs and 2 GTVs from the paired CT scans of each patient, and received 10 and 11 complete submissions for the two tasks, respectively. In this paper, we detail the challenge and analyze the solutions of all participants. The average Dice similarity coefficient scores across all submissions ranged from 76.68% to 86.70% for OARs and from 70.42% to 73.44% for GTVs. We conclude that the segmentation of relatively large OARs is well-addressed, and more effort is needed for GTVs and small or thin OARs. The benchmark remains available at: https://segrap2023.grand-challenge.org.
Citations: 0
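The ranking metric quoted above is the Dice similarity coefficient, DSC = 2|A ∩ B| / (|A| + |B|) for a predicted mask A and a ground-truth mask B. A small reference implementation for binary masks:

```python
# Dice similarity coefficient for binary segmentation masks:
# DSC = 2 * |pred AND gt| / (|pred| + |gt|), ranging from 0 to 1.
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + eps)
```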
IVIM-Morph: Motion-compensated quantitative Intra-voxel Incoherent Motion (IVIM) analysis for functional fetal lung maturity assessment from diffusion-weighted MRI data
IF 10.7 · CAS Q1 (Medicine)
Medical Image Analysis · Pub Date: 2024-12-31 · DOI: 10.1016/j.media.2024.103445
Authors: Noga Kertes, Yael Zaffrani-Reznikov, Onur Afacan, Sila Kurugol, Simon K. Warfield, Moti Freiman
Abstract: Quantitative analysis of pseudo-diffusion in diffusion-weighted magnetic resonance imaging (DWI) data shows potential for assessing fetal lung maturation and generating valuable imaging biomarkers. Yet, the clinical utility of DWI data is hindered by unavoidable fetal motion during acquisition. We present IVIM-morph, a self-supervised deep neural network model for motion-corrected quantitative analysis of DWI data using the Intra-voxel Incoherent Motion (IVIM) model. IVIM-morph combines two sub-networks, a registration sub-network and an IVIM model-fitting sub-network, enabling simultaneous estimation of IVIM model parameters and motion. To promote physically plausible image registration, we introduce a biophysically informed loss function that effectively balances registration and model-fitting quality. We validated the efficacy of IVIM-morph by establishing a correlation between the predicted IVIM model parameters of the lung and gestational age (GA) using fetal DWI data of 39 subjects. Our approach was compared against six baseline methods: (1) no motion compensation, (2) affine registration of all DWI images to the initial image, (3) deformable registration of all DWI images to the initial image, (4) deformable registration of each DWI image to its preceding image in the sequence, (5) iterative deformable motion compensation combined with IVIM model parameter estimation, and (6) self-supervised deep-learning-based deformable registration. IVIM-morph exhibited a notably improved correlation with GA when performing in-vivo quantitative analysis of fetal lung DWI data during the canalicular phase. Specifically, over two test groups of cases, it achieved an R_f^2 of 0.44 and 0.52, outperforming the values of 0.27 and 0.25, 0.25 and 0.00, 0.00 and 0.00, 0.38 and 0.00, and 0.07 and 0.14 obtained by the other methods. IVIM-morph shows potential for developing valuable biomarkers for non-invasive assessment of fetal lung maturity with DWI data. Moreover, its adaptability opens the door to potential applications in other clinical contexts where motion compensation is essential for quantitative DWI analysis. The IVIM-morph code is readily available at: https://github.com/TechnionComputationalMRILab/qDWI-Morph.
Citations: 0
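The IVIM model referenced above is the standard bi-exponential signal decay S(b)/S0 = f·exp(-b·D*) + (1-f)·exp(-b·D), with perfusion fraction f, pseudo-diffusion coefficient D*, and diffusion coefficient D. A conventional voxel-wise least-squares fit looks like the sketch below; IVIM-morph estimates these parameters jointly with motion inside a network, and the b-values, initial guesses, and bounds here are illustrative.

```python
# Bi-exponential IVIM signal model and a conventional least-squares fit
# on synthetic, S0-normalised data.
import numpy as np
from scipy.optimize import curve_fit

def ivim(b, f, d_star, d):
    return f * np.exp(-b * d_star) + (1.0 - f) * np.exp(-b * d)

b_vals = np.array([0., 50., 100., 200., 400., 600., 800.])   # s/mm^2
signal = ivim(b_vals, 0.3, 0.05, 0.002)        # synthetic ground truth
params, _ = curve_fit(ivim, b_vals, signal, p0=[0.2, 0.03, 0.001],
                      bounds=([0., 0., 0.], [1., 1., 0.1]))
print(params)   # recovers (f, D*, D)
```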
Style mixup enhanced disentanglement learning for unsupervised domain adaptation in medical image segmentation
IF 10.7 · CAS Q1 (Medicine)
Medical Image Analysis · Pub Date: 2024-12-30 · DOI: 10.1016/j.media.2024.103440
Authors: Zhuotong Cai, Jingmin Xin, Chenyu You, Peiwen Shi, Siyuan Dong, Nicha C. Dvornek, Nanning Zheng, James S. Duncan
Abstract: Unsupervised domain adaptation (UDA) has shown impressive performance in improving model generalizability to tackle the domain shift problem in cross-modality medical segmentation. However, most existing UDA approaches depend on high-quality image translation with diversity constraints to explicitly augment the potential data diversity, which makes it hard to ensure semantic consistency and to capture domain-invariant representations. In this paper, free of image translation and diversity constraints, we propose a novel Style Mixup Enhanced Disentanglement Learning (SMEDL) method for UDA medical image segmentation to further improve domain generalization and enhance domain-invariant learning ability. First, our method adopts disentangled style mixup to implicitly generate style-mixed domains with diverse styles in the feature space through a convex combination of disentangled style factors, which effectively improves model generalization. Meanwhile, we further introduce pixel-wise consistency regularization to ensure the effectiveness of the style-mixed domains and provide domain-consistency guidance. Second, we introduce dual-level domain-invariant learning, including intra-domain contrastive learning and inter-domain adversarial learning, to mine the underlying domain-invariant representation under both intra- and inter-domain variations. We have conducted comprehensive experiments evaluating our method on two public cardiac datasets and one brain dataset. Experimental results demonstrate that our proposed method achieves superior performance compared to state-of-the-art methods for UDA medical image segmentation.
Citations: 0
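The convex combination of style factors described above is closely related to MixStyle-type feature-statistics mixing. As a generic illustration, the sketch below mixes raw channel-wise means and standard deviations between instances rather than SMEDL's disentangled style factors; `alpha` is an assumed Beta-distribution parameter.

```python
# MixStyle-like style mixup: normalise away each instance's style (channel
# statistics), then re-style with a convex combination of two instances'
# statistics to synthesise an intermediate style.
import torch

def style_mixup(x: torch.Tensor, alpha: float = 0.1) -> torch.Tensor:
    # x: (B, C, H, W) feature maps
    lam = torch.distributions.Beta(alpha, alpha).sample().to(x.device)
    mu = x.mean(dim=(2, 3), keepdim=True)
    sig = x.std(dim=(2, 3), keepdim=True) + 1e-6
    perm = torch.randperm(x.size(0), device=x.device)   # pair with shuffled batch
    mu_mix = lam * mu + (1 - lam) * mu[perm]
    sig_mix = lam * sig + (1 - lam) * sig[perm]
    return sig_mix * (x - mu) / sig + mu_mix            # re-styled content
```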