Comparative analysis of nnU-Net and Auto3Dseg for fat and fibroglandular tissue segmentation in MRI
Yasna Forghani, Rafaela Timóteo, Tiago Marques, Nuno Loução, Maria João Cardoso, Fátima Cardoso, Mario Figueiredo, Pedro Gouveia, João Santinha
Journal of Medical Imaging, 12(2), 024005 (2025). DOI: 10.1117/1.JMI.12.2.024005
Abstract
Purpose: Breast cancer, the most common cancer type among women worldwide, requires early detection and accurate diagnosis for improved treatment outcomes. Segmenting fat and fibroglandular tissue (FGT) in magnetic resonance imaging (MRI) is essential for creating volumetric models, enhancing surgical workflow, and improving clinical outcomes. Manual segmentation is time-consuming and subjective, prompting the development of automated deep-learning algorithms to perform this task. However, configuring these algorithms for 3D medical images is challenging due to variations in image features and preprocessing distortions. Automated machine learning (AutoML) frameworks automate model selection, hyperparameter tuning, and architecture optimization, offering a promising solution by reducing reliance on manual intervention and expert knowledge.
Approach: We compare nnU-Net and Auto3Dseg, two AutoML frameworks, in segmenting fat and FGT on T1-weighted MRI images from the Duke breast MRI dataset (100 patients). We used threefold cross-validation, employing the Dice similarity coefficient (DSC) and Hausdorff distance (HD) metrics for evaluation. The F-test and Tukey honestly significant difference (HSD) analysis were used to assess statistical differences across methods.
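For readers unfamiliar with the two evaluation metrics, the sketch below shows one way to compute the DSC and the symmetric Hausdorff distance between a predicted and a reference binary mask. This is not the authors' evaluation code; it assumes binary numpy masks of identical shape and unit voxel spacing (anisotropic spacing would require scaling the coordinates).

```python
# Minimal sketch of the two evaluation metrics (not the authors' pipeline).
# Assumes pred and ref are binary numpy arrays of the same shape, unit spacing.
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial.distance import directed_hausdorff


def dice_coefficient(pred: np.ndarray, ref: np.ndarray) -> float:
    """DSC = 2|P intersect R| / (|P| + |R|)."""
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0


def _surface_points(mask: np.ndarray) -> np.ndarray:
    """Coordinates of boundary voxels (mask minus its erosion)."""
    mask = mask.astype(bool)
    boundary = mask & ~binary_erosion(mask)
    return np.argwhere(boundary)


def hausdorff_distance(pred: np.ndarray, ref: np.ndarray) -> float:
    """Symmetric Hausdorff distance between the two surfaces, in voxel units."""
    p_pts = _surface_points(pred)
    r_pts = _surface_points(ref)
    return max(directed_hausdorff(p_pts, r_pts)[0],
               directed_hausdorff(r_pts, p_pts)[0])
```

DSC rewards volumetric overlap, while HD is driven by the single worst boundary deviation, which is why the two metrics can disagree about which method is better.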
Results: nnU-Net achieved DSC scores of 0.946 ± 0.026 (fat) and 0.872 ± 0.070 (FGT), whereas Auto3Dseg achieved 0.940 ± 0.026 (fat) and 0.871 ± 0.074 (FGT). Significant differences in fat HD (F = 6.3020, p < 0.001) originated from the full-resolution and the 3D cascade U-Net. No evidence of significant differences was found in FGT HD or DSC metrics.
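A minimal sketch of the statistical comparison described above, using a one-way ANOVA (F-test) followed by Tukey's HSD over per-case fat HD values. The file name and column layout ("model", "fat_hd") are illustrative assumptions, not the authors' data format.

```python
# Hedged sketch of the F-test + Tukey HSD comparison across model configurations.
import pandas as pd
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical per-case results table with columns: model, fat_hd
df = pd.read_csv("per_case_metrics.csv")
groups = [g["fat_hd"].values for _, g in df.groupby("model")]

# One-way ANOVA: H0 = all model configurations have equal mean fat HD
f_stat, p_value = f_oneway(*groups)
print(f"F = {f_stat:.4f}, p = {p_value:.4g}")

if p_value < 0.05:
    # Post-hoc pairwise comparisons with family-wise error control
    tukey = pairwise_tukeyhsd(endog=df["fat_hd"], groups=df["model"], alpha=0.05)
    print(tukey.summary())
```

The post-hoc step is what lets the analysis attribute the overall significant F-test to specific configurations, such as the full-resolution and 3D cascade U-Net variants reported above.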
Conclusions: Ensemble approaches of Auto3Dseg and nnU-Net demonstrated comparable performance in segmenting fat and FGT on breast MRI. The significant differences in fat HD underscore the importance of boundary-focused metrics in evaluating segmentation methods.
About the journal:
JMI covers fundamental and translational research, as well as applications, focused on medical imaging, which continue to yield physical and biomedical advancements in the early detection, diagnostics, and therapy of disease as well as in the understanding of normal. The scope of JMI includes: Imaging physics, Tomographic reconstruction algorithms (such as those in CT and MRI), Image processing and deep learning, Computer-aided diagnosis and quantitative image analysis, Visualization and modeling, Picture archiving and communications systems (PACS), Image perception and observer performance, Technology assessment, Ultrasonic imaging, Image-guided procedures, Digital pathology, Biomedical applications of biomedical imaging. JMI allows for the peer-reviewed communication and archiving of scientific developments, translational and clinical applications, reviews, and recommendations for the field.