Head and Neck Tumor Segmentation for MR-Guided Applications: First MICCAI Challenge, HNTS-MRG 2024, Held in Conjunction with MICCAI 2024, Marrakesh, Morocco, October 17, 2024, Proceedings - Latest Articles

Comparative Analysis of nnUNet and MedNeXt for Head and Neck Tumor Segmentation in MRI-Guided Radiotherapy.
Nikoo Moradi, André Ferreira, Behrus Puladi, Jens Kleesiek, Emad Fatemizadeh, Gijs Luijten, Victor Alves, Jan Egger
DOI: 10.1007/978-3-031-83274-1_10 (Vol. 15273, pp. 136-153; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11982674/pdf/)

Radiation therapy (RT) is essential in treating head and neck cancer (HNC), with magnetic resonance imaging (MRI)-guided RT offering superior soft-tissue contrast and functional imaging. However, manual tumor segmentation is time-consuming and complex, and therefore remains a challenge. In this study, we present our solution as team TUMOR to the HNTS-MRG24 MICCAI Challenge, which focuses on automated segmentation of primary gross tumor volumes (GTVp) and metastatic lymph node gross tumor volumes (GTVn) in pre-RT and mid-RT MRI images. We utilized the HNTS-MRG2024 dataset, which consists of 150 MRI scans from patients diagnosed with HNC, including original and registered pre-RT and mid-RT T2-weighted images with corresponding segmentation masks for GTVp and GTVn. We employed two state-of-the-art deep learning models, nnUNet and MedNeXt. For Task 1, we pretrained models on pre-RT registered and mid-RT images, followed by fine-tuning on original pre-RT images. For Task 2, we combined registered pre-RT images, registered pre-RT segmentation masks, and mid-RT data as a multi-channel input for training. Our solution for Task 1 achieved 1st place in the final test phase with an aggregated Dice Similarity Coefficient of 0.8254, and our solution for Task 2 ranked 8th with a score of 0.7005. The proposed solution is publicly available in a GitHub repository.
Citations: 0
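The Task 2 strategy above feeds registered pre-RT images, their segmentation masks, and the mid-RT image to the network as one multi-channel volume. A minimal sketch of that input assembly, assuming co-registered volumes of identical shape (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def build_task2_input(pre_rt_img, pre_rt_mask, mid_rt_img):
    """Stack the registered pre-RT image, its segmentation mask, and
    the mid-RT image into one (C, D, H, W) multi-channel volume."""
    assert pre_rt_img.shape == pre_rt_mask.shape == mid_rt_img.shape
    return np.stack([pre_rt_img, pre_rt_mask, mid_rt_img], axis=0)

# toy volumes with shape (D, H, W)
vol = np.zeros((8, 16, 16), dtype=np.float32)
x = build_task2_input(vol, vol, vol)
print(x.shape)  # (3, 8, 16, 16)
```

The network's first convolution then simply accepts three input channels instead of one.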
UMamba Adjustment: Advancing GTV Segmentation for Head and Neck Cancer in MRI-Guided RT with UMamba and NnU-Net ResEnc Planner.
Jintao Ren, Kim Hochreuter, Jesper Folsted Kallehauge, Stine Sofia Korreman
DOI: 10.1007/978-3-031-83274-1_9 (Vol. 15273, pp. 123-135; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11997962/pdf/)

Magnetic Resonance Imaging (MRI) plays a crucial role in MRI-guided adaptive radiotherapy for head and neck cancer (HNC) due to its superior soft-tissue contrast. However, accurately segmenting the gross tumor volume (GTV), which includes both the primary tumor (GTVp) and lymph nodes (GTVn), remains challenging. Recently, two deep learning segmentation innovations have shown great promise: UMamba, which effectively captures long-range dependencies, and the nnU-Net Residual Encoder (ResEnc), which enhances feature extraction through multistage residual blocks. In this study, we integrate these strengths into a novel approach, termed "UMambaAdj". Our proposed method was evaluated on the HNTS-MRG 2024 challenge test set using pre-RT T2-weighted MRI images, achieving an aggregated Dice Similarity Coefficient (DSCagg) of 0.751 for GTVp and 0.842 for GTVn, with a mean DSCagg of 0.796. This approach demonstrates potential for more precise tumor delineation in MRI-guided adaptive radiotherapy, ultimately improving treatment outcomes for HNC patients. Team: DCPT-Stine's group.
Citations: 0
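The DSCagg metric quoted throughout these entries is an aggregated Dice: intersections and volumes are pooled over all test cases before dividing, rather than averaging per-case Dice scores, so cases with empty ground truth are handled gracefully. A minimal sketch, assuming one binary mask per case (the function name is mine):

```python
import numpy as np

def dsc_agg(preds, refs):
    """Aggregated Dice: pool intersection and volume sums across all
    cases, then form a single ratio (2 * sum|A∩B| / sum(|A|+|B|))."""
    inter = sum(np.logical_and(p, r).sum() for p, r in zip(preds, refs))
    total = sum(p.sum() + r.sum() for p, r in zip(preds, refs))
    return 2.0 * inter / total if total > 0 else 1.0

# two toy cases: one partial overlap, one empty on both sides
a = [np.array([1, 1, 0, 0]), np.array([0, 0, 0, 0])]
b = [np.array([1, 0, 0, 0]), np.array([0, 0, 0, 0])]
print(dsc_agg(a, b))  # 2*1 / (2+1) ≈ 0.667
```

Note that a per-case mean Dice would be undefined (or arbitrarily scored) for the second, empty case; pooling avoids that.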
Head and Neck Tumor Segmentation of MRI from Pre- and Mid-Radiotherapy with Pre-Training, Data Augmentation and Dual Flow UNet.
Litingyu Wang, Wenjun Liao, Shichuan Zhang, Guotai Wang
DOI: 10.1007/978-3-031-83274-1_5 (Vol. 15273, pp. 75-86; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12022123/pdf/)

Accurate delineation of head and neck tumors and metastatic lymph nodes is crucial for treatment planning and prognostic analysis. Accurate segmentation and quantitative analysis of these structures require pixel-level annotation, making automated segmentation techniques essential for the diagnosis and treatment of head and neck cancer. In this study, we investigated the effects of multiple strategies on the segmentation of pre-radiotherapy (pre-RT) and mid-radiotherapy (mid-RT) images. For pre-RT images, we utilized 1) a fully supervised learning approach, and 2) the same approach enhanced with pre-trained weights and the MixUp data augmentation technique. For mid-RT images, we introduced a novel, computationally friendly network architecture that features separate encoders for mid-RT images and for registered pre-RT images with their labels; the mid-RT encoder branch progressively integrates information from the pre-RT images and labels during forward propagation. We selected the highest-performing model from each fold and used their predictions to create an ensemble average for inference. In the final test, competing as team HiLab, our models achieved an aggregated Dice Similarity Coefficient (DSC) of 82.38% for pre-RT and 72.53% for mid-RT. Our code is available at https://github.com/WltyBY/HNTS-MRG2024_train_code.
Citations: 0
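MixUp, mentioned above, blends pairs of training samples and their labels with a Beta-distributed weight. A generic sketch of the technique (alpha=0.2 is a common default, not necessarily this paper's setting):

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup(img1, lbl1, img2, lbl2, alpha=0.2):
    """Convex-combine two samples with weight lam ~ Beta(alpha, alpha).
    The same combination is applied to one-hot (soft) labels."""
    lam = rng.beta(alpha, alpha)
    img = lam * img1 + (1.0 - lam) * img2
    lbl = lam * lbl1 + (1.0 - lam) * lbl2
    return img, lbl

a_img, a_lbl = np.ones((4, 4)), np.array([1.0, 0.0])
b_img, b_lbl = np.zeros((4, 4)), np.array([0.0, 1.0])
mixed_img, mixed_lbl = mixup(a_img, a_lbl, b_img, b_lbl)
# mixed_img is constant (= lam) everywhere; the soft label sums to 1
```

The blended labels act as regularization, encouraging smoother decision boundaries between classes.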
Ensemble Deep Learning Models for Automated Segmentation of Tumor and Lymph Node Volumes in Head and Neck Cancer Using Pre- and Mid-Treatment MRI: Application of Auto3DSeg and SegResNet.
Dominic LaBella
DOI: 10.1007/978-3-031-83274-1_21 (Vol. 15273, pp. 259-273; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11978229/pdf/)

Automated segmentation of gross tumor volumes (GTVp) and lymph nodes (GTVn) in head and neck cancer using MRI presents a critical challenge with significant potential to enhance radiation oncology workflows. In this study, we developed a deep learning pipeline based on the SegResNet architecture, integrated into the Auto3DSeg framework, to achieve fully automated segmentation of pre-treatment (pre-RT) and mid-treatment (mid-RT) MRI scans as part of the DLaBella29 team submission to the HNTS-MRG 2024 challenge. For Task 1, we used an ensemble of six SegResNet models with predictions fused via weighted majority voting. The models were pre-trained on both pre-RT and mid-RT image-mask pairs, then fine-tuned on pre-RT data, without any pre-processing. For Task 2, an ensemble of five SegResNet models was employed, with predictions fused using majority voting. Pre-processing for Task 2 involved setting all voxels more than 1 cm from the registered pre-RT masks to background (value 0), followed by applying a bounding box to the image. Post-processing for both tasks removed tumor predictions smaller than 175-200 mm³ and node predictions under 50-60 mm³. Our models achieved test DSCagg scores of 0.72 and 0.82 for GTVn and GTVp in Task 1 (pre-RT MRI), and 0.81 and 0.49 for GTVn and GTVp in Task 2 (mid-RT MRI). This study underscores the feasibility and promise of deep learning-based auto-segmentation for improving clinical workflows in radiation oncology, particularly in adaptive radiotherapy. Future efforts will focus on refining mid-RT segmentation performance and further investigating the clinical implications of automated tumor delineation.
Citations: 0
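The volume-threshold post-processing described above (dropping tumor predictions under 175-200 mm³ and node predictions under 50-60 mm³) amounts to connected-component filtering. A sketch using scipy's component labelling; the threshold here is in voxels, and converting mm³ to a voxel count via the image spacing is left out:

```python
import numpy as np
from scipy import ndimage

def drop_small_components(mask, min_voxels):
    """Remove connected components smaller than min_voxels from a
    binary mask (volume-threshold post-processing)."""
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = np.bincount(labels.ravel())
    keep = sizes >= min_voxels
    keep[0] = False  # never keep the background label
    return keep[labels]

m = np.zeros((1, 8, 8), dtype=bool)
m[0, :4, :4] = True   # 16-voxel component, kept
m[0, 7, 7] = True     # 1-voxel speck, removed
out = drop_small_components(m, min_voxels=5)
print(out.sum())  # 16
```

Such filtering removes spurious specks that segmentation networks commonly produce far from the true tumor.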
Enhancing Head and Neck Tumor Segmentation in MRI: The Impact of Image Preprocessing and Model Ensembling.
Mehdi Astaraki, Iuliana Toma-Dasu
DOI: 10.1007/978-3-031-83274-1_8 (Vol. 15273, pp. 112-122; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12053515/pdf/)

The adoption of online adaptive MR-guided radiotherapy (MRgRT) for head and neck cancer (HNC) treatment faces challenges due to the complexity of manual HNC tumor delineation. This study focused on the problem of HNC tumor segmentation and investigated the effects of different preprocessing techniques, robust segmentation models, and ensembling steps on segmentation accuracy to propose an optimal solution. We contributed to the MICCAI Head and Neck Tumor Segmentation for MR-Guided Applications (HNTS-MRG) challenge, which comprises segmentation of HNC tumors in pre-RT (Task 1) and mid-RT (Task 2) MR images. In the internal validation phase, the most accurate results were achieved by ensembling two models trained on maximally cropped and contrast-enhanced images, which yielded average volumetric Dice scores of (0.680, 0.785) and (0.493, 0.810) for (GTVp, GTVn) on pre-RT and mid-RT volumes, respectively. For the final testing phase, the models were submitted under the team name "Stockholm_Trio", and the overall segmentation performance achieved aggregated Dice scores of (0.795, 0.849) and (0.553, 0.865) for the pre- and mid-RT tasks, respectively. The developed models are available at https://github.com/Astarakee/miccai24.
Citations: 0
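Ensembling models trained on differently preprocessed inputs is commonly done by averaging their class-probability maps and then taking the argmax. The abstract does not spell out the fusion rule used, so this is a generic sketch of probability-averaging over two models:

```python
import numpy as np

def ensemble_argmax(prob_maps):
    """Average per-model class-probability maps (shape (C, N) each),
    then take the per-voxel argmax over classes."""
    return np.mean(prob_maps, axis=0).argmax(axis=0)

# two toy 3-class probability maps over 4 voxels
p1 = np.array([[0.6, 0.2, 0.1, 0.1],
               [0.3, 0.7, 0.2, 0.2],
               [0.1, 0.1, 0.7, 0.7]])
p2 = np.array([[0.2, 0.1, 0.2, 0.6],
               [0.7, 0.8, 0.1, 0.2],
               [0.1, 0.1, 0.7, 0.2]])
print(ensemble_argmax([p1, p2]))  # [1 1 2 2]
```

Averaging probabilities (rather than hard labels) lets a confident model outvote an uncertain one at each voxel.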
Benchmark of Deep Encoder-Decoder Architectures for Head and Neck Tumor Segmentation in Magnetic Resonance Images: Contribution to the HNTSMRG Challenge.
Marek Wodzinski
DOI: 10.1007/978-3-031-83274-1_15 (Vol. 15273, pp. 204-213; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11977277/pdf/)

Radiation therapy is one of the most frequently applied cancer treatments worldwide, especially in the context of head and neck cancer. MRI-guided radiation therapy planning is becoming increasingly popular due to good soft-tissue contrast, the absence of radiation dose delivered to the patient, and the capability of performing functional imaging. However, MRI-guided radiation therapy requires segmentation of the cancer both before and during treatment. To date, this segmentation has usually been performed manually by experienced radiologists; recent advances in deep learning-based segmentation suggest it may be possible to perform it automatically. Nevertheless, the task is arguably more difficult with MRI than with, e.g., PET-CT, because even manual segmentation of head and neck cancer in MRI volumes is challenging and time-consuming. The importance of the problem motivated the HNTSMRG challenge, which aims to develop the most accurate segmentation methods, both before and during MRI-guided radiation therapy. In this work, we benchmark several state-of-the-art segmentation architectures to verify whether recent advances in deep encoder-decoder architectures are impactful for low-data regimes and low-contrast tasks like segmenting head and neck cancer in magnetic resonance images. We show that in such cases a traditional residual UNet-based method (DSC = 0.775/0.701) outperforms recent advances such as UNETR (DSC = 0.617/0.657), SwinUNETR (DSC = 0.757/0.700), and SegMamba (DSC = 0.708/0.683). The proposed method (team lWM) achieved mean aggregated Dice scores on the closed test set of 0.771 and 0.707 for the pre- and mid-therapy segmentation tasks, placing 14th and 6th, respectively. The results suggest that proper data preparation, objective function, and preprocessing are more influential for the segmentation of head and neck cancer than the deep network architecture.
Citations: 0
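Since the conclusion above credits the objective function over the network architecture, a soft Dice loss of the kind commonly paired with UNet-style segmentation is worth illustrating. This is a generic sketch, not necessarily the paper's exact formulation:

```python
import numpy as np

def soft_dice_loss(probs, target, eps=1e-6):
    """Soft Dice loss on foreground probabilities:
    1 - (2*|P∩T| + eps) / (|P| + |T| + eps)."""
    inter = (probs * target).sum()
    denom = probs.sum() + target.sum()
    return 1.0 - (2.0 * inter + eps) / (denom + eps)

t = np.array([1.0, 1.0, 0.0, 0.0])
perfect = soft_dice_loss(t, t)      # ≈ 0.0 (full overlap)
worst = soft_dice_loss(1.0 - t, t)  # ≈ 1.0 (no overlap)
```

Unlike voxel-wise cross-entropy, the Dice objective directly optimizes the overlap metric and is robust to the extreme foreground/background imbalance of tumor segmentation.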
Application of 3D nnU-Net with Residual Encoder in the 2024 MICCAI Head and Neck Tumor Segmentation Challenge.
Kaiyuan Ji, Zhihan Wu, Jing Han, Jun Jia, Guangtao Zhai, Jiannan Liu
DOI: 10.1007/978-3-031-83274-1_20 (Vol. 15273, pp. 250-258; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12097725/pdf/)

This article explores the potential of deep learning for the automated identification and delineation of primary tumor volumes (GTVp) and metastatic lymph nodes (GTVn) in radiation therapy planning, specifically using MRI data. Utilizing the high-quality dataset provided by the 2024 MICCAI Head and Neck Tumor Segmentation Challenge, this study employs a 3D nnU-Net model for automatic tumor segmentation. Our experiments revealed that the model performs poorly on cases with high background ratios, which prompted retraining on data selected for specific background ratios to improve segmentation performance. The results demonstrate that the model performs well on data with low background ratios, while optimization is still needed for high background ratios. The model also segments GTVn better than GTVp, with DSCagg scores of 0.6381 and 0.8064 for Task 1 and Task 2, respectively, during the final test phase. Future work will focus on optimizing the model and adjusting the network architecture, aiming to enhance GTVp segmentation while maintaining the effectiveness of GTVn segmentation, to increase accuracy and reliability in clinical applications.
Citations: 0
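Selecting training cases by background ratio, as described above, only requires the fraction of background voxels in each label mask. An illustrative sketch (the 0.5 cutoff and all names are invented for the example, not taken from the paper):

```python
import numpy as np

def background_ratio(mask):
    """Fraction of voxels labelled background (0) in a label mask."""
    return float((mask == 0).mean())

# toy masks: mostly background vs. mostly foreground
masks = [np.array([0, 0, 0, 1]), np.array([0, 1, 1, 1])]
selected = [m for m in masks if background_ratio(m) < 0.5]
print(len(selected))  # 1
```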
Head and Neck Gross Tumor Volume Automatic Segmentation Using PocketNet.
Awj Twam, Adrian Celaya, Evan Lim, Khaled Elsayes, David Fuentes, Tucker Netherton
DOI: 10.1007/978-3-031-83274-1_19 (Vol. 15273, pp. 241-249; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12151156/pdf/)

Head and neck cancer (HNC) represents a significant global health burden, often requiring complex treatment strategies including surgery, chemotherapy, and radiation therapy. Accurate delineation of tumor volumes is critical for effective treatment, particularly in MR-guided interventions, where soft-tissue contrast enhances visualization of tumor boundaries. Manual segmentation of gross tumor volumes (GTV) is labor-intensive, time-consuming, and prone to variability, motivating the development of automated segmentation techniques. Convolutional neural networks (CNNs) have emerged as powerful tools for this task, offering significant improvements in speed and consistency. In this study, we participated as Team Pocket in Task 1 of the HNTS-MRG 2024 Grand Challenge, which focuses on segmentation of the gross tumor volumes of the primary tumor (GTVp) and the nodal tumor (GTVn) in pre-radiotherapy MR images for HNC. We evaluated the application of PocketNet, a lightweight CNN architecture, to this task. In the final test phase of the challenge, PocketNet achieved an aggregated Dice-Sørensen coefficient (DSCagg) of 0.808 for GTVn and 0.732 for GTVp, with an overall mean of 0.77. These findings demonstrate the potential of PocketNet as an efficient and accurate solution for automated tumor segmentation in MR-guided HNC treatment workflows, with opportunities for further optimization to enhance performance.
Citations: 0
Enhancing nnUNetv2 Training with Autoencoder Architecture for Improved Medical Image Segmentation.
Yichen An, Zhimin Wang, Eric Ma, Hao Jiang, Weiguo Lu
DOI: 10.1007/978-3-031-83274-1_17 (Vol. 15273, pp. 222-229; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12053516/pdf/)

Auto-segmentation of gross tumor volumes (GTVs) in head and neck cancer (HNC) from MRI-guided radiotherapy (RT) images presents a significant challenge whose solution could greatly enhance clinical workflows in radiation oncology. In this study, we developed a novel deep learning model based on the nnUNetv2 framework, augmented with an autoencoder architecture. Our model introduces the original training images as an additional input channel and incorporates an MSE loss function to improve segmentation accuracy. The model was trained on a dataset of 150 HNC patients, with private evaluation on 50 test patients as part of the HNTS-MRG 2024 challenge. The aggregated Dice similarity coefficient (DSCagg) reached 0.8516 for metastatic lymph nodes (GTVn) and 0.7318 for the primary tumor (GTVp), for an average DSCagg of 0.7917 across both structures. By introducing an autoencoder output channel and combining Dice loss with mean squared error (MSE) loss, the enhanced nnUNet architecture effectively learned additional image features that improve segmentation accuracy. These findings suggest that deep learning models like our modified nnUNetv2 framework can significantly improve auto-segmentation accuracy in MRI-guided RT for HNC, contributing to more precise and efficient clinical workflows.
Citations: 0
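The dual-objective training described above combines a Dice loss on the segmentation output with an MSE reconstruction loss on the autoencoder output. A toy sketch of that combination (the equal weighting and all names are assumptions, not the paper's configuration):

```python
import numpy as np

def combined_loss(seg_probs, seg_target, recon, image, w_mse=1.0):
    """Dice loss on the segmentation head plus weighted MSE on the
    autoencoder reconstruction head."""
    inter = (seg_probs * seg_target).sum()
    dice = 1.0 - 2.0 * inter / (seg_probs.sum() + seg_target.sum() + 1e-6)
    mse = float(np.mean((recon - image) ** 2))
    return dice + w_mse * mse

# perfect segmentation and perfect reconstruction -> loss near zero
img = np.linspace(0, 1, 8)
tgt = (img > 0.5).astype(float)
print(round(combined_loss(tgt, tgt, img, img), 6))  # 0.0
```

The reconstruction term forces the shared encoder to retain image detail that a segmentation-only objective might discard.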
Deep Learning for Longitudinal Gross Tumor Volume Segmentation in MRI-Guided Adaptive Radiotherapy for Head and Neck Cancer.
Xin Tie, Weijie Chen, Zachary Huemann, Brayden Schott, Nuohao Liu, Tyler J Bradshaw
DOI: 10.1007/978-3-031-83274-1_7 (Vol. 15273, pp. 99-111; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12036643/pdf/)

Accurate segmentation of the gross tumor volume (GTV) is essential for effective MRI-guided adaptive radiotherapy (MRgART) in head and neck cancer. However, manual segmentation of the GTV over the course of therapy is time-consuming and prone to interobserver variability. Deep learning (DL) has the potential to overcome these challenges by automatically delineating GTVs. In this study, our team, UW LAIR, tackled both pre-radiotherapy (pre-RT, Task 1) and mid-radiotherapy (mid-RT, Task 2) tumor volume segmentation. To this end, we developed a series of DL models for longitudinal GTV segmentation. The backbone of our models for both tasks was SegResNet with deep supervision. For Task 1, we trained the model on a combined dataset of pre-RT and mid-RT MRI data, which improved the aggregated Dice similarity coefficient (DSCagg) on a hold-out internal testing set compared to models trained solely on pre-RT data. In Task 2, we introduced mask-aware attention modules, enabling pre-RT GTV masks to influence intermediate features learned from mid-RT data. This attention-based approach yielded slight improvements over the baseline method, which concatenated mid-RT MRI with pre-RT GTV masks as input. In the final testing phase, the ensemble of 10 pre-RT segmentation models achieved an average DSCagg of 0.794 in Task 1 (0.745 for the primary GTV (GTVp) and 0.844 for metastatic lymph nodes (GTVn)). For Task 2, the ensemble of 10 mid-RT segmentation models attained an average DSCagg of 0.733 (0.607 for GTVp and 0.859 for GTVn), earning 1st place. In summary, we present a collection of DL models that could facilitate GTV segmentation in MRgART, offering the potential to streamline radiation oncology workflows.
Citations: 0
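The mask-aware attention idea above lets a registered pre-RT GTV mask modulate intermediate features computed from mid-RT data. A toy, numpy-only gating sketch of the concept; the actual modules are learned network layers, and every name and constant here is illustrative:

```python
import numpy as np

def mask_aware_gate(features, mask, strength=4.0):
    """Rescale (C, H, W) feature maps with a sigmoid gate derived from
    a prior mask, so regions the mask flags are emphasised."""
    gate = 1.0 / (1.0 + np.exp(-strength * (mask - 0.5)))
    return features * (1.0 + gate)  # residual-style modulation

feat = np.ones((2, 4, 4))            # toy (C, H, W) features
mask = np.zeros((4, 4)); mask[:2] = 1.0  # prior tumor in top half
out = mask_aware_gate(feat, mask)
print(out[0, 0, 0] > out[0, 3, 0])   # True: masked rows amplified more
```

In the learned version, the gate would be produced by convolutions over mask-derived features rather than a fixed sigmoid.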