Head and Neck Cancer Primary Tumor Auto-Segmentation Using Model Ensembling of Deep Learning in PET-CT Images

M. Naser, K. Wahid, L. V. Dijk, R. He, M. A. Abdelaal, C. Dede, A. Mohamed, C. Fuller

Published in: Head and neck tumor segmentation and outcome prediction: second challenge, HECKTOR 2021, held in conjunction with MICCAI 2021, Strasbourg, France, September 27, 2021, Proceedings, pp. 121-132. Publication date: 2021-10-18. DOI: 10.1101/2021.10.14.21264953.

Abstract. Auto-segmentation of primary tumors in oropharyngeal cancer using PET/CT images is an unmet need that has the potential to improve radiation oncology workflows. In this study, we develop a series of deep learning models based on a 3D Residual UNet (ResUNet) architecture that can segment oropharyngeal tumors with high performance, as demonstrated through internal and external validation on large-scale datasets (training set = 224 patients, test set = 101 patients) as part of the 2021 HECKTOR Challenge. Specifically, we leverage ResUNet models with either 256 or 512 bottleneck-layer channels that achieve an internal validation (10-fold cross-validation) mean Dice similarity coefficient (DSC) of up to 0.771 and a median 95% Hausdorff distance (95% HD) as low as 2.919 mm. We employ label fusion ensemble approaches, including Simultaneous Truth and Performance Level Estimation (STAPLE) and a voxel-level threshold approach based on majority voting (AVERAGE), to generate consensus segmentations on the test data by combining the segmentations produced by the different cross-validation models. We demonstrate that our best-performing ensembling approach (256-channel AVERAGE) achieves a mean DSC of 0.770 and a median 95% HD of 3.143 mm through independent external validation on the test set. The concordance of internal and external validation results suggests our models are robust and generalize well to unseen PET/CT data. We advocate that ResUNet models coupled with label fusion ensembling approaches are promising candidates for PET/CT auto-segmentation of oropharyngeal primary tumors, with future investigations targeting the ideal combination of channel counts and label fusion strategies to maximize segmentation performance.
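The AVERAGE label fusion described in the abstract — a voxel-level threshold on the majority vote of the cross-validation models' binary masks — can be sketched as below. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: the function names and the 0.5 vote threshold are illustrative, and the Dice coefficient is included only to show how the reported DSC metric is computed on binary masks.

```python
import numpy as np

def average_ensemble(masks, threshold=0.5):
    """Voxel-level majority voting (AVERAGE): average the binary
    segmentation masks produced by the cross-validation models and
    keep voxels whose mean vote meets the threshold."""
    mean_vote = np.mean(np.stack(masks, axis=0), axis=0)
    return (mean_vote >= threshold).astype(np.uint8)

def dice_coefficient(pred, truth):
    """Dice similarity coefficient (DSC) between two binary masks:
    2|A∩B| / (|A| + |B|)."""
    intersection = np.sum(pred * truth)
    denom = np.sum(pred) + np.sum(truth)
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Toy example: three model predictions on a 2x2 "image".
m1 = np.array([[1, 1], [0, 0]])
m2 = np.array([[1, 0], [0, 0]])
m3 = np.array([[1, 1], [1, 0]])
fused = average_ensemble([m1, m2, m3])  # keeps voxels with >= 2/3 votes
```

STAPLE, the other fusion strategy named in the abstract, instead estimates per-rater sensitivity and specificity via EM and is typically used through an existing implementation (e.g., SimpleITK's STAPLE filter) rather than re-derived by hand.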