Smart contours: deep learning-driven internal gross tumor volume delineation in non-small cell lung cancer using 4D CT maximum and average intensity projections.
{"title":"Smart contours: deep learning-driven internal gross tumor volume delineation in non-small cell lung cancer using 4D CT maximum and average intensity projections.","authors":"Yuling Huang, Mingming Luo, Zan Luo, Mingzhi Liu, Junyu Li, Junming Jian, Yun Zhang","doi":"10.1186/s13014-025-02642-7","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Delineating the internal gross tumor volume (IGTV) is crucial for the treatment of non-small cell lung cancer (NSCLC). Deep learning (DL) enables the automation of this process; however, current studies focus mainly on multiple phases of four-dimensional (4D) computed tomography (CT), which leads to indirect results. This study proposed a DL-based method for automatic IGTV delineation using maximum and average intensity projections (MIP and AIP, respectively) from 4D CT.</p><p><strong>Methods: </strong>We retrospectively enrolled 124 patients with NSCLC and divided them into training (70%, n = 87) and validation (30%, n = 37) cohorts. Four-dimensional CT images were acquired, and the corresponding MIP and AIP images were generated. The IGTVs were contoured on 4D CT and used as the ground truth (GT). The MIP or AIP images, along with the corresponding IGTVs (IGTV<sub>MIP-manu</sub> and IGTV<sub>AIP-manu</sub>, respectively), were fed into the DL models for training and validation. We assessed the performance of three segmentation models-U-net, attention U-net, and V-net-using the Dice similarity coefficient (DSC) and the 95th percentile of the Hausdorff distance (HD95) as the primary metrics.</p><p><strong>Results: </strong>The attention U-net model trained on AIP images presented a mean DSC of 0.871 ± 0.048 and mean HD95 of 2.958 ± 2.266 mm, whereas the model trained on MIP images achieved a mean DSC of 0.852 ± 0.053 and mean HD95 of 3.209 ± 2.136 mm. Among the models, attention U-net and U-net achieved similar results, considerably surpassing V-net.</p><p><strong>Conclusions: </strong>DL models can automate IGTV delineation using MIP and AIP images, streamline contouring, and enhance the accuracy and consistency of lung cancer radiotherapy planning to improve patient outcomes.</p>","PeriodicalId":49639,"journal":{"name":"Radiation Oncology","volume":"20 1","pages":"59"},"PeriodicalIF":3.3000,"publicationDate":"2025-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12008886/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Radiation Oncology","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1186/s13014-025-02642-7","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ONCOLOGY","Score":null,"Total":0}
引用次数: 0
Abstract
Background: Delineating the internal gross tumor volume (IGTV) is crucial for the treatment of non-small cell lung cancer (NSCLC). Deep learning (DL) enables the automation of this process; however, current studies focus mainly on multiple phases of four-dimensional (4D) computed tomography (CT), which leads to indirect results. This study proposed a DL-based method for automatic IGTV delineation using maximum and average intensity projections (MIP and AIP, respectively) from 4D CT.
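For context, the MIP and AIP volumes are obtained by collapsing the respiratory-phase volumes of a 4D CT voxel-wise, so that tumor motion over the breathing cycle is encoded in a single image. The paper does not provide an implementation; the following is a minimal NumPy sketch under assumed conventions (phase-first array layout, HU values), not the authors' code.

```python
import numpy as np

def project_4dct(phases: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Collapse a 4D CT series into MIP and AIP volumes.

    phases: array of shape (n_phases, z, y, x) holding the
    respiratory-phase CT volumes (HU values).
    Returns (mip, aip), each of shape (z, y, x).
    """
    mip = phases.max(axis=0)   # voxel-wise maximum across breathing phases
    aip = phases.mean(axis=0)  # voxel-wise average across breathing phases
    return mip, aip

# toy example: 10 respiratory phases of a 64^3 volume (synthetic data)
phases = np.random.uniform(-1000, 400, size=(10, 64, 64, 64))
mip, aip = project_4dct(phases)
print(mip.shape, aip.shape)  # (64, 64, 64) (64, 64, 64)
```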
Methods: We retrospectively enrolled 124 patients with NSCLC and divided them into training (70%, n = 87) and validation (30%, n = 37) cohorts. Four-dimensional CT images were acquired, and the corresponding MIP and AIP images were generated. The IGTVs were contoured on 4D CT and used as the ground truth (GT). The MIP or AIP images, along with the corresponding IGTVs (IGTV_MIP-manu and IGTV_AIP-manu, respectively), were fed into the DL models for training and validation. We assessed the performance of three segmentation models (U-net, attention U-net, and V-net) using the Dice similarity coefficient (DSC) and the 95th percentile of the Hausdorff distance (HD95) as the primary metrics.
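The abstract names DSC and HD95 as the primary evaluation metrics. Below is a minimal sketch of how these are commonly computed from binary masks; the surface extraction via binary erosion and the default 1 mm isotropic spacing are assumptions for illustration, not details taken from the paper.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    return 2.0 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum())

def hd95(pred: np.ndarray, gt: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """95th-percentile symmetric Hausdorff distance (mm) between mask surfaces."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    pred_s = pred & ~binary_erosion(pred)  # surface voxels of the prediction
    gt_s = gt & ~binary_erosion(gt)        # surface voxels of the ground truth
    # distance of every voxel to the nearest surface voxel of the other mask
    dist_to_gt = distance_transform_edt(~gt_s, sampling=spacing)
    dist_to_pred = distance_transform_edt(~pred_s, sampling=spacing)
    d = np.concatenate([dist_to_gt[pred_s], dist_to_pred[gt_s]])
    return float(np.percentile(d, 95))
```

A lower HD95 (in mm) indicates closer agreement between the contour surfaces, while a DSC closer to 1 indicates greater volumetric overlap.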
Results: The attention U-net model trained on AIP images presented a mean DSC of 0.871 ± 0.048 and mean HD95 of 2.958 ± 2.266 mm, whereas the model trained on MIP images achieved a mean DSC of 0.852 ± 0.053 and mean HD95 of 3.209 ± 2.136 mm. Among the models, attention U-net and U-net achieved similar results, considerably surpassing V-net.
Conclusions: DL models can automate IGTV delineation using MIP and AIP images, streamline contouring, and enhance the accuracy and consistency of lung cancer radiotherapy planning to improve patient outcomes.
Journal: Radiation Oncology (Oncology; Radiology, Nuclear Medicine & Medical Imaging)
CiteScore: 6.50 | Self-citation rate: 2.80% | Annual publications: 181 | Review time: 3-6 weeks
Journal description:
Radiation Oncology encompasses all aspects of research that impacts on the treatment of cancer using radiation. It publishes findings in molecular and cellular radiation biology, radiation physics, radiation technology, and clinical oncology.