Deep Learning-based Automated Delineation of Head and Neck Malignant Lesions from PET Images

H. Arabi, Isaac Shiri, E. Jenabi, M. Becker, H. Zaidi
DOI: 10.1109/NSS/MIC42677.2020.9507977
Published in: 2020 IEEE Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC), 31 October 2020, pp. 1-3
Citations: 10

Abstract

Accurate delineation of the gross tumor volume (GTV) is critical for treatment planning in radiation oncology. This task is very challenging owing to the irregular and diverse shapes of malignant lesions. Manual delineation of GTVs on PET images is not only time-consuming but also suffers from inter- and intra-observer variability. In this work, we developed deep learning-based approaches for automated GTV delineation on PET images of head and neck cancer patients. To this end, V-Net, a fully convolutional neural network for volumetric medical image segmentation, and HighResNet, a 20-layer residual convolutional neural network, were adopted. 18F-FDG PET/CT images of 510 patients with head and neck cancer, on which reference GTVs had been manually defined, were utilized for training, evaluation, and testing of these algorithms. The input to these networks (in both the training and evaluation phases) consisted of 12×12×12 cm sub-volumes of the PET images containing the whole tumor volume and the neighboring background radiotracer uptake. These networks were trained to generate a binary mask representing the GTV on the input PET sub-volume. Standard segmentation metrics, including the Dice similarity coefficient and precision, were used for performance assessment of these algorithms. HighResNet achieved automated GTV delineation with a Dice index of 0.87±0.04, compared to 0.86±0.06 achieved by V-Net. Despite the close performance of the two approaches, HighResNet exhibited less variability among subjects, as reflected in its smaller standard deviation and significantly higher precision (0.87±0.07 versus 0.80±0.10). Deep learning techniques, in particular the HighResNet algorithm, exhibited promising performance for automated GTV delineation on head and neck PET images. Incorporating anatomical/structural information, particularly from MRI, may yield higher segmentation accuracy and less variability among subjects.
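The Dice similarity coefficient and precision reported above are standard overlap measures between a predicted binary mask and the reference mask. A minimal sketch of how they are computed on 3D volumes follows; the function name and the toy arrays are illustrative and not taken from the paper:

```python
import numpy as np

def dice_and_precision(pred: np.ndarray, ref: np.ndarray):
    """Dice = 2*TP / (|pred| + |ref|); precision = TP / |pred|,
    where TP is the number of voxels labeled 1 in both masks."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    tp = np.logical_and(pred, ref).sum()
    dice = 2.0 * tp / (pred.sum() + ref.sum())
    precision = tp / pred.sum()
    return float(dice), float(precision)

# Toy 3D example: an 8-voxel reference cube; the prediction misses
# 2 of those voxels and adds 2 spurious ones (TP=6, |pred|=|ref|=8).
ref = np.zeros((4, 4, 4), dtype=np.uint8)
ref[1:3, 1:3, 1:3] = 1
pred = ref.copy()
pred[1, 1, 1] = 0
pred[2, 2, 2] = 0
pred[0, 0, 0] = 1
pred[3, 3, 3] = 1

d, p = dice_and_precision(pred, ref)  # → dice 0.75, precision 0.75
```

Note that precision penalizes only false-positive voxels, which is why the paper can report nearly identical Dice scores for the two networks while HighResNet's higher precision (0.87 vs 0.80) indicates fewer spurious voxels outside the reference GTV.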