Volumetric scout CT images reconstructed from conventional two-view radiograph localizers using deep learning (Conference Presentation)

J. Montoya, Chengzhu Zhang, Ke Li, Guang-Hong Chen
{"title":"Volumetric scout CT images reconstructed from conventional two-view radiograph localizers using deep learning (Conference Presentation)","authors":"J. Montoya, Chengzhu Zhang, Ke Li, Guang-Hong Chen","doi":"10.1117/12.2513133","DOIUrl":null,"url":null,"abstract":"In this work, a deep neural network architecture was developed and trained to reconstruct volumetric CT images from two-view radiograph scout localizers. In clinical CT exams, each patient will receive a two-view scout scan to generate both lateral (LAT) and anterior-posterior (AP) radiographs to help CT technologist to prescribe scanning parameters. After that, patients go through CT scans to generate CT images for clinical diagnosis. Therefore, for each patient, we will have two-view radiographs as input data set and the corresponding CT images as output to form our training data set. In this work, more than 1.1 million diagnostic CT images and their corresponding projection data from 4214 clinically indicated CT studies were retrospectively collected. The dataset was used to train a deep neural network which inputs the AP and LAT projections and outputs a volumetric CT localizer. Once the model was trained, 3D localizers were reconstructed for a validation cohort and results were analyzed and compared with the standard MDCT images. In particular, we were interested in the use of 3D localizers for the purpose of optimizing tube current modulation schemes, therefore we compared water equivalent diameters (Dw), radiologic paths and radiation dose distributions. The quantitative evaluation yields the following results: The mean±SD percent difference in Dw was 0.6±4.7% in 3D localizers compared to the Dw measured from the conventional CT reconstructions. 3D localizers showed excellent agreement in radiologic path measurements. 
Gamma analysis of radiation dose distributions indicated a 97.3%, 97.3% and 98.2% of voxels with passing gamma index for anatomical regions in the chest, abdomen and pelvis respectively. These results demonstrate the great success of the developed deep learning reconstruction method to generate volumetric scout CT image volumes.","PeriodicalId":151764,"journal":{"name":"Medical Imaging 2019: Physics of Medical Imaging","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Medical Imaging 2019: Physics of Medical Imaging","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1117/12.2513133","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 4

Abstract

In this work, a deep neural network architecture was developed and trained to reconstruct volumetric CT images from two-view scout radiograph localizers. In a clinical CT exam, each patient first receives a two-view scout scan that produces lateral (LAT) and anterior-posterior (AP) radiographs, which the CT technologist uses to prescribe the scanning parameters. The patient then undergoes the diagnostic CT scan itself. Each patient therefore provides a natural training pair: the two-view radiographs as input and the corresponding CT images as output. More than 1.1 million diagnostic CT images and their corresponding projection data from 4214 clinically indicated CT studies were retrospectively collected and used to train a deep neural network that takes the AP and LAT projections as input and outputs a volumetric CT localizer. Once the model was trained, 3D localizers were reconstructed for a validation cohort, and the results were analyzed and compared with the standard MDCT images. Because we were particularly interested in using 3D localizers to optimize tube current modulation schemes, we compared water-equivalent diameters (Dw), radiologic paths, and radiation dose distributions. The quantitative evaluation yielded the following results: the mean ± SD percent difference in Dw between the 3D localizers and the conventional CT reconstructions was 0.6 ± 4.7%; the 3D localizers showed excellent agreement in radiologic path measurements; and gamma analysis of the radiation dose distributions showed passing gamma indices for 97.3%, 97.3%, and 98.2% of voxels in the chest, abdomen, and pelvis, respectively. These results demonstrate that the developed deep learning method can successfully reconstruct volumetric scout CT image volumes.
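The water-equivalent diameter metric used in the evaluation has a standard definition (AAPM Report 220): each pixel's CT number is converted to a water-equivalent area contribution, and Dw is the diameter of a water circle with the same total area. The paper does not publish its implementation; the following is a minimal sketch of that standard formula, with the function name and arguments chosen here for illustration.

```python
import numpy as np

def water_equivalent_diameter(ct_slice_hu, pixel_area_mm2):
    """Water-equivalent diameter (Dw, in mm) of one axial CT slice.

    ct_slice_hu    : 2D array of CT numbers (HU); air outside the patient
                     should be at (or masked to) -1000 HU so it contributes 0.
    pixel_area_mm2 : area of one pixel in mm^2.
    """
    # Each pixel contributes (HU/1000 + 1) times its area to the
    # water-equivalent area A_w (water -> 1, air -> 0).
    a_w = np.sum(ct_slice_hu / 1000.0 + 1.0) * pixel_area_mm2
    # Dw is the diameter of the circle of water with area A_w.
    return 2.0 * np.sqrt(a_w / np.pi)
```

For a uniform water cylinder (0 HU) of radius r surrounded by air, this returns approximately 2r, which is a quick sanity check when comparing Dw from a 3D localizer against Dw from the conventional reconstruction.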
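The gamma pass rates quoted above follow the standard gamma criterion (Low et al.), which combines a dose-difference tolerance with a distance-to-agreement (DTA) tolerance; a voxel passes if its gamma index is at most 1. The paper evaluates 3D dose volumes and does not state its tolerances or implementation; the sketch below is a simplified 1D global-gamma illustration of the criterion only, with all names and default tolerances chosen here as assumptions.

```python
import numpy as np

def gamma_index_1d(ref_dose, eval_dose, positions_mm, dta_mm=3.0, dd_frac=0.03):
    """Simplified 1D global gamma analysis.

    For each evaluated point, gamma is the minimum over all reference points
    of sqrt((dr/DTA)^2 + (dD/(dd_frac * Dmax))^2); a point passes if gamma <= 1.
    Returns the per-point gamma values and the fraction of passing points.
    """
    d_max = ref_dose.max()  # global normalization of the dose difference
    gammas = np.empty(len(eval_dose), dtype=float)
    for i, (x_e, d_e) in enumerate(zip(positions_mm, eval_dose)):
        dr = (positions_mm - x_e) / dta_mm            # spatial term
        dd = (ref_dose - d_e) / (dd_frac * d_max)     # dose term
        gammas[i] = np.sqrt(dr ** 2 + dd ** 2).min()
    return gammas, float(np.mean(gammas <= 1.0))
```

Identical reference and evaluated distributions give gamma 0 everywhere and a 100% pass rate; the 97–98% pass rates reported above mean nearly all voxels of the dose computed on the 3D localizer agreed with the MDCT-based dose within the chosen tolerances.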