Deep-learning Based Prediction of Virtual Non-contrast CT Images

Roman Jakubícek, Tomáš Vičar, Jiří Chmelík, P. Ourednicek, J. Jan
DOI: 10.1145/3459104.3460237
Published in: 2021 International Symposium on Electrical, Electronics and Information Engineering
Publication date: 2021-02-19
Citations: 1

Abstract

In this paper, we present a deep-learning method for predicting a non-contrast CT image from a single contrast-enhanced image. To train this image-to-image translation task, virtual contrast and virtual non-contrast (VNC) images were created from spectral CT data with the Philips IntelliSpace Portal (ISP) software. The virtual versions of conventional CT (cCT) images, paired with the VNC images, allow supervised image-to-image translation models to be trained. Two training schemes were tested for a Convolutional Neural Network (CNN) with the U-Net architecture: standard training with an L1/L2 loss, and training via a conditional Generative Adversarial Network (cGAN) with an additional Wasserstein modification (WcGAN). Qualitatively, the proposed method achieves results similar to the original VNC images; quantitatively, however, the trained CNN yields a slightly smaller density reduction in some tissues. A non-contrast image can thus be predicted from a single conventional CT image, without the need for pre- and post-contrast scans or a spectral CT scan.
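The two training schemes named in the abstract differ mainly in the generator's pixel-wise objective. As a minimal sketch of that objective (not the authors' code; a NumPy illustration with toy 2×2 arrays standing in for paired cCT-predicted and VNC-target CT slices), the L1 and L2 losses used in the standard scheme can be written as:

```python
import numpy as np

# Toy stand-ins for a predicted VNC slice and its paired VNC target
# (in the paper these would be full CT slices in Hounsfield units).
pred = np.array([[10.0, 55.0], [30.0, -5.0]])
target = np.array([[12.0, 50.0], [30.0, 0.0]])

def l1_loss(pred, target):
    """Mean absolute error over all pixels (the L1 training loss)."""
    return float(np.mean(np.abs(pred - target)))

def l2_loss(pred, target):
    """Mean squared error over all pixels (the L2 training loss)."""
    return float(np.mean((pred - target) ** 2))

print(l1_loss(pred, target))  # 3.0
print(l2_loss(pred, target))  # 13.5
```

In the cGAN/WcGAN scheme, a pixel loss of this form is typically combined with an adversarial term from the discriminator rather than used alone; the exact weighting is a training hyperparameter not specified in the abstract.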