Deep Learning-based Low-dose Tomography Reconstruction with Hybrid-dose Measurements

Ziling Wu, Tekin Bicer, Zhengchun Liu, V. De Andrade, Yunhui Zhu, Ian T Foster

Journal: Foundations and Trends in Machine Learning (IF 65.3, Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE)
DOI: 10.1109/MLHPCAI4S51975.2020.00017
Published: 2020-09-28, pages 88-95
Citations: 5

Abstract

Synchrotron-based X-ray computed tomography is widely used for investigating the inner structures of specimens at high spatial resolution. However, potential beam damage to samples often limits the X-ray exposure during tomography experiments. Proposed strategies for eliminating beam damage also decrease reconstruction quality. Here we present a deep learning-based method to enhance low-dose tomography reconstruction via a hybrid-dose acquisition strategy composed of extremely sparse-view normal-dose projections and full-view low-dose projections. Corresponding image pairs are extracted from low-/normal-dose projections to train a deep convolutional neural network, which is then applied to enhance the full-view noisy low-dose projections. Evaluation on two experimental datasets under different hybrid-dose acquisition conditions shows significantly improved structural details and reduced noise levels compared to uniformly distributed acquisitions with the same total dose. The resulting reconstructions also preserve more structural information than reconstructions of uniform acquisitions produced with traditional analytical and regularization-based iterative methods. Our performance comparisons show that our implementation, HDrec, can perform denoising of real-world experimental data 410x faster than the state-of-the-art X-learn method while providing better quality. This framework can be applied to other tomographic or scanning-based X-ray imaging techniques for enhanced analysis of dose-sensitive samples and has great potential for studying fast dynamic processes.
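To make the described workflow concrete, the sketch below is a minimal, hypothetical illustration in PyTorch of the general idea: fit a small denoising CNN on paired low-/normal-dose projections from sparse views, then apply it to all full-view low-dose projections before reconstruction. The network architecture, data shapes, noise model, and training settings are placeholders and do not reproduce the authors' HDrec implementation.

```python
# Illustrative sketch only (not the authors' HDrec code).
import torch
import torch.nn as nn

# Stand-in data: a few sparse view angles acquired at normal dose,
# each paired with a low-dose (noisy) projection of the same view.
n_pairs, h, w = 16, 64, 64
normal_dose = torch.rand(n_pairs, 1, h, w)                    # "clean" targets
low_dose = normal_dose + 0.1 * torch.randn_like(normal_dose)  # noisy inputs

# Small convolutional denoiser (the paper trains a deeper CNN).
model = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Fit the mapping from low-dose to normal-dose projections.
for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(low_dose), normal_dose)
    loss.backward()
    optimizer.step()

# Enhance every full-view low-dose projection, then pass the result to a
# standard tomographic reconstruction routine (e.g. filtered back-projection).
full_view_low_dose = torch.rand(180, 1, h, w)
with torch.no_grad():
    enhanced_projections = model(full_view_low_dose)
```

The key design point is that the normal-dose projections serve double duty: they are part of the measurement budget and they supply the training targets, so no external ground-truth dataset is required.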
Source journal

Foundations and Trends in Machine Learning (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE)
CiteScore: 108.50
Self-citation rate: 0.00%
Articles published: 5
Journal description: Each issue of Foundations and Trends® in Machine Learning comprises a monograph of at least 50 pages written by research leaders in the field. We aim to publish monographs that provide an in-depth, self-contained treatment of topics where there have been significant new developments. Typically, this means that the monographs we publish will contain a significant level of mathematical detail (to describe the central methods and/or theory for the topic at hand), and will not eschew these details by simply pointing to existing references. Literature surveys and original research papers do not fall within these aims.