Iteration-Dependent Networks and Losses for Unrolled Deep Learned FBSEM PET Image Reconstruction

Guillaume Corda-D’Incan, J. Schnabel, A. Reader
Published in: 2020 IEEE Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC), pages 1-4, 31 October 2020.
DOI: 10.1109/NSS/MIC42677.2020.9507780
Citations: 1

Abstract

We present an enhanced version of FBSEM-Net, a deep learned regularised model-based image reconstruction algorithm. FBSEM-Net unrolls the maximum a posteriori expectation-maximisation algorithm and replaces the regularisation step with a residual convolutional neural network. Both the gradient of the prior and the regularisation strength are learnt by the network from training data. Nonetheless, the original implementation has some shortcomings, which we address in this work to obtain a more practical method. Specifically, this implementation includes two theoretical improvements: i) iteration-dependent networks are used, allowing adaptation to the varying noise levels encountered as the number of iterations grows; ii) iteration-dependent targets are used, so that the deep learnt regulariser remains a pure denoising step without artificially accelerating the algorithm. Furthermore, we present a new sequential training method for fully unrolled deep networks, in which the iterative reconstruction is split into modules and the network is trained on each module separately, matching the total number of iterations used to reconstruct the targets. Results on 2D simulated test data show that FBSEM-Net with iteration-dependent networks outperforms the original version. Additionally, we found that iteration-dependent targets not only reduce the variance across different training runs of the network, offering greater stability, but also make it possible to use fewer iterations at test time than were used for training. Ultimately, we demonstrate that sequential training successfully addresses the potential memory issues arising during the training of unrolled networks, without notably impacting performance compared to conventional training.
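To make the unrolling described above concrete, the following is a minimal NumPy sketch of the overall loop: each unrolled iteration applies a standard MLEM update and then an iteration-dependent regularisation module, and the two images are fused into the next estimate. This is an illustrative sketch only: the neighbour-averaging "denoiser", the per-iteration strength schedule, and the simple convex fusion weight `rho` are stand-ins for the paper's trained residual CNNs, learnt regularisation strength, and actual fusion step, none of which are specified here.

```python
import numpy as np

def mlem_update(x, A, y, sens):
    """One standard MLEM multiplicative update for a Poisson model y ~ Ax."""
    proj = A @ x
    ratio = np.where(proj > 0, y / proj, 0.0)  # guard against empty bins
    return x / sens * (A.T @ ratio)

def smooth_denoiser_factory(strength):
    """Stand-in for an iteration-dependent residual CNN module.

    Returns a simple neighbour-averaging denoiser whose blending
    'strength' plays the role of a learnt regularisation weight.
    """
    def denoiser(x):
        padded = np.pad(x, 1, mode="edge")
        avg = (padded[:-2] + padded[2:]) / 2.0
        return (1.0 - strength) * x + strength * avg
    return denoiser

def unrolled_fbsem_sketch(A, y, n_iters, denoisers, rho=0.5):
    """Unrolled reconstruction: EM update, then the k-th denoising module.

    `denoisers` holds one module per unrolled iteration, mirroring the
    iteration-dependent networks of the paper; `rho` is a stand-in
    fusion weight, not the paper's actual fusion rule.
    """
    sens = A.T @ np.ones_like(y)   # sensitivity image
    x = np.ones(A.shape[1])        # uniform initial estimate
    for k in range(n_iters):
        x_em = mlem_update(x, A, y, sens)
        x_reg = denoisers[k](x_em)          # iteration-dependent module
        x = (1.0 - rho) * x_em + rho * x_reg  # simple convex fusion (stand-in)
    return x
```

Keeping one module per iteration, as in the list `denoisers`, is what allows each stage to adapt to the noise level at that point of the reconstruction; sequential training would fit these modules one at a time rather than backpropagating through the whole loop at once.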