Noise2Void Denoising of PET Images

Tzu-An Song, J. Dutta
{"title":"Noise2Void PET图像去噪","authors":"Tzu-An Song, J. Dutta","doi":"10.1109/NSS/MIC42677.2020.9507875","DOIUrl":null,"url":null,"abstract":"Qualitative and quantitative interpretation of PET images is often a challenging task due to high levels of noise in the images. While deep learning architectures based on convolutional neural networks have produced unprecedented accuracy at denoising PET images, most existing approaches require large training datasets with corrupt and clean image pairs, which are often unavailable for many clinical applications. The Noise2Noise technique obviates the need for clean target images but instead introduces the requirement for two noise realizations for each corrupt input. In this paper, we present a denoising technique for PET based on the Noise2Void paradigm, which requires only a single noisy image for training thus ensuring wider applicability and adoptability. During the training phase, a single noisy PET image serves as both the input and the target. The method was validated on simulation data based on the BrainWeb digital phantom. Our results show that it generates comparable performance at the training and validation stages for varying noise levels. Furthermore, its performance remains robust even when the validation inputs have different count levels than the training inputs.","PeriodicalId":6760,"journal":{"name":"2020 IEEE Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC)","volume":"1 1","pages":"1-2"},"PeriodicalIF":0.0000,"publicationDate":"2020-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Noise2Void Denoising of PET Images\",\"authors\":\"Tzu-An Song, J. Dutta\",\"doi\":\"10.1109/NSS/MIC42677.2020.9507875\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Qualitative and quantitative interpretation of PET images is often a challenging task due to high levels of noise in the images. While deep learning architectures based on convolutional neural networks have produced unprecedented accuracy at denoising PET images, most existing approaches require large training datasets with corrupt and clean image pairs, which are often unavailable for many clinical applications. The Noise2Noise technique obviates the need for clean target images but instead introduces the requirement for two noise realizations for each corrupt input. In this paper, we present a denoising technique for PET based on the Noise2Void paradigm, which requires only a single noisy image for training thus ensuring wider applicability and adoptability. During the training phase, a single noisy PET image serves as both the input and the target. The method was validated on simulation data based on the BrainWeb digital phantom. Our results show that it generates comparable performance at the training and validation stages for varying noise levels. 
Furthermore, its performance remains robust even when the validation inputs have different count levels than the training inputs.\",\"PeriodicalId\":6760,\"journal\":{\"name\":\"2020 IEEE Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC)\",\"volume\":\"1 1\",\"pages\":\"1-2\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-10-31\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 IEEE Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/NSS/MIC42677.2020.9507875\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/NSS/MIC42677.2020.9507875","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

Qualitative and quantitative interpretation of PET images is often a challenging task due to high levels of noise in the images. While deep learning architectures based on convolutional neural networks have produced unprecedented accuracy at denoising PET images, most existing approaches require large training datasets with corrupt and clean image pairs, which are often unavailable for many clinical applications. The Noise2Noise technique obviates the need for clean target images but instead introduces the requirement for two noise realizations for each corrupt input. In this paper, we present a denoising technique for PET based on the Noise2Void paradigm, which requires only a single noisy image for training thus ensuring wider applicability and adoptability. During the training phase, a single noisy PET image serves as both the input and the target. The method was validated on simulation data based on the BrainWeb digital phantom. Our results show that it generates comparable performance at the training and validation stages for varying noise levels. Furthermore, its performance remains robust even when the validation inputs have different count levels than the training inputs.
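The abstract describes the training setup only at a high level: a single noisy PET image acts as both input and target, and a blind-spot scheme prevents the network from learning the identity mapping. The sketch below illustrates that general Noise2Void idea in PyTorch; the small convolutional network, the masking parameters, and the training loop are illustrative assumptions for 2D slices, not the configuration used by the authors.

```python
# Minimal Noise2Void-style training sketch (assumed setup, not the paper's exact method).
import torch
import torch.nn as nn

class SmallDenoiser(nn.Module):
    """Toy convolutional network standing in for the paper's denoiser."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def blind_spot_batch(noisy, n_points=64, radius=2):
    """Replace a random subset of pixels with nearby values (the 'blind spots').

    Returns the masked input and a boolean mask of the blind-spot positions.
    Because the loss is evaluated only at those positions, the network cannot
    learn the identity mapping even though input and target are the same image.
    """
    b, c, h, w = noisy.shape
    masked = noisy.clone()
    mask = torch.zeros_like(noisy, dtype=torch.bool)
    for i in range(b):
        ys = torch.randint(0, h, (n_points,))
        xs = torch.randint(0, w, (n_points,))
        dy = torch.randint(-radius, radius + 1, (n_points,))
        dx = torch.randint(-radius, radius + 1, (n_points,))
        # Avoid a zero offset so a blind-spot pixel never keeps its own value.
        both_zero = (dy == 0) & (dx == 0)
        dx[both_zero] = 1
        ny = (ys + dy).clamp(0, h - 1)
        nx = (xs + dx).clamp(0, w - 1)
        masked[i, 0, ys, xs] = noisy[i, 0, ny, nx]
        mask[i, 0, ys, xs] = True
    return masked, mask

model = SmallDenoiser()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def training_step(noisy_batch):
    """One Noise2Void step: the noisy image is both input and target."""
    masked_input, mask = blind_spot_batch(noisy_batch)
    prediction = model(masked_input)
    loss = ((prediction - noisy_batch)[mask] ** 2).mean()  # MSE at blind spots only
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Illustrative usage: a random tensor stands in for a batch of noisy PET slices.
fake_batch = torch.rand(4, 1, 128, 128)
print(training_step(fake_batch))
```

The key design point, as stated in the abstract, is that no clean target and no second noise realization are needed: masking pixels and scoring the loss only at the masked positions is what lets a single noisy image supervise itself.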