Noise2Void Denoising of PET Images
Tzu-An Song, J. Dutta
2020 IEEE Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC), Oct. 31, 2020, pp. 1-2
DOI: 10.1109/NSS/MIC42677.2020.9507875
Citations: 1
Abstract
Qualitative and quantitative interpretation of PET images is often a challenging task due to high levels of noise in the images. While deep learning architectures based on convolutional neural networks have produced unprecedented accuracy in denoising PET images, most existing approaches require large training datasets with corrupt and clean image pairs, which are often unavailable for many clinical applications. The Noise2Noise technique obviates the need for clean target images but instead introduces the requirement for two noise realizations of each corrupt input. In this paper, we present a denoising technique for PET based on the Noise2Void paradigm, which requires only a single noisy image for training, thereby ensuring wider applicability and adoptability. During the training phase, a single noisy PET image serves as both the input and the target. The method was validated on simulation data based on the BrainWeb digital phantom. Our results show that it achieves comparable performance at the training and validation stages for varying noise levels. Furthermore, its performance remains robust even when the validation inputs have different count levels from the training inputs.
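The key idea behind Noise2Void is the blind-spot scheme: a few pixels of the noisy image are "masked" (replaced by values from random neighbors) to form the network input, the unmodified noisy image serves as the target, and the loss is evaluated only at the masked locations, so the network cannot simply learn the identity mapping. A minimal sketch of building one such training pair is shown below; the function name, parameters, and neighbor-sampling details are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def n2v_training_pair(noisy, n_masked=64, radius=2, rng=None):
    """Build one Noise2Void-style training pair from a single noisy image.

    Illustrative sketch (names/parameters are assumptions, not the paper's
    code): the input is the noisy image with blind-spot pixels replaced by
    randomly chosen neighbors; the target is the original noisy image; the
    loss is computed only where `mask` is True.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = noisy.shape
    inp = noisy.copy()
    # Pick random blind-spot coordinates (duplicates are possible).
    ys = rng.integers(0, h, size=n_masked)
    xs = rng.integers(0, w, size=n_masked)
    mask = np.zeros(noisy.shape, dtype=bool)
    mask[ys, xs] = True
    # Replace each blind-spot pixel with a random nearby pixel's value,
    # so the network cannot see the pixel it must predict.
    for y, x in zip(ys, xs):
        while True:
            dy, dx = rng.integers(-radius, radius + 1, size=2)
            ny = int(np.clip(y + dy, 0, h - 1))
            nx = int(np.clip(x + dx, 0, w - 1))
            if (ny, nx) != (int(y), int(x)):
                break
        inp[y, x] = noisy[ny, nx]
    # Training then minimizes ((net(inp) - target)**2 * mask).sum() / mask.sum()
    return inp, noisy, mask
```

Because the loss is restricted to the masked pixels, whose true noisy values are hidden from the network, a single noisy PET image can serve as both input and target, which is exactly what removes the need for clean images or paired noise realizations.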