Assessing the feasibility of deep learning-based attenuation correction using photon emission data in 18F-FDG images for dedicated head and neck PET scanners

IF 1.6 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING
Mahsa Shahrbabaki Mofrad, Ali Ghafari, Amin Amiri Tehranizadeh, Farahnaz Aghahosseini, Mohammad Reza Ay, Saeed Farzenefar, Peyman Sheikhzadeh
DOI: 10.1088/2057-1976/ae08ba
Journal: Biomedical Physics & Engineering Express · Journal Article · Published 2025-09-30
Citations: 0

Abstract


This study aimed to evaluate the use of deep learning techniques to produce measured attenuation-corrected (MAC) images from non-attenuation-corrected (NAC) 18F-FDG PET images, focusing on head and neck imaging. A Residual Network (ResNet) was trained on 2D head and neck PET images from 114 patients (12,068 slices) without pathology or artifacts. Images from 21 patients without pathology or artifacts were used for validation during training, images from 24 such patients were used for testing, and images from 12 patients with pathologies were used for independent testing. Prediction accuracy was assessed using metrics such as RMSE, SSIM, PSNR, and MSE. The impact of unseen pathologies on the network was evaluated by measuring contrast and SNR in tumoral/hot regions of both reference and predicted images; statistical significance between the contrast and SNR of reference and predicted images was assessed using a paired-sample t-test. Two nuclear medicine physicians evaluated the predicted head and neck MAC images and found them visually similar to the reference images. In the normal test group, PSNR, SSIM, RMSE, and MSE were 44.02 ± 1.77, 0.99 ± 0.002, 0.007 ± 0.0019, and 0.000053 ± 0.000030, respectively; in the pathological test group, the values were 43.14 ± 2.10, 0.99 ± 0.005, 0.0078 ± 0.0015, and 0.000063 ± 0.000026. No significant differences in SNR and contrast were found between reference and test images without pathology (p > 0.05), but significant differences were found in pathological images (p < 0.05). The deep learning network demonstrated the ability to directly generate head and neck MAC images that closely resembled the reference images. With additional training data, the model could potentially be used in dedicated head and neck PET scanners without requiring computed tomography (CT) for attenuation correction.
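To illustrate the image-quality metrics named in the abstract, here is a minimal NumPy sketch of MSE, RMSE, and PSNR for a reference/predicted image pair. The paper does not report its exact normalization, data range, or SSIM implementation, so the function name, the `data_range` default, and the omission of SSIM here are assumptions for illustration only.

```python
import numpy as np

def image_metrics(reference, predicted, data_range=None):
    """Compute MSE, RMSE, and PSNR between two same-shaped images.

    A hedged sketch of the evaluation metrics named in the abstract.
    data_range defaults to the reference image's intensity span; the
    study's actual choice is not reported.
    """
    ref = np.asarray(reference, dtype=np.float64)
    pred = np.asarray(predicted, dtype=np.float64)
    if ref.shape != pred.shape:
        raise ValueError("reference and predicted must have the same shape")
    if data_range is None:
        data_range = float(ref.max() - ref.min())
    mse = float(np.mean((ref - pred) ** 2))
    rmse = float(np.sqrt(mse))
    # PSNR in dB; infinite when the images are identical.
    psnr = 10.0 * np.log10(data_range**2 / mse) if mse > 0 else float("inf")
    return mse, rmse, psnr
```

For example, a uniform offset of 0.1 on images normalized to [0, 1] gives MSE = 0.01, RMSE = 0.1, and PSNR = 20 dB.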

Source journal

Biomedical Physics & Engineering Express (Radiology, Nuclear Medicine & Medical Imaging)
CiteScore: 2.80 · Self-citation rate: 0.00% · Annual publications: 153

About the journal: BPEX is an inclusive, international, multidisciplinary journal devoted to publishing new research on any application of physics and/or engineering in medicine and/or biology. Characterized by broad geographical coverage and a fast-track peer-review process, relevant topics include all aspects of biophysics, medical physics and biomedical engineering. Papers that are almost entirely clinical or biological in their focus are not suitable. The journal has an emphasis on publishing interdisciplinary work and bringing research fields together, encompassing experimental, theoretical and computational work.