Enhancing Gamma Knife Cone-beam Computed Tomography Image Quality Using Pix2pix Generative Adversarial Networks: A Deep Learning Approach

IF 0.7 | Q4 | RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING
Journal of Medical Physics | Pub Date: 2025-01-01 | Epub Date: 2025-03-24 | DOI: 10.4103/jmp.jmp_140_24
Prabhakar Ramachandran, Darcie Anderson, Zachery Colbert, Daniel Arrington, Michael Huo, Mark B Pinkham, Matthew Foote, Andrew Fielding
{"title":"使用Pix2pix生成对抗网络增强伽玛刀锥束计算机断层扫描图像质量:一种深度学习方法。","authors":"Prabhakar Ramachandran, Darcie Anderson, Zachery Colbert, Daniel Arrington, Michael Huo, Mark B Pinkham, Matthew Foote, Andrew Fielding","doi":"10.4103/jmp.jmp_140_24","DOIUrl":null,"url":null,"abstract":"<p><strong>Aims: </strong>The study aims to develop a modified Pix2Pix convolutional neural network framework to enhance the quality of cone-beam computed tomography (CBCT) images. It also seeks to reduce the Hounsfield unit (HU) variations, making CBCT images closely resemble the internal anatomy as depicted in computed tomography (CT) images.</p><p><strong>Materials and methods: </strong>We used datasets from 50 patients who underwent Gamma Knife treatment to develop a deep learning model that translates CBCT images into high-quality synthetic CT (sCT) images. Paired CBCT and ground truth CT images from 40 patients were used for training and 10 for testing on 7484 slices of 512 × 512 pixels with the Pix2Pix model. The sCT images were evaluated against ground truth CT scans using image quality assessment metrics, including the structural similarity index (SSIM), mean absolute error (MAE), root mean square error (RMSE), peak signal-to-noise ratio (PSNR), normalized cross-correlation, and dice similarity coefficient.</p><p><strong>Results: </strong>The results demonstrate significant improvements in image quality when comparing sCT images to CBCT, with SSIM increasing from 0.85 ± 0.05 to 0.95 ± 0.03 and MAE dropping from 77.37 ± 20.05 to 18.81 ± 7.22 (<i>p</i> < 0.0001 for both). PSNR and RMSE also improved, from 26.50 ± 1.72 to 30.76 ± 2.23 and 228.52 ± 53.76 to 82.30 ± 23.81, respectively (<i>p</i> < 0.0001).</p><p><strong>Conclusion: </strong>The sCT images show reduced noise and artifacts, closely matching CT in HU values, and demonstrate a high degree of similarity to CT images, highlighting the potential of deep learning to significantly improve CBCT image quality for radiosurgery applications.</p>","PeriodicalId":51719,"journal":{"name":"Journal of Medical Physics","volume":"50 1","pages":"30-37"},"PeriodicalIF":0.7000,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12005652/pdf/","citationCount":"0","resultStr":"{\"title\":\"Enhancing Gamma Knife Cone-beam Computed Tomography Image Quality Using Pix2pix Generative Adversarial Networks: A Deep Learning Approach.\",\"authors\":\"Prabhakar Ramachandran, Darcie Anderson, Zachery Colbert, Daniel Arrington, Michael Huo, Mark B Pinkham, Matthew Foote, Andrew Fielding\",\"doi\":\"10.4103/jmp.jmp_140_24\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Aims: </strong>The study aims to develop a modified Pix2Pix convolutional neural network framework to enhance the quality of cone-beam computed tomography (CBCT) images. It also seeks to reduce the Hounsfield unit (HU) variations, making CBCT images closely resemble the internal anatomy as depicted in computed tomography (CT) images.</p><p><strong>Materials and methods: </strong>We used datasets from 50 patients who underwent Gamma Knife treatment to develop a deep learning model that translates CBCT images into high-quality synthetic CT (sCT) images. Paired CBCT and ground truth CT images from 40 patients were used for training and 10 for testing on 7484 slices of 512 × 512 pixels with the Pix2Pix model. 
The sCT images were evaluated against ground truth CT scans using image quality assessment metrics, including the structural similarity index (SSIM), mean absolute error (MAE), root mean square error (RMSE), peak signal-to-noise ratio (PSNR), normalized cross-correlation, and dice similarity coefficient.</p><p><strong>Results: </strong>The results demonstrate significant improvements in image quality when comparing sCT images to CBCT, with SSIM increasing from 0.85 ± 0.05 to 0.95 ± 0.03 and MAE dropping from 77.37 ± 20.05 to 18.81 ± 7.22 (<i>p</i> < 0.0001 for both). PSNR and RMSE also improved, from 26.50 ± 1.72 to 30.76 ± 2.23 and 228.52 ± 53.76 to 82.30 ± 23.81, respectively (<i>p</i> < 0.0001).</p><p><strong>Conclusion: </strong>The sCT images show reduced noise and artifacts, closely matching CT in HU values, and demonstrate a high degree of similarity to CT images, highlighting the potential of deep learning to significantly improve CBCT image quality for radiosurgery applications.</p>\",\"PeriodicalId\":51719,\"journal\":{\"name\":\"Journal of Medical Physics\",\"volume\":\"50 1\",\"pages\":\"30-37\"},\"PeriodicalIF\":0.7000,\"publicationDate\":\"2025-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12005652/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Medical Physics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.4103/jmp.jmp_140_24\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2025/3/24 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q4\",\"JCRName\":\"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Medical Physics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.4103/jmp.jmp_140_24","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/3/24 0:00:00","PubModel":"Epub","JCR":"Q4","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Citations: 0

Abstract


Aims: The study aims to develop a modified Pix2Pix convolutional neural network framework to enhance the quality of cone-beam computed tomography (CBCT) images. It also seeks to reduce the Hounsfield unit (HU) variations, making CBCT images closely resemble the internal anatomy as depicted in computed tomography (CT) images.

Materials and methods: We used datasets from 50 patients who underwent Gamma Knife treatment to develop a deep learning model that translates CBCT images into high-quality synthetic CT (sCT) images. Paired CBCT and ground truth CT images from 40 patients were used for training and 10 for testing on 7484 slices of 512 × 512 pixels with the Pix2Pix model. The sCT images were evaluated against ground truth CT scans using image quality assessment metrics, including the structural similarity index (SSIM), mean absolute error (MAE), root mean square error (RMSE), peak signal-to-noise ratio (PSNR), normalized cross-correlation, and dice similarity coefficient.
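
The abstract gives no implementation detail, so the following is only a minimal, hedged sketch of one Pix2Pix training step on paired CBCT/CT slices, not the authors' code. The generator `G`, discriminator `D`, optimizers, the `lambda_l1 = 100` weight, and the (B, 1, 512, 512) tensors scaled to [-1, 1] are illustrative assumptions that follow the original Pix2Pix formulation (conditional PatchGAN adversarial loss plus an L1 fidelity term).

```python
# Hedged sketch of a Pix2Pix training step for CBCT -> sCT translation.
# G, D, opt_G, opt_D and the data loading are assumed to exist elsewhere.
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()   # adversarial loss on PatchGAN logits
l1 = nn.L1Loss()               # pixel-wise fidelity term used by Pix2Pix
lambda_l1 = 100.0              # L1 weighting from the original Pix2Pix paper (assumption here)

def train_step(G, D, opt_G, opt_D, cbct, ct):
    """cbct, ct: (B, 1, 512, 512) tensors with intensities scaled to [-1, 1]."""
    # Discriminator update: real (CBCT, CT) pairs vs. fake (CBCT, sCT) pairs.
    sct = G(cbct)
    d_real = D(torch.cat([cbct, ct], dim=1))
    d_fake = D(torch.cat([cbct, sct.detach()], dim=1))
    loss_D = 0.5 * (bce(d_real, torch.ones_like(d_real)) +
                    bce(d_fake, torch.zeros_like(d_fake)))
    opt_D.zero_grad()
    loss_D.backward()
    opt_D.step()

    # Generator update: fool the discriminator while staying close to the real CT.
    d_fake = D(torch.cat([cbct, sct], dim=1))
    loss_G = bce(d_fake, torch.ones_like(d_fake)) + lambda_l1 * l1(sct, ct)
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()
    return loss_D.item(), loss_G.item()
```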

Results: The results demonstrate significant improvements in image quality when comparing sCT images to CBCT, with SSIM increasing from 0.85 ± 0.05 to 0.95 ± 0.03 and MAE dropping from 77.37 ± 20.05 to 18.81 ± 7.22 (p < 0.0001 for both). PSNR and RMSE also improved, from 26.50 ± 1.72 to 30.76 ± 2.23 and 228.52 ± 53.76 to 82.30 ± 23.81, respectively (p < 0.0001).
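
To interpret the reported numbers (SSIM is dimensionless, MAE and RMSE are in HU, PSNR is in dB), the standard definitions over N paired voxels, with R denoting the image dynamic range (not stated in the abstract), are:

\[
\mathrm{MAE}=\frac{1}{N}\sum_{i=1}^{N}\left|\mathrm{sCT}_i-\mathrm{CT}_i\right|,\qquad
\mathrm{RMSE}=\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(\mathrm{sCT}_i-\mathrm{CT}_i\right)^2},\qquad
\mathrm{PSNR}=20\log_{10}\frac{R}{\mathrm{RMSE}}
\]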

Conclusion: The sCT images show reduced noise and artifacts, closely matching CT in HU values, and demonstrate a high degree of similarity to CT images, highlighting the potential of deep learning to significantly improve CBCT image quality for radiosurgery applications.

Source journal
Journal of Medical Physics (RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING)
CiteScore: 1.10
Self-citation rate: 11.10%
Articles published: 55
Review time: 30 weeks
About the journal: JOURNAL OF MEDICAL PHYSICS is the official journal of the Association of Medical Physicists of India (AMPI). The association has been bringing out a quarterly publication since 1976. Until the end of 1993 it was known as the Medical Physics Bulletin, which then became the Journal of Medical Physics. The main objective of the journal is to serve as a vehicle of communication highlighting all aspects of the practice of medical radiation physics. The areas covered include all aspects of the application of radiation physics to biological sciences, radiotherapy, radiodiagnosis, nuclear medicine, dosimetry and radiation protection. Papers dealing with aspects of physics related to cancer therapy and radiobiology also fall within the scope of the journal.