[A dual-domain cone beam computed tomography sparse-view reconstruction method based on generative projection interpolation].

Q3 Medicine
J Liao, S Peng, Y Wang, Z Bian

Abstract


Objective: To propose a dual-domain CBCT reconstruction framework (DualSFR-Net) based on generative projection interpolation to reduce artifacts in sparse-view cone beam computed tomography (CBCT) reconstruction.

Methods: The proposed DualSFR-Net consists of a generative projection interpolation module, a domain transformation module, and an image restoration module. The generative projection interpolation module comprises a sparse projection interpolation network (SPINet), built on a generative adversarial network, and a full-view projection restoration network (FPRNet). SPINet interpolates the sparse-view projection data to synthesize full-view projection data, which FPRNet then further restores. The domain transformation module introduces FDK reconstruction and forward projection operators to support both the forward pass and gradient backpropagation between the projection and image domains. The image restoration module contains an image restoration network, FIRNet, which fine-tunes the domain-transformed images to eliminate residual artifacts and noise.
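The projection interpolation stage can be illustrated with a far simpler classical baseline: linear interpolation between adjacent measured views. The sketch below is purely illustrative (a toy pure-Python sinogram; the paper's SPINet is a generative adversarial network, which is not shown here), but it makes the 2-fold sparse-to-full view synthesis concrete.

```python
# Toy stand-in for the projection-interpolation step. The paper uses a
# generative network (SPINet); here each missing view of a 2-fold
# sparse circular scan is synthesized as the element-wise average of
# its two angular neighbours. All names below are illustrative.

def interpolate_sparse_views(sparse_sino):
    """sparse_sino: list of views (each a list of detector samples),
    acquired at every other angle of a full circular scan.
    Returns a full-view sinogram with twice as many views."""
    full = []
    n = len(sparse_sino)
    for i, view in enumerate(sparse_sino):
        full.append(view)                   # measured view
        nxt = sparse_sino[(i + 1) % n]      # wrap around the circle
        # synthesized view between angular neighbours
        full.append([(a + b) / 2 for a, b in zip(view, nxt)])
    return full

sparse = [[0.0, 1.0], [2.0, 3.0], [4.0, 5.0], [6.0, 7.0]]
full = interpolate_sparse_views(sparse)
print(len(full))   # → 8 views
print(full[1])     # → [1.0, 2.0], the average of [0.0, 1.0] and [2.0, 3.0]
```

In DualSFR-Net this hand-crafted average is replaced by a learned generator, and the synthesized full-view sinogram is then refined by FPRNet before domain transformation.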

Results: Validation experiments on a dental CT dataset demonstrated that DualSFR-Net was able to reconstruct high-quality CBCT images under sparse-view sampling protocols. Quantitatively, compared with the current state-of-the-art methods, DualSFR-Net improved PSNR by 0.6615 and 0.7658 and SSIM by 0.0053 and 0.0134 under the 2-fold and 4-fold sparse protocols, respectively.
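The reported gains are measured in PSNR and SSIM. For reference, PSNR (conventionally expressed in dB) can be computed as below; the function and the tiny test images are illustrative and not taken from the paper, and SSIM is omitted for brevity.

```python
import math

def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio in dB between two equally sized
    images given as flat lists of floats in [0, peak]."""
    mse = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")   # identical images
    return 10.0 * math.log10(peak ** 2 / mse)

ref  = [0.0, 0.5, 1.0, 0.5]
test = [0.1, 0.5, 0.9, 0.5]
print(round(psnr(ref, test), 2))   # → 23.01
```

Because PSNR is logarithmic, a gain of roughly 0.66 to 0.77 over a strong baseline corresponds to a measurable reduction in mean squared reconstruction error.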

Conclusion: The proposed generative projection interpolation-based dual-domain CBCT sparse-view reconstruction method effectively reduces streak artifacts, improves image quality, and enables efficient joint training of the dual-domain imaging networks for sparse-view CBCT reconstruction.

Source journal: Journal of Southern Medical University (南方医科大学学报), Medicine (all). CiteScore: 1.50; self-citation rate: 0.00%; articles per year: 208.