Generation of synthetic CT-like imaging of the spine from biplanar radiographs: comparison of different deep learning architectures.

Impact Factor 3.0 · CAS Medicine Tier 2 · JCR Q2 (Clinical Neurology)
Massimo Bottini, Olivier Zanier, Raffaele Da Mutten, Maria L Gandia-Gonzalez, Erik Edström, Adrian Elmi-Terander, Luca Regli, Carlo Serra, Victor E Staartjes
{"title":"从双平面x线片生成合成ct样脊柱成像:不同深度学习架构的比较。","authors":"Massimo Bottini, Olivier Zanier, Raffaele Da Mutten, Maria L Gandia-Gonzalez, Erik Edström, Adrian Elmi-Terander, Luca Regli, Carlo Serra, Victor E Staartjes","doi":"10.3171/2025.4.FOCUS25170","DOIUrl":null,"url":null,"abstract":"<p><strong>Objective: </strong>This study compared two deep learning architectures-generative adversarial networks (GANs) and convolutional neural networks combined with implicit neural representations (CNN-INRs)-for generating synthetic CT (sCT) images of the spine from biplanar radiographs. The aim of the study was to identify the most robust and clinically viable approach for this potential intraoperative imaging technique.</p><p><strong>Methods: </strong>A spine CT dataset of 216 training and 54 validation cases was used. Digitally reconstructed radiographs (DRRs) served as 2D inputs for training both models under identical conditions for 170 epochs. Evaluation metrics included the Structural Similarity Index Measure (SSIM), peak signal-to-noise ratio (PSNR), and cosine similarity (CS), complemented by qualitative assessments of anatomical fidelity.</p><p><strong>Results: </strong>The GAN model achieved a mean SSIM of 0.932 ± 0.015, PSNR of 19.85 ± 1.40 dB, and CS of 0.671 ± 0.177. The CNN-INR model demonstrated a mean SSIM of 0.921 ± 0.015, PSNR of 21.96 ± 1.20 dB, and CS of 0.707 ± 0.114. Statistical analysis revealed significant differences for SSIM (p = 0.001) and PSNR (p < 0.001), while CS differences were not statistically significant (p = 0.667). Qualitative evaluations consistently favored the GAN model, which produced more anatomically detailed and visually realistic sCT images.</p><p><strong>Conclusions: </strong>This study demonstrated the feasibility of generating spine sCT images from biplanar radiographs using GAN and CNN-INR models. While neither model achieved clinical-grade outputs, the GAN architecture showed greater potential for generating anatomically accurate and visually realistic images. These findings highlight the promise of sCT image generation from biplanar radiographs as an innovative approach to reducing radiation exposure and improving imaging accessibility, with GANs emerging as the more promising avenue for further research and clinical integration.</p>","PeriodicalId":19187,"journal":{"name":"Neurosurgical focus","volume":"59 1","pages":"E13"},"PeriodicalIF":3.0000,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Generation of synthetic CT-like imaging of the spine from biplanar radiographs: comparison of different deep learning architectures.\",\"authors\":\"Massimo Bottini, Olivier Zanier, Raffaele Da Mutten, Maria L Gandia-Gonzalez, Erik Edström, Adrian Elmi-Terander, Luca Regli, Carlo Serra, Victor E Staartjes\",\"doi\":\"10.3171/2025.4.FOCUS25170\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Objective: </strong>This study compared two deep learning architectures-generative adversarial networks (GANs) and convolutional neural networks combined with implicit neural representations (CNN-INRs)-for generating synthetic CT (sCT) images of the spine from biplanar radiographs. The aim of the study was to identify the most robust and clinically viable approach for this potential intraoperative imaging technique.</p><p><strong>Methods: </strong>A spine CT dataset of 216 training and 54 validation cases was used. 
Digitally reconstructed radiographs (DRRs) served as 2D inputs for training both models under identical conditions for 170 epochs. Evaluation metrics included the Structural Similarity Index Measure (SSIM), peak signal-to-noise ratio (PSNR), and cosine similarity (CS), complemented by qualitative assessments of anatomical fidelity.</p><p><strong>Results: </strong>The GAN model achieved a mean SSIM of 0.932 ± 0.015, PSNR of 19.85 ± 1.40 dB, and CS of 0.671 ± 0.177. The CNN-INR model demonstrated a mean SSIM of 0.921 ± 0.015, PSNR of 21.96 ± 1.20 dB, and CS of 0.707 ± 0.114. Statistical analysis revealed significant differences for SSIM (p = 0.001) and PSNR (p < 0.001), while CS differences were not statistically significant (p = 0.667). Qualitative evaluations consistently favored the GAN model, which produced more anatomically detailed and visually realistic sCT images.</p><p><strong>Conclusions: </strong>This study demonstrated the feasibility of generating spine sCT images from biplanar radiographs using GAN and CNN-INR models. While neither model achieved clinical-grade outputs, the GAN architecture showed greater potential for generating anatomically accurate and visually realistic images. These findings highlight the promise of sCT image generation from biplanar radiographs as an innovative approach to reducing radiation exposure and improving imaging accessibility, with GANs emerging as the more promising avenue for further research and clinical integration.</p>\",\"PeriodicalId\":19187,\"journal\":{\"name\":\"Neurosurgical focus\",\"volume\":\"59 1\",\"pages\":\"E13\"},\"PeriodicalIF\":3.0000,\"publicationDate\":\"2025-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Neurosurgical focus\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.3171/2025.4.FOCUS25170\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"CLINICAL NEUROLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neurosurgical focus","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.3171/2025.4.FOCUS25170","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"CLINICAL NEUROLOGY","Score":null,"Total":0}
Citations: 0

Abstract

Generation of synthetic CT-like imaging of the spine from biplanar radiographs: comparison of different deep learning architectures.

Objective: This study compared two deep learning architectures, generative adversarial networks (GANs) and convolutional neural networks combined with implicit neural representations (CNN-INRs), for generating synthetic CT (sCT) images of the spine from biplanar radiographs. The aim of the study was to identify the most robust and clinically viable approach for this potential intraoperative imaging technique.
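The abstract does not describe the internals of either architecture. As a rough, hypothetical sketch of the general CNN-INR idea (a convolutional encoder summarizes the two radiographs, and a coordinate MLP conditioned on those features predicts the CT intensity at any queried 3D location), the following PyTorch fragment illustrates the shapes involved; every module name, layer size, and hyperparameter here is an assumption, not the authors' implementation.

```python
import torch
import torch.nn as nn

class BiplanarEncoder(nn.Module):
    """Encodes the AP and lateral radiographs into a shared feature vector (assumed design)."""
    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, feat_dim)

    def forward(self, xrays):            # xrays: (B, 2, H, W) -> (B, feat_dim)
        return self.fc(self.conv(xrays).flatten(1))

class ImplicitDecoder(nn.Module):
    """MLP mapping a 3D coordinate plus the image features to a single CT intensity."""
    def __init__(self, feat_dim: int = 128, hidden: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, coords, feat):     # coords: (B, N, 3), feat: (B, feat_dim)
        feat = feat.unsqueeze(1).expand(-1, coords.shape[1], -1)
        return self.mlp(torch.cat([coords, feat], dim=-1)).squeeze(-1)

# Query arbitrary voxel locations of the sCT volume:
enc, dec = BiplanarEncoder(), ImplicitDecoder()
xrays = torch.rand(1, 2, 256, 256)                 # AP + lateral radiographs (placeholder data)
coords = torch.rand(1, 1024, 3) * 2 - 1            # normalized (x, y, z) coordinates in [-1, 1]
intensities = dec(coords, enc(xrays))              # predicted intensities, shape (1, 1024)
```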

Methods: A spine CT dataset of 216 training and 54 validation cases was used. Digitally reconstructed radiographs (DRRs) served as 2D inputs for training both models under identical conditions for 170 epochs. Evaluation metrics included the Structural Similarity Index Measure (SSIM), peak signal-to-noise ratio (PSNR), and cosine similarity (CS), complemented by qualitative assessments of anatomical fidelity.
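For concreteness, a per-case evaluation along these lines could be computed with scikit-image and NumPy as sketched below. The abstract does not specify intensity normalization, SSIM window settings, or the exact cosine-similarity definition, so those details are assumptions here.

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def evaluate_sct(pred: np.ndarray, gt: np.ndarray, data_range: float = 1.0):
    """Compute SSIM, PSNR (dB), and cosine similarity between an sCT and a reference CT.

    pred, gt: 3D volumes with intensities already scaled to [0, data_range] (assumed preprocessing).
    """
    ssim = structural_similarity(gt, pred, data_range=data_range)
    psnr = peak_signal_noise_ratio(gt, pred, data_range=data_range)
    cs = float(np.dot(gt.ravel(), pred.ravel()) /
               (np.linalg.norm(gt.ravel()) * np.linalg.norm(pred.ravel()) + 1e-12))
    return ssim, psnr, cs

# Example on random volumes (stand-ins for a generated and a reference CT):
rng = np.random.default_rng(0)
gt = rng.random((64, 64, 64)).astype(np.float32)
pred = np.clip(gt + 0.05 * rng.standard_normal(gt.shape).astype(np.float32), 0.0, 1.0)
print(evaluate_sct(pred, gt))
```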

Results: The GAN model achieved a mean SSIM of 0.932 ± 0.015, PSNR of 19.85 ± 1.40 dB, and CS of 0.671 ± 0.177. The CNN-INR model demonstrated a mean SSIM of 0.921 ± 0.015, PSNR of 21.96 ± 1.20 dB, and CS of 0.707 ± 0.114. Statistical analysis revealed significant differences for SSIM (p = 0.001) and PSNR (p < 0.001), while CS differences were not statistically significant (p = 0.667). Qualitative evaluations consistently favored the GAN model, which produced more anatomically detailed and visually realistic sCT images.
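The abstract reports p values but not the statistical test used. Assuming a paired design, in which each of the 54 validation cases is scored by both models, the comparison could be set up as in the sketch below; the choice of test and the placeholder data are assumptions, not the authors' method.

```python
import numpy as np
from scipy import stats

# Per-case metric values for the 54 validation cases (placeholder random data;
# real arrays would come from the per-case evaluation step above).
rng = np.random.default_rng(1)
ssim_gan = rng.normal(0.932, 0.015, 54)
ssim_inr = rng.normal(0.921, 0.015, 54)

# Non-parametric paired comparison (one reasonable choice):
stat, p_value = stats.wilcoxon(ssim_gan, ssim_inr)
print(f"Wilcoxon statistic = {stat:.3f}, p = {p_value:.4f}")

# Parametric alternative:
t_stat, p_t = stats.ttest_rel(ssim_gan, ssim_inr)
print(f"paired t = {t_stat:.3f}, p = {p_t:.4f}")
```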

Conclusions: This study demonstrated the feasibility of generating spine sCT images from biplanar radiographs using GAN and CNN-INR models. While neither model achieved clinical-grade outputs, the GAN architecture showed greater potential for generating anatomically accurate and visually realistic images. These findings highlight the promise of sCT image generation from biplanar radiographs as an innovative approach to reducing radiation exposure and improving imaging accessibility, with GANs emerging as the more promising avenue for further research and clinical integration.

Source journal: Neurosurgical Focus (Clinical Neurology / Surgery)
CiteScore: 6.30
Self-citation rate: 0.00%
Articles published: 261
Review time: 3 months