A unified generation-registration framework for improved MR-based CT synthesis in proton therapy

IF 3.2 · JCR Q1 (Radiology, Nuclear Medicine & Medical Imaging) · CAS Tier 2 (Medicine)
Medical Physics 51(11): 8302-8316 · Pub Date: 2024-08-13 · DOI: 10.1002/mp.17338
Xia Li, Renato Bellotti, Barbara Bachtiary, Jan Hrbacek, Damien C. Weber, Antony J. Lomax, Joachim M. Buhmann, Ye Zhang

Abstract

Background

Magnetic resonance (MR) imaging is gaining attention as a highly effective guidance modality for proton therapy treatment planning. At the core of this approach is the generation of computed tomography (CT) images from MR scans. However, a critical issue in this process is accurately aligning the MR and CT images, a task that becomes particularly challenging in frequently moving body areas such as the head-and-neck. Misalignment can result in blurred synthetic CT (sCT) images, adversely affecting the precision and effectiveness of treatment planning.

Purpose

This study introduces a novel network that cohesively unifies image generation and registration processes to enhance the quality and anatomical fidelity of sCTs derived from better-aligned MR images.

Methods

The approach synergizes a generation network (G) with a deformable registration network (R), optimizing them jointly in MR-to-CT synthesis. This goal is achieved by alternately minimizing the discrepancies between the generated/registered CT images and their corresponding reference CT counterparts. The generation network employs a UNet architecture, while the registration network leverages an implicit neural representation (INR) of the displacement vector fields (DVFs). We validated this method on a dataset comprising 60 head-and-neck patients, reserving 12 cases for holdout testing.
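The alternating minimisation described above can be illustrated in miniature. The following is a toy sketch, not the paper's implementation: a 1-D affine intensity mapping stands in for the UNet generator G, a single circular shift stands in for the INR-parameterised DVF of the registration network R, and the two are fitted alternately against a deliberately misaligned reference signal. All names and the data model are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n=64, true_shift=3):
    """Simulate an MR signal and a reference CT misaligned by `true_shift`."""
    mr = rng.normal(size=n)
    ct_true = 2.0 * mr + 5.0              # hidden MR-to-CT intensity mapping
    ct_ref = np.roll(ct_true, true_shift)  # reference CT acquired out of alignment
    return mr, ct_ref

def fit_generator(mr, ct_aligned):
    """Least-squares fit of the affine mapping G(x) = a*x + b (UNet stand-in)."""
    A = np.stack([mr, np.ones_like(mr)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, ct_aligned, rcond=None)
    return a, b

def fit_registration(sct, ct_ref, max_shift=8):
    """Brute-force search over circular shifts (deformable-registration stand-in)."""
    errs = [np.mean((np.roll(sct, s) - ct_ref) ** 2)
            for s in range(-max_shift, max_shift + 1)]
    return int(np.argmin(errs)) - max_shift

mr, ct_ref = make_data()
a, b = 1.0, 0.0                            # start from an identity-like generator
for _ in range(5):                         # alternate R-step and G-step
    sct = a * mr + b                       # current synthetic CT from G
    shift = fit_registration(sct, ct_ref)  # R-step: align sCT to the reference
    a, b = fit_generator(mr, np.roll(ct_ref, -shift))  # G-step on aligned reference

print(f"recovered mapping: a={a:.2f}, b={b:.2f}, shift={shift}")
```

With the seed above the loop recovers the hidden intensity mapping (a ≈ 2, b ≈ 5) and the misalignment (shift = 3). In the paper both components are deep networks trained by gradient descent rather than closed-form fits, but the alternating structure — register against the current sCT, then regenerate against the newly aligned reference — is the same.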

Results

Compared to the baseline Pix2Pix method with an MAE of 124.95 ± 30.74 HU, the proposed technique achieved 80.98 ± 7.55 HU. The unified translation-registration network produced sharper and more anatomically congruent outputs, showing superior efficacy in converting MR images to sCTs. Additionally, from a dosimetric perspective, plans recalculated on the resulting sCTs showed a markedly reduced discrepancy relative to the reference proton plans.

Conclusions

This study demonstrates that a holistic MR-based CT synthesis approach, integrating both image-to-image translation and deformable registration, significantly improves the precision and quality of sCT generation, particularly for challenging body areas with varied anatomic changes between corresponding MR and CT scans.


Source journal: Medical Physics (Medicine – Nuclear Medicine)
CiteScore: 6.80 · Self-citation rate: 15.80% · Articles per year: 660 · Time to review: 1.7 months
Journal description: Medical Physics publishes original, high-impact physics, imaging science, and engineering research that advances patient diagnosis and therapy through contributions in: 1) basic science developments with high potential for clinical translation; 2) clinical applications of cutting-edge engineering and physics innovations; 3) broadly applicable and innovative clinical physics developments. Medical Physics is a journal of global scope and reach. By publishing in Medical Physics your research will reach an international, multidisciplinary audience including practicing medical physicists as well as physics- and engineering-based translational scientists. We work closely with authors of promising articles to improve their quality.