Generating synthetic computed tomography for radiotherapy: SynthRAD2023 challenge report

Evi M.C. Huijben, Maarten L. Terpstra, Arthur Jr. Galapon, Suraj Pai, Adrian Thummerer, Peter Koopmans, Manya Afonso, Maureen van Eijnatten, Oliver Gurney-Champion, Zeli Chen, Yiwen Zhang, Kaiyi Zheng, Chuanpu Li, Haowen Pang, Chuyang Ye, Runqi Wang, Tao Song, Fuxin Fan, Jingna Qiu, Yixing Huang, Matteo Maspero

Medical Image Analysis, published online 2024-07-17. DOI: 10.1016/j.media.2024.103276
Open access: https://www.sciencedirect.com/science/article/pii/S1361841524002019
Abstract
Radiation therapy plays a crucial role in cancer treatment, necessitating precise delivery of radiation to tumors while sparing healthy tissues over multiple days. Computed tomography (CT) is integral for treatment planning, offering electron density data crucial for accurate dose calculations. However, accurately representing patient anatomy is challenging, especially in adaptive radiotherapy, where CT is not acquired daily. Magnetic resonance imaging (MRI) provides superior soft-tissue contrast. Still, it lacks electron density information, while cone beam CT (CBCT) lacks direct electron density calibration and is mainly used for patient positioning.
Adopting MRI-only or CBCT-based adaptive radiotherapy eliminates the need for CT planning but presents challenges. Synthetic CT (sCT) generation techniques aim to address these challenges by using image synthesis to bridge the gap between MRI, CBCT, and CT. The SynthRAD2023 challenge was organized to compare synthetic CT generation methods using multi-center ground truth data from 1080 patients, divided into two tasks: (1) MRI-to-CT and (2) CBCT-to-CT. The evaluation included image similarity and dose-based metrics from proton and photon plans.
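The image-similarity side of such an evaluation can be sketched in a few lines. The snippet below is a hypothetical example, not the challenge's evaluation pipeline: it assumes HU-valued NumPy volumes of equal shape, a boolean body mask, and picks MAE, PSNR, and SSIM as representative metrics.

```python
# Hypothetical sketch (not the SynthRAD2023 evaluation code): computes a few common
# image-similarity metrics between a synthetic CT and the ground-truth CT.
# Assumes HU-valued NumPy volumes of equal shape and a boolean body mask.
import numpy as np
from skimage.metrics import structural_similarity

def image_similarity(sct: np.ndarray, ct: np.ndarray, mask: np.ndarray) -> dict:
    """Return MAE (HU), PSNR (dB), and SSIM comparing sCT against the reference CT."""
    diff = sct[mask] - ct[mask]                      # restrict MAE/PSNR to the body mask
    mae = float(np.mean(np.abs(diff)))
    data_range = float(ct.max() - ct.min())          # dynamic range of the reference CT
    psnr = float(10.0 * np.log10(data_range**2 / np.mean(diff**2)))
    ssim = float(structural_similarity(sct, ct, data_range=data_range))
    return {"MAE_HU": mae, "PSNR_dB": psnr, "SSIM": ssim}
```

Dose-based metrics (e.g., gamma pass rates from recalculated photon and proton plans) require a treatment planning or dose engine and are not reproduced here.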
The challenge attracted significant participation, with 617 registrations and 22/17 valid submissions for tasks 1/2. Top-performing teams achieved high structural similarity indices (≥ 0.87/0.90) and gamma pass rates for photon (≥ 98.1%/99.0%) and proton (≥ 97.3%/97.0%) plans. However, no significant correlation was found between image similarity metrics and dose accuracy, emphasizing the need for dose evaluation when assessing the clinical applicability of sCT.
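The reported absence of a correlation between image similarity and dose accuracy is the kind of finding that can be probed per patient with a rank correlation. The snippet below is purely illustrative and runs on randomly generated placeholder values, not challenge data; it assumes one SSIM value and one gamma pass rate per patient.

```python
# Illustrative only: random placeholder values stand in for per-patient results;
# they are NOT SynthRAD2023 data.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(seed=42)
ssim = rng.uniform(0.85, 0.95, size=60)          # hypothetical per-patient SSIM of sCT vs. CT
gamma_pass = rng.uniform(96.0, 100.0, size=60)   # hypothetical photon gamma pass rates (%)

rho, p = spearmanr(ssim, gamma_pass)             # rank correlation between the two metrics
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")  # a weak, non-significant rho would mirror the report
```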
SynthRAD2023 facilitated the investigation and benchmarking of sCT generation techniques, providing insights for developing MRI-only and CBCT-based adaptive radiotherapy. It showcased the growing capacity of deep learning to produce high-quality sCT, reducing reliance on conventional CT for treatment planning.
About the journal
Medical Image Analysis serves as a platform for sharing new research findings in the realm of medical and biological image analysis, with a focus on applications of computer vision, virtual reality, and robotics to biomedical imaging challenges. The journal prioritizes the publication of high-quality, original papers contributing to the fundamental science of processing, analyzing, and utilizing medical and biological images. It welcomes approaches utilizing biomedical image datasets across all spatial scales, from molecular/cellular imaging to tissue/organ imaging.