Progressive auto-segmentation for cone-beam computed tomography-based online adaptive radiotherapy

IF 3.4 Q2 ONCOLOGY
Hengrui Zhao, Xiao Liang, Boyu Meng, Michael Dohopolski, Byongsu Choi, Bin Cai, Mu-Han Lin, Ti Bai, Dan Nguyen, Steve Jiang
DOI: 10.1016/j.phro.2024.100610 · Physics and Imaging in Radiation Oncology · Published 2024-07-01 (Journal Article)
Citations: 0

Abstract


Background and purpose

Accurate and automated segmentation of targets and organs-at-risk (OARs) is crucial for the successful clinical application of online adaptive radiotherapy (ART). Current methods for cone-beam computed tomography (CBCT) auto-segmentation face challenges, resulting in segmentations often failing to reach clinical acceptability. Current approaches for CBCT auto-segmentation overlook the wealth of information available from initial planning and prior adaptive fractions that could enhance segmentation precision.

Materials and methods

We introduce a novel framework that incorporates data from a patient’s initial plan and previous adaptive fractions, harnessing this additional temporal context to significantly refine the segmentation accuracy for the current fraction’s CBCT images. We present LSTM-UNet, an innovative architecture that integrates Long Short-Term Memory (LSTM) units into the skip connections of the traditional U-Net framework to retain information from previous fractions. The models underwent initial pre-training with simulated data followed by fine-tuning on a clinical dataset.
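The core architectural idea, carrying a memory of prior fractions' skip-connection features into the current fraction, can be illustrated with a minimal sketch. The class below is a hypothetical, NumPy-only LSTM cell with 1x1 (channel-mixing) kernels applied at every voxel of a skip feature map; the actual LSTM-UNet uses learned weights inside a full U-Net, whereas the weights here are random placeholders:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SkipLSTMCell:
    """Illustrative per-voxel LSTM gate on a U-Net skip-connection feature map.

    The state (h, c) is carried across fractions, so features extracted from
    earlier fractions' images can inform the current fraction's segmentation.
    Weights are random placeholders, not trained parameters.
    """
    def __init__(self, channels, rng=None):
        rng = rng or np.random.default_rng(0)
        # One weight matrix per gate; input is [x; h] concatenated channel-wise.
        self.W = {g: rng.standard_normal((channels, 2 * channels)) * 0.1
                  for g in ("i", "f", "o", "g")}

    def step(self, x, state):
        """x: (C, H, W) skip feature for this fraction; state: (h, c), same shape."""
        h, c = state
        z = np.concatenate([x, h], axis=0)                          # (2C, H, W)
        # A 1x1 convolution is a channel-mixing matmul at every pixel.
        pre = {g: np.einsum("oc,chw->ohw", self.W[g], z) for g in self.W}
        i, f, o = sigmoid(pre["i"]), sigmoid(pre["f"]), sigmoid(pre["o"])
        g = np.tanh(pre["g"])
        c = f * c + i * g          # memory updated with the current fraction
        h = o * np.tanh(c)         # gated feature passed on to the decoder
        return h, (h, c)
```

In use, the cell would be stepped once per fraction (initial plan, then each adaptive fraction), with the returned `h` replacing the plain skip feature fed to the U-Net decoder.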

Results

Our proposed model’s segmentation predictions yield an average Dice similarity coefficient of 79% across 8 head-and-neck organs and targets, compared to 52% for a baseline model without prior knowledge and 78% for a baseline model with prior knowledge but no memory.
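The Dice similarity coefficient used to report these results is a standard overlap measure between a predicted and a reference mask; a minimal reference implementation (the small `eps` guarding against empty masks is an implementation choice, not from the paper):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```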

Conclusions

Our proposed model outperforms baseline segmentation frameworks by effectively utilizing information from prior fractions, thus reducing the effort clinicians spend revising auto-segmentation results. Moreover, it is complementary to registration-based methods, which can supply better prior knowledge. Our model holds promise for integration into the online ART workflow, offering precise segmentation capabilities on synthetic CT images.

Source journal

Physics and Imaging in Radiation Oncology (category: Physics and Astronomy, Radiation)
CiteScore: 5.30
Self-citation rate: 18.90%
Annual publications: 93
Review turnaround: 6 weeks