Enhanced IDOL segmentation framework using personalized hyperspace learning IDOL

IF 3.2 · CAS Medicine Zone 2 · Q1 Radiology, Nuclear Medicine & Medical Imaging
Medical Physics · Pub Date: 2024-08-21 · DOI: 10.1002/mp.17361
Byong Su Choi, Chris J. Beltran, Sven Olberg, Xiaoying Liang, Bo Lu, Jun Tan, Alessio Parisi, Janet Denbeigh, Sridhar Yaddanapudi, Jin Sung Kim, Keith M. Furutani, Justin C. Park, Bongyong Song
{"title":"Enhanced IDOL segmentation framework using personalized hyperspace learning IDOL","authors":"Byong Su Choi,&nbsp;Chris J. Beltran,&nbsp;Sven Olberg,&nbsp;Xiaoying Liang,&nbsp;Bo Lu,&nbsp;Jun Tan,&nbsp;Alessio Parisi,&nbsp;Janet Denbeigh,&nbsp;Sridhar Yaddanapudi,&nbsp;Jin Sung Kim,&nbsp;Keith M. Furutani,&nbsp;Justin C. Park,&nbsp;Bongyong Song","doi":"10.1002/mp.17361","DOIUrl":null,"url":null,"abstract":"<div>\n \n \n <section>\n \n <h3> Background</h3>\n \n <p>Adaptive radiotherapy (ART) workflows have been increasingly adopted to achieve dose escalation and tissue sparing under shifting anatomic conditions, but the necessity of recontouring and the associated time burden hinders a real-time or online ART workflow. In response to this challenge, approaches to auto-segmentation involving deformable image registration, atlas-based segmentation, and deep learning-based segmentation (DLS) have been developed. Despite the particular promise shown by DLS methods, implementing these approaches in a clinical setting remains a challenge, namely due to the difficulty of curating a data set of sufficient size and quality so as to achieve generalizability in a trained model.</p>\n </section>\n \n <section>\n \n <h3> Purpose</h3>\n \n <p>To address this challenge, we have developed an intentional deep overfit learning (IDOL) framework tailored to the auto-segmentation task. However, certain limitations were identified, particularly the insufficiency of the personalized dataset to effectively overfit the model. In this study, we introduce a personalized hyperspace learning (PHL)-IDOL segmentation framework capable of generating datasets that induce the model to overfit specific patient characteristics for medical image segmentation.</p>\n </section>\n \n <section>\n \n <h3> Methods</h3>\n \n <p>The PHL-IDOL model is trained in two stages. In the first, a conventional, general model is trained with a diverse set of patient data (<i>n</i> = 100 patients) consisting of CT images and clinical contours. Following this, the general model is tuned with a data set consisting of two components: (a) selection of a subset of the patient data (<i>m</i> &lt; <i>n</i>) using the similarity metrics (mean square error (MSE), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and the universal quality image index (UQI) values); (b) adjust the CT and the clinical contours using a deformed vector generated from the reference patient and the selected patients using (a). After training, the general model, the continual model, the conventional IDOL model, and the proposed PHL-IDOL model were evaluated using the volumetric dice similarity coefficient (VDSC) and the Hausdorff distance 95% (HD95%) computed for 18 structures in 20 test patients.</p>\n </section>\n \n <section>\n \n <h3> Results</h3>\n \n <p>Implementing the PHL-IDOL framework resulted in improved segmentation performance for each patient. 
The Dice scores increased from 0.81<span></span><math>\n <semantics>\n <mo>±</mo>\n <annotation>$ \\pm $</annotation>\n </semantics></math>0.05 with the general model, 0.83<span></span><math>\n <semantics>\n <mrow>\n <mo>±</mo>\n <mn>0.04</mn>\n </mrow>\n <annotation>$ \\pm 0.04$</annotation>\n </semantics></math> for the continual model, 0.83<span></span><math>\n <semantics>\n <mrow>\n <mo>±</mo>\n <mn>0.04</mn>\n </mrow>\n <annotation>$ \\pm 0.04$</annotation>\n </semantics></math> for the conventional IDOL model to an average of 0.87<span></span><math>\n <semantics>\n <mrow>\n <mo>±</mo>\n <mn>0.03</mn>\n </mrow>\n <annotation>$ \\pm 0.03$</annotation>\n </semantics></math> with the PHL-IDOL model. Similarly, the Hausdorff distance decreased from 3.06<span></span><math>\n <semantics>\n <mrow>\n <mo>±</mo>\n <mn>0.99</mn>\n </mrow>\n <annotation>$ \\pm 0.99$</annotation>\n </semantics></math> with the general model, 2.84<span></span><math>\n <semantics>\n <mrow>\n <mo>±</mo>\n <mn>0.69</mn>\n </mrow>\n <annotation>$ \\pm 0.69$</annotation>\n </semantics></math> for the continual model, 2.79<span></span><math>\n <semantics>\n <mrow>\n <mo>±</mo>\n <mn>0.79</mn>\n </mrow>\n <annotation>$ \\pm 0.79$</annotation>\n </semantics></math> for the conventional IDOL model and 2.36<span></span><math>\n <semantics>\n <mrow>\n <mo>±</mo>\n <mn>0.52</mn>\n </mrow>\n <annotation>$ \\pm 0.52$</annotation>\n </semantics></math> for the PHL-IDOL model. All the standard deviations were decreased by nearly half of the values comparing the general model and the PHL-IDOL model.</p>\n </section>\n \n <section>\n \n <h3> Conclusion</h3>\n \n <p>The PHL-IDOL framework applied to the auto-segmentation task achieves improved performance compared to the general DLS approach, demonstrating the promise of leveraging patient-specific prior information in a task central to online ART workflows.</p>\n </section>\n </div>","PeriodicalId":18384,"journal":{"name":"Medical physics","volume":"51 11","pages":"8568-8583"},"PeriodicalIF":3.2000,"publicationDate":"2024-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Medical physics","FirstCategoryId":"3","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/mp.17361","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Citations: 0

Abstract

Background

Adaptive radiotherapy (ART) workflows have been increasingly adopted to achieve dose escalation and tissue sparing under shifting anatomic conditions, but the necessity of recontouring and the associated time burden hinder a real-time or online ART workflow. In response to this challenge, auto-segmentation approaches involving deformable image registration, atlas-based segmentation, and deep learning-based segmentation (DLS) have been developed. Despite the particular promise shown by DLS methods, implementing these approaches in a clinical setting remains a challenge, chiefly due to the difficulty of curating a data set of sufficient size and quality to achieve generalizability in a trained model.

Purpose

To address this challenge, we have developed an intentional deep overfit learning (IDOL) framework tailored to the auto-segmentation task. However, certain limitations were identified, particularly that the personalized dataset was insufficient to effectively overfit the model. In this study, we introduce a personalized hyperspace learning (PHL)-IDOL segmentation framework capable of generating datasets that induce the model to overfit specific patient characteristics for medical image segmentation.

Methods

The PHL-IDOL model is trained in two stages. In the first, a conventional, general model is trained with a diverse set of patient data (n = 100 patients) consisting of CT images and clinical contours. Following this, the general model is tuned with a data set constructed in two steps: (a) selection of a subset of the patient data (m < n) using similarity metrics (mean square error (MSE), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and the universal quality image index (UQI)); (b) adjustment of the CT images and clinical contours using a deformation vector generated from the reference patient and the patients selected in (a). After training, the general model, the continual model, the conventional IDOL model, and the proposed PHL-IDOL model were evaluated using the volumetric Dice similarity coefficient (VDSC) and the 95th percentile Hausdorff distance (HD95) computed for 18 structures in 20 test patients.
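For concreteness, the following is a minimal Python sketch (not the authors' code) of step (a): ranking the training patients against a reference patient with the four listed similarity metrics and keeping the m best matches. The function name `select_similar_patients`, the assumption that all CTs are resampled to a common grid, and the way the four metrics are combined into a single ranking score are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from skimage.metrics import structural_similarity  # SSIM

def mse(a, b):
    """Mean squared error between two images/volumes."""
    return float(np.mean((a - b) ** 2))

def psnr(a, b, data_range):
    """Peak signal-to-noise ratio in dB."""
    err = mse(a, b)
    return float("inf") if err == 0 else 10.0 * np.log10(data_range ** 2 / err)

def uqi(a, b, eps=1e-12):
    """Universal quality image index (Wang & Bovik), computed globally for brevity."""
    mu_a, mu_b = a.mean(), b.mean()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return float(4 * cov * mu_a * mu_b /
                 ((a.var() + b.var()) * (mu_a ** 2 + mu_b ** 2) + eps))

def select_similar_patients(reference_ct, candidate_cts, m, data_range=2000.0):
    """Return indices of the m candidates most similar to the reference CT.

    All volumes are assumed to be float arrays on the same voxel grid.
    Combining the four metrics into one score is an illustrative choice.
    """
    scores = []
    for ct in candidate_cts:
        score = (structural_similarity(reference_ct, ct, data_range=data_range)
                 + uqi(reference_ct, ct)
                 + psnr(reference_ct, ct, data_range) / 100.0
                 - mse(reference_ct, ct) / data_range ** 2)
        scores.append(score)
    return list(np.argsort(scores)[::-1][:m])
```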

Results

Implementing the PHL-IDOL framework resulted in improved segmentation performance for each patient. The Dice scores increased from 0.81 ± 0.05 with the general model, 0.83 ± 0.04 with the continual model, and 0.83 ± 0.04 with the conventional IDOL model to an average of 0.87 ± 0.03 with the PHL-IDOL model. Similarly, the Hausdorff distance decreased from 3.06 ± 0.99 with the general model, 2.84 ± 0.69 with the continual model, and 2.79 ± 0.79 with the conventional IDOL model to 2.36 ± 0.52 with the PHL-IDOL model. The standard deviations were reduced by nearly half between the general model and the PHL-IDOL model.
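As a reference for how these two figures of merit are typically computed, here is a minimal sketch (not the authors' evaluation code) of the volumetric Dice similarity coefficient and the 95th percentile Hausdorff distance for a pair of binary masks. The surface extraction via binary erosion and the handling of voxel spacing are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def dice(pred, ref):
    """Volumetric Dice similarity coefficient between two binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    return 1.0 if denom == 0 else float(2.0 * np.logical_and(pred, ref).sum() / denom)

def hd95(pred, ref, spacing=(1.0, 1.0, 1.0)):
    """95th percentile (symmetric) Hausdorff distance between two binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    if not pred.any() or not ref.any():
        return float("nan")  # undefined when a mask is empty
    # Surface voxels: mask minus its binary erosion.
    surf_p = pred ^ ndimage.binary_erosion(pred)
    surf_r = ref ^ ndimage.binary_erosion(ref)
    # Distance of every voxel to the nearest surface voxel of the other mask,
    # taking voxel spacing (e.g., in mm) into account.
    dist_to_r = ndimage.distance_transform_edt(~surf_r, sampling=spacing)
    dist_to_p = ndimage.distance_transform_edt(~surf_p, sampling=spacing)
    d_pr = dist_to_r[surf_p]  # predicted surface -> reference surface
    d_rp = dist_to_p[surf_r]  # reference surface -> predicted surface
    return float(np.percentile(np.concatenate([d_pr, d_rp]), 95))
```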

Conclusion

The PHL-IDOL framework applied to the auto-segmentation task achieves improved performance compared to the general DLS approach, demonstrating the promise of leveraging patient-specific prior information in a task central to online ART workflows.

Source journal

Medical Physics (Medicine – Nuclear Medicine)

CiteScore: 6.80 · Self-citation rate: 15.80% · Articles per year: 660 · Review time: 1.7 months

Journal description: Medical Physics publishes original, high impact physics, imaging science, and engineering research that advances patient diagnosis and therapy through contributions in 1) basic science developments with high potential for clinical translation, 2) clinical applications of cutting edge engineering and physics innovations, and 3) broadly applicable and innovative clinical physics developments. Medical Physics is a journal of global scope and reach. By publishing in Medical Physics your research will reach an international, multidisciplinary audience including practicing medical physicists as well as physics- and engineering-based translational scientists. We work closely with authors of promising articles to improve their quality.