Byong Su Choi, Chris J. Beltran, Sven Olberg, Xiaoying Liang, Bo Lu, Jun Tan, Alessio Parisi, Janet Denbeigh, Sridhar Yaddanapudi, Jin Sung Kim, Keith M. Furutani, Justin C. Park, Bongyong Song
{"title":"利用个性化超空间学习 IDOL 增强 IDOL 分割框架。","authors":"Byong Su Choi, Chris J. Beltran, Sven Olberg, Xiaoying Liang, Bo Lu, Jun Tan, Alessio Parisi, Janet Denbeigh, Sridhar Yaddanapudi, Jin Sung Kim, Keith M. Furutani, Justin C. Park, Bongyong Song","doi":"10.1002/mp.17361","DOIUrl":null,"url":null,"abstract":"<div>\n \n \n <section>\n \n <h3> Background</h3>\n \n <p>Adaptive radiotherapy (ART) workflows have been increasingly adopted to achieve dose escalation and tissue sparing under shifting anatomic conditions, but the necessity of recontouring and the associated time burden hinders a real-time or online ART workflow. In response to this challenge, approaches to auto-segmentation involving deformable image registration, atlas-based segmentation, and deep learning-based segmentation (DLS) have been developed. Despite the particular promise shown by DLS methods, implementing these approaches in a clinical setting remains a challenge, namely due to the difficulty of curating a data set of sufficient size and quality so as to achieve generalizability in a trained model.</p>\n </section>\n \n <section>\n \n <h3> Purpose</h3>\n \n <p>To address this challenge, we have developed an intentional deep overfit learning (IDOL) framework tailored to the auto-segmentation task. However, certain limitations were identified, particularly the insufficiency of the personalized dataset to effectively overfit the model. In this study, we introduce a personalized hyperspace learning (PHL)-IDOL segmentation framework capable of generating datasets that induce the model to overfit specific patient characteristics for medical image segmentation.</p>\n </section>\n \n <section>\n \n <h3> Methods</h3>\n \n <p>The PHL-IDOL model is trained in two stages. In the first, a conventional, general model is trained with a diverse set of patient data (<i>n</i> = 100 patients) consisting of CT images and clinical contours. Following this, the general model is tuned with a data set consisting of two components: (a) selection of a subset of the patient data (<i>m</i> < <i>n</i>) using the similarity metrics (mean square error (MSE), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and the universal quality image index (UQI) values); (b) adjust the CT and the clinical contours using a deformed vector generated from the reference patient and the selected patients using (a). After training, the general model, the continual model, the conventional IDOL model, and the proposed PHL-IDOL model were evaluated using the volumetric dice similarity coefficient (VDSC) and the Hausdorff distance 95% (HD95%) computed for 18 structures in 20 test patients.</p>\n </section>\n \n <section>\n \n <h3> Results</h3>\n \n <p>Implementing the PHL-IDOL framework resulted in improved segmentation performance for each patient. The Dice scores increased from 0.81<span></span><math>\n <semantics>\n <mo>±</mo>\n <annotation>$ \\pm $</annotation>\n </semantics></math>0.05 with the general model, 0.83<span></span><math>\n <semantics>\n <mrow>\n <mo>±</mo>\n <mn>0.04</mn>\n </mrow>\n <annotation>$ \\pm 0.04$</annotation>\n </semantics></math> for the continual model, 0.83<span></span><math>\n <semantics>\n <mrow>\n <mo>±</mo>\n <mn>0.04</mn>\n </mrow>\n <annotation>$ \\pm 0.04$</annotation>\n </semantics></math> for the conventional IDOL model to an average of 0.87<span></span><math>\n <semantics>\n <mrow>\n <mo>±</mo>\n <mn>0.03</mn>\n </mrow>\n <annotation>$ \\pm 0.03$</annotation>\n </semantics></math> with the PHL-IDOL model. 
Similarly, the Hausdorff distance decreased from 3.06<span></span><math>\n <semantics>\n <mrow>\n <mo>±</mo>\n <mn>0.99</mn>\n </mrow>\n <annotation>$ \\pm 0.99$</annotation>\n </semantics></math> with the general model, 2.84<span></span><math>\n <semantics>\n <mrow>\n <mo>±</mo>\n <mn>0.69</mn>\n </mrow>\n <annotation>$ \\pm 0.69$</annotation>\n </semantics></math> for the continual model, 2.79<span></span><math>\n <semantics>\n <mrow>\n <mo>±</mo>\n <mn>0.79</mn>\n </mrow>\n <annotation>$ \\pm 0.79$</annotation>\n </semantics></math> for the conventional IDOL model and 2.36<span></span><math>\n <semantics>\n <mrow>\n <mo>±</mo>\n <mn>0.52</mn>\n </mrow>\n <annotation>$ \\pm 0.52$</annotation>\n </semantics></math> for the PHL-IDOL model. All the standard deviations were decreased by nearly half of the values comparing the general model and the PHL-IDOL model.</p>\n </section>\n \n <section>\n \n <h3> Conclusion</h3>\n \n <p>The PHL-IDOL framework applied to the auto-segmentation task achieves improved performance compared to the general DLS approach, demonstrating the promise of leveraging patient-specific prior information in a task central to online ART workflows.</p>\n </section>\n </div>","PeriodicalId":18384,"journal":{"name":"Medical physics","volume":"51 11","pages":"8568-8583"},"PeriodicalIF":3.2000,"publicationDate":"2024-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Enhanced IDOL segmentation framework using personalized hyperspace learning IDOL\",\"authors\":\"Byong Su Choi, Chris J. Beltran, Sven Olberg, Xiaoying Liang, Bo Lu, Jun Tan, Alessio Parisi, Janet Denbeigh, Sridhar Yaddanapudi, Jin Sung Kim, Keith M. Furutani, Justin C. Park, Bongyong Song\",\"doi\":\"10.1002/mp.17361\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div>\\n \\n \\n <section>\\n \\n <h3> Background</h3>\\n \\n <p>Adaptive radiotherapy (ART) workflows have been increasingly adopted to achieve dose escalation and tissue sparing under shifting anatomic conditions, but the necessity of recontouring and the associated time burden hinders a real-time or online ART workflow. In response to this challenge, approaches to auto-segmentation involving deformable image registration, atlas-based segmentation, and deep learning-based segmentation (DLS) have been developed. Despite the particular promise shown by DLS methods, implementing these approaches in a clinical setting remains a challenge, namely due to the difficulty of curating a data set of sufficient size and quality so as to achieve generalizability in a trained model.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Purpose</h3>\\n \\n <p>To address this challenge, we have developed an intentional deep overfit learning (IDOL) framework tailored to the auto-segmentation task. However, certain limitations were identified, particularly the insufficiency of the personalized dataset to effectively overfit the model. In this study, we introduce a personalized hyperspace learning (PHL)-IDOL segmentation framework capable of generating datasets that induce the model to overfit specific patient characteristics for medical image segmentation.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Methods</h3>\\n \\n <p>The PHL-IDOL model is trained in two stages. In the first, a conventional, general model is trained with a diverse set of patient data (<i>n</i> = 100 patients) consisting of CT images and clinical contours. 
Following this, the general model is tuned with a data set consisting of two components: (a) selection of a subset of the patient data (<i>m</i> < <i>n</i>) using the similarity metrics (mean square error (MSE), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and the universal quality image index (UQI) values); (b) adjust the CT and the clinical contours using a deformed vector generated from the reference patient and the selected patients using (a). After training, the general model, the continual model, the conventional IDOL model, and the proposed PHL-IDOL model were evaluated using the volumetric dice similarity coefficient (VDSC) and the Hausdorff distance 95% (HD95%) computed for 18 structures in 20 test patients.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Results</h3>\\n \\n <p>Implementing the PHL-IDOL framework resulted in improved segmentation performance for each patient. The Dice scores increased from 0.81<span></span><math>\\n <semantics>\\n <mo>±</mo>\\n <annotation>$ \\\\pm $</annotation>\\n </semantics></math>0.05 with the general model, 0.83<span></span><math>\\n <semantics>\\n <mrow>\\n <mo>±</mo>\\n <mn>0.04</mn>\\n </mrow>\\n <annotation>$ \\\\pm 0.04$</annotation>\\n </semantics></math> for the continual model, 0.83<span></span><math>\\n <semantics>\\n <mrow>\\n <mo>±</mo>\\n <mn>0.04</mn>\\n </mrow>\\n <annotation>$ \\\\pm 0.04$</annotation>\\n </semantics></math> for the conventional IDOL model to an average of 0.87<span></span><math>\\n <semantics>\\n <mrow>\\n <mo>±</mo>\\n <mn>0.03</mn>\\n </mrow>\\n <annotation>$ \\\\pm 0.03$</annotation>\\n </semantics></math> with the PHL-IDOL model. Similarly, the Hausdorff distance decreased from 3.06<span></span><math>\\n <semantics>\\n <mrow>\\n <mo>±</mo>\\n <mn>0.99</mn>\\n </mrow>\\n <annotation>$ \\\\pm 0.99$</annotation>\\n </semantics></math> with the general model, 2.84<span></span><math>\\n <semantics>\\n <mrow>\\n <mo>±</mo>\\n <mn>0.69</mn>\\n </mrow>\\n <annotation>$ \\\\pm 0.69$</annotation>\\n </semantics></math> for the continual model, 2.79<span></span><math>\\n <semantics>\\n <mrow>\\n <mo>±</mo>\\n <mn>0.79</mn>\\n </mrow>\\n <annotation>$ \\\\pm 0.79$</annotation>\\n </semantics></math> for the conventional IDOL model and 2.36<span></span><math>\\n <semantics>\\n <mrow>\\n <mo>±</mo>\\n <mn>0.52</mn>\\n </mrow>\\n <annotation>$ \\\\pm 0.52$</annotation>\\n </semantics></math> for the PHL-IDOL model. 
All the standard deviations were decreased by nearly half of the values comparing the general model and the PHL-IDOL model.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Conclusion</h3>\\n \\n <p>The PHL-IDOL framework applied to the auto-segmentation task achieves improved performance compared to the general DLS approach, demonstrating the promise of leveraging patient-specific prior information in a task central to online ART workflows.</p>\\n </section>\\n </div>\",\"PeriodicalId\":18384,\"journal\":{\"name\":\"Medical physics\",\"volume\":\"51 11\",\"pages\":\"8568-8583\"},\"PeriodicalIF\":3.2000,\"publicationDate\":\"2024-08-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Medical physics\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/mp.17361\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Medical physics","FirstCategoryId":"3","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/mp.17361","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Enhanced IDOL segmentation framework using personalized hyperspace learning IDOL
Background
Adaptive radiotherapy (ART) workflows have been increasingly adopted to achieve dose escalation and tissue sparing under shifting anatomic conditions, but the need for recontouring and the associated time burden hinder a real-time or online ART workflow. In response to this challenge, approaches to auto-segmentation involving deformable image registration, atlas-based segmentation, and deep learning-based segmentation (DLS) have been developed. Despite the particular promise shown by DLS methods, implementing these approaches in a clinical setting remains a challenge, primarily due to the difficulty of curating a data set of sufficient size and quality to achieve generalizability in a trained model.
Purpose
To address this challenge, we have developed an intentional deep overfit learning (IDOL) framework tailored to the auto-segmentation task. However, certain limitations were identified, particularly that the personalized dataset was insufficient to effectively overfit the model. In this study, we introduce a personalized hyperspace learning (PHL)-IDOL segmentation framework capable of generating datasets that induce the model to overfit specific patient characteristics for medical image segmentation.
Methods
The PHL-IDOL model is trained in two stages. In the first, a conventional, general model is trained with a diverse set of patient data (n = 100 patients) consisting of CT images and clinical contours. Following this, the general model is tuned with a data set built in two steps: (a) selection of a subset of the patient data (m < n) using similarity metrics (mean square error (MSE), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and universal quality image index (UQI) values); (b) adjustment of the CT images and clinical contours using a deformation vector generated from the reference patient and the patients selected in (a). After training, the general model, the continual model, the conventional IDOL model, and the proposed PHL-IDOL model were evaluated using the volumetric Dice similarity coefficient (VDSC) and the 95th-percentile Hausdorff distance (HD95%) computed for 18 structures in 20 test patients.
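As an illustration of the similarity-based subset selection in step (a), the following Python sketch scores candidate library patients against a reference CT using MSE, PSNR, SSIM, and a global universal quality index. This is a minimal, hedged example rather than the authors' implementation: the scikit-image metric calls, the single-window UQI, the SSIM-only ranking, the assumed intensity data range, and all function names are illustrative choices.

```python
# Minimal sketch of step (a): ranking library patients by image similarity to the
# reference patient. Volumes are assumed to be resampled to a common grid and
# normalized to a non-negative intensity range; the data_range value, the
# SSIM-based ranking, and all function names are illustrative assumptions.
import numpy as np
from skimage.metrics import (
    mean_squared_error,
    peak_signal_noise_ratio,
    structural_similarity,
)


def universal_quality_index(x: np.ndarray, y: np.ndarray) -> float:
    """Global (single-window) universal quality index of Wang & Bovik."""
    x, y = x.astype(np.float64), y.astype(np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4.0 * cov * mx * my / ((vx + vy) * (mx**2 + my**2) + 1e-12)


def similarity_scores(reference_ct: np.ndarray, candidate_ct: np.ndarray,
                      data_range: float = 2000.0) -> dict:
    """Compute the four similarity metrics between two co-registered CT volumes."""
    return {
        "mse": mean_squared_error(reference_ct, candidate_ct),
        "psnr": peak_signal_noise_ratio(reference_ct, candidate_ct, data_range=data_range),
        "ssim": structural_similarity(reference_ct, candidate_ct, data_range=data_range),
        "uqi": universal_quality_index(reference_ct, candidate_ct),
    }


def select_subset(reference_ct: np.ndarray, library_cts: list, m: int) -> list:
    """Return indices of the m library patients most similar to the reference.
    Ranked here by SSIM only for simplicity; the paper uses MSE, PSNR, SSIM, and UQI."""
    ranked = sorted(range(len(library_cts)),
                    key=lambda i: similarity_scores(reference_ct, library_cts[i])["ssim"],
                    reverse=True)
    return ranked[:m]
```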
Results
Implementing the PHL-IDOL framework resulted in improved segmentation performance for each patient. The Dice scores increased from 0.81 ± 0.05 with the general model, 0.83 ± 0.04 for the continual model, and 0.83 ± 0.04 for the conventional IDOL model to an average of 0.87 ± 0.03 with the PHL-IDOL model. Similarly, the Hausdorff distance decreased from 3.06 ± 0.99 with the general model, 2.84 ± 0.69 for the continual model, and 2.79 ± 0.79 for the conventional IDOL model to 2.36 ± 0.52 with the PHL-IDOL model. Comparing the general model with the PHL-IDOL model, the standard deviations decreased by nearly half.
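For reference, the metrics reported above can be computed per structure along the following lines. This is a hedged sketch of the standard definitions of the volumetric Dice coefficient and the symmetric 95th-percentile Hausdorff distance; the surface extraction and nearest-neighbor distance computation shown here are common conventions, not necessarily the evaluation pipeline used in the study.

```python
# Minimal sketch of the reported evaluation metrics (VDSC and HD95%). The
# surface-voxel extraction, KD-tree nearest-neighbor distances, and percentile
# definition are common conventions assumed here, not the authors' exact code.
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial import cKDTree


def volumetric_dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Volumetric Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0


def _surface_points(mask: np.ndarray, spacing) -> np.ndarray:
    """Coordinates (in mm) of the surface voxels of a non-empty binary mask."""
    mask = mask.astype(bool)
    surface = mask & ~binary_erosion(mask)
    return np.argwhere(surface) * np.asarray(spacing, dtype=float)


def hd95(pred: np.ndarray, truth: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """Symmetric 95th-percentile Hausdorff distance between two binary masks."""
    p = _surface_points(pred, spacing)
    t = _surface_points(truth, spacing)
    d_pt = cKDTree(t).query(p)[0]  # pred surface -> nearest truth surface
    d_tp = cKDTree(p).query(t)[0]  # truth surface -> nearest pred surface
    return float(np.percentile(np.concatenate([d_pt, d_tp]), 95))
```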
Conclusion
The PHL-IDOL framework applied to the auto-segmentation task achieves improved performance compared to the general DLS approach, demonstrating the promise of leveraging patient-specific prior information in a task central to online ART workflows.
Journal Introduction
Medical Physics publishes original, high-impact physics, imaging science, and engineering research that advances patient diagnosis and therapy through contributions in 1) Basic science developments with high potential for clinical translation; 2) Clinical applications of cutting-edge engineering and physics innovations; and 3) Broadly applicable and innovative clinical physics developments.
Medical Physics is a journal of global scope and reach. By publishing in Medical Physics, your research will reach an international, multidisciplinary audience including practicing medical physicists as well as physics- and engineering-based translational scientists. We work closely with authors of promising articles to improve their quality.