{"title":"基于策略梯度的深度强化学习对头颈癌质子PBS治疗方案的自动化优化","authors":"Qingqing Wang, Chang Chang","doi":"10.1002/mp.17654","DOIUrl":null,"url":null,"abstract":"<div>\n \n \n <section>\n \n <h3> Background</h3>\n \n <p>Proton pencil beam scanning (PBS) treatment planning for head and neck (H&N) cancers is a time-consuming and experience-demanding task where a large number of potentially conflicting planning objectives are involved. Deep reinforcement learning (DRL) has recently been introduced to the planning processes of intensity-modulated radiation therapy (IMRT) and brachytherapy for prostate, lung, and cervical cancers. However, existing DRL planning models are built upon the Q-learning framework and rely on weighted linear combinations of clinical metrics for reward calculation. These approaches suffer from poor scalability and flexibility, that is, they are only capable of adjusting a limited number of planning objectives in discrete action spaces and therefore fail to generalize to more complex planning problems.</p>\n </section>\n \n <section>\n \n <h3> Purpose</h3>\n \n <p>Here we propose an automatic treatment planning model using the proximal policy optimization (PPO) algorithm in the policy gradient framework of DRL and a dose distribution-based reward function for proton PBS treatment planning of H&N cancers.</p>\n </section>\n \n <section>\n \n <h3> Methods</h3>\n \n <p>The planning process is formulated as an optimization problem. A set of empirical rules is used to create auxiliary planning structures from target volumes and organs-at-risk (OARs), along with their associated planning objectives. Special attention is given to overlapping structures with potentially conflicting objectives. These planning objectives are fed into an in-house optimization engine to generate the spot monitor unit (MU) values. A decision-making policy network trained using PPO is developed to iteratively adjust the involved planning objective parameters. The policy network predicts actions in a continuous action space and guides the treatment planning system to refine the PBS treatment plans using a novel dose distribution-based reward function. A total of 34 H&N patients (30 for training and 4 for test) and 26 liver patients (20 for training, 6 for test) are included in this study to train and verify the effectiveness and generalizability of the proposed method.</p>\n </section>\n \n <section>\n \n <h3> Results</h3>\n \n <p>Proton H&N treatment plans generated by the model show improved OAR sparing with equal or superior target coverage when compared with human-generated plans. Moreover, additional experiments on liver cancer demonstrate that the proposed method can be successfully generalized to other treatment sites.</p>\n </section>\n \n <section>\n \n <h3> Conclusions</h3>\n \n <p>The automatic treatment planning model can generate complex H&N plans with quality comparable or superior to those produced by experienced human planners. Compared with existing works, our method is capable of handling more planning objectives in continuous action spaces. 
To the best of our knowledge, this is the first DRL-based automatic treatment planning model capable of achieving human-level performance for H&N cancers.</p>\n </section>\n </div>","PeriodicalId":18384,"journal":{"name":"Medical physics","volume":"52 4","pages":"1997-2014"},"PeriodicalIF":3.2000,"publicationDate":"2025-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Automating the optimization of proton PBS treatment planning for head and neck cancers using policy gradient-based deep reinforcement learning\",\"authors\":\"Qingqing Wang, Chang Chang\",\"doi\":\"10.1002/mp.17654\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div>\\n \\n \\n <section>\\n \\n <h3> Background</h3>\\n \\n <p>Proton pencil beam scanning (PBS) treatment planning for head and neck (H&N) cancers is a time-consuming and experience-demanding task where a large number of potentially conflicting planning objectives are involved. Deep reinforcement learning (DRL) has recently been introduced to the planning processes of intensity-modulated radiation therapy (IMRT) and brachytherapy for prostate, lung, and cervical cancers. However, existing DRL planning models are built upon the Q-learning framework and rely on weighted linear combinations of clinical metrics for reward calculation. These approaches suffer from poor scalability and flexibility, that is, they are only capable of adjusting a limited number of planning objectives in discrete action spaces and therefore fail to generalize to more complex planning problems.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Purpose</h3>\\n \\n <p>Here we propose an automatic treatment planning model using the proximal policy optimization (PPO) algorithm in the policy gradient framework of DRL and a dose distribution-based reward function for proton PBS treatment planning of H&N cancers.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Methods</h3>\\n \\n <p>The planning process is formulated as an optimization problem. A set of empirical rules is used to create auxiliary planning structures from target volumes and organs-at-risk (OARs), along with their associated planning objectives. Special attention is given to overlapping structures with potentially conflicting objectives. These planning objectives are fed into an in-house optimization engine to generate the spot monitor unit (MU) values. A decision-making policy network trained using PPO is developed to iteratively adjust the involved planning objective parameters. The policy network predicts actions in a continuous action space and guides the treatment planning system to refine the PBS treatment plans using a novel dose distribution-based reward function. A total of 34 H&N patients (30 for training and 4 for test) and 26 liver patients (20 for training, 6 for test) are included in this study to train and verify the effectiveness and generalizability of the proposed method.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Results</h3>\\n \\n <p>Proton H&N treatment plans generated by the model show improved OAR sparing with equal or superior target coverage when compared with human-generated plans. 
Moreover, additional experiments on liver cancer demonstrate that the proposed method can be successfully generalized to other treatment sites.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Conclusions</h3>\\n \\n <p>The automatic treatment planning model can generate complex H&N plans with quality comparable or superior to those produced by experienced human planners. Compared with existing works, our method is capable of handling more planning objectives in continuous action spaces. To the best of our knowledge, this is the first DRL-based automatic treatment planning model capable of achieving human-level performance for H&N cancers.</p>\\n </section>\\n </div>\",\"PeriodicalId\":18384,\"journal\":{\"name\":\"Medical physics\",\"volume\":\"52 4\",\"pages\":\"1997-2014\"},\"PeriodicalIF\":3.2000,\"publicationDate\":\"2025-01-31\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Medical physics\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/mp.17654\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Medical physics","FirstCategoryId":"3","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/mp.17654","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Automating the optimization of proton PBS treatment planning for head and neck cancers using policy gradient-based deep reinforcement learning
Background
Proton pencil beam scanning (PBS) treatment planning for head and neck (H&N) cancers is a time-consuming, experience-demanding task that involves a large number of potentially conflicting planning objectives. Deep reinforcement learning (DRL) has recently been introduced to the planning processes of intensity-modulated radiation therapy (IMRT) and brachytherapy for prostate, lung, and cervical cancers. However, existing DRL planning models are built on the Q-learning framework and rely on weighted linear combinations of clinical metrics for reward calculation. These approaches suffer from poor scalability and flexibility: they can adjust only a limited number of planning objectives in discrete action spaces and therefore fail to generalize to more complex planning problems.
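For context, the reward in these prior Q-learning-based models is a weighted linear combination of clinical metrics. Schematically (the specific metrics and weights vary across prior works and are not reproduced here), with $m_i$ a clinical metric evaluated on the plan at step $t$ and $w_i$ its hand-chosen weight:

$$ r_t = \sum_{i=1}^{K} w_i \, m_i(\mathrm{plan}_t) $$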
Purpose
Here we propose an automatic treatment planning model for proton PBS treatment planning of H&N cancers, built on the proximal policy optimization (PPO) algorithm within the policy gradient framework of DRL together with a dose distribution-based reward function.
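PPO optimizes a clipped surrogate objective and, in continuous action spaces, is typically paired with a Gaussian policy. The standard form of the objective, stated generically rather than as this paper's specific implementation, is

$$ L^{\mathrm{CLIP}}(\theta) = \mathbb{E}_t\!\left[\min\!\left(r_t(\theta)\,\hat{A}_t,\ \operatorname{clip}\!\left(r_t(\theta),\,1-\epsilon,\,1+\epsilon\right)\hat{A}_t\right)\right], \qquad r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)}, $$

where $\hat{A}_t$ is an advantage estimate and $\epsilon$ is the clipping parameter; the clipping range and other hyperparameters used in this work are not stated in the abstract.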
Methods
The planning process is formulated as an optimization problem. A set of empirical rules is used to create auxiliary planning structures from target volumes and organs-at-risk (OARs), along with their associated planning objectives. Special attention is given to overlapping structures with potentially conflicting objectives. These planning objectives are fed into an in-house optimization engine to generate the spot monitor unit (MU) values. A decision-making policy network trained with PPO iteratively adjusts the parameters of the involved planning objectives. The policy network predicts actions in a continuous action space and guides the treatment planning system to refine the PBS treatment plans using a novel dose distribution-based reward function. A total of 34 H&N patients (30 for training and 4 for testing) and 26 liver patients (20 for training and 6 for testing) are included in this study to train the model and to verify the effectiveness and generalizability of the proposed method.
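To make the adjust-optimize-reward loop concrete, the sketch below shows one plausible interface between a PPO agent and a planning engine. It is a schematic under several assumptions: the names (run_spot_optimization, dose_reward, PlanningEnv), the state features, the toy dose surrogate, and the reward form are illustrative and are not taken from the paper, whose in-house optimization engine and exact dose distribution-based reward are not detailed in the abstract.

```python
# Schematic planning environment for PPO-based objective tuning.
# All names and numerical details are illustrative, not from the paper.
import numpy as np


def run_spot_optimization(objective_params: np.ndarray) -> np.ndarray:
    """Stand-in for the in-house optimization engine: maps the current
    planning-objective parameters to a voxel dose distribution.
    Here it is a toy surrogate so the script runs end to end."""
    rng = np.random.default_rng(0)
    target = rng.normal(60.0, 2.0, size=1000)        # pseudo target voxels (Gy)
    oar = rng.normal(30.0, 5.0, size=1000)           # pseudo OAR voxels (Gy)
    # Pretend tighter OAR objectives (larger params) pull OAR dose down slightly.
    oar = oar - 0.1 * objective_params.sum()
    return np.concatenate([target, oar])


def dose_reward(dose: np.ndarray, prev_dose: np.ndarray) -> float:
    """Toy dose-distribution-based reward: rewards reductions in mean OAR dose
    and penalizes loss of target coverage between consecutive iterations."""
    target, oar = dose[:1000], dose[1000:]
    prev_target, prev_oar = prev_dose[:1000], prev_dose[1000:]
    coverage_change = np.mean(target >= 57.0) - np.mean(prev_target >= 57.0)
    oar_change = prev_oar.mean() - oar.mean()
    return 10.0 * coverage_change + oar_change


class PlanningEnv:
    """Gym-style interface a PPO agent can interact with:
    state  = summary of the current dose distribution,
    action = continuous adjustments to the planning-objective parameters."""

    def __init__(self, n_objectives: int = 8):
        self.n_objectives = n_objectives

    def reset(self) -> np.ndarray:
        self.params = np.ones(self.n_objectives)
        self.dose = run_spot_optimization(self.params)
        return self._state()

    def step(self, action: np.ndarray):
        self.params = np.clip(self.params + action, 0.1, 10.0)
        new_dose = run_spot_optimization(self.params)
        reward = dose_reward(new_dose, self.dose)
        self.dose = new_dose
        return self._state(), reward, False, {}

    def _state(self) -> np.ndarray:
        target, oar = self.dose[:1000], self.dose[1000:]
        return np.array([target.mean(), target.std(), oar.mean(), oar.max()])


if __name__ == "__main__":
    env = PlanningEnv()
    state = env.reset()
    # A trained PPO policy would replace this random Gaussian action.
    for _ in range(3):
        action = np.random.normal(0.0, 0.2, size=env.n_objectives)
        state, reward, done, _ = env.step(action)
        print(f"state={np.round(state, 2)}, reward={reward:.3f}")
```

In practice, a standard PPO implementation with a Gaussian policy head would be trained against an interface of this kind; the network architecture and PPO hyperparameters are not specified in the abstract.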
Results
Proton H&N treatment plans generated by the model show improved OAR sparing with equal or superior target coverage when compared with human-generated plans. Moreover, additional experiments on liver cancer demonstrate that the proposed method can be successfully generalized to other treatment sites.
Conclusions
The automatic treatment planning model can generate complex H&N plans whose quality is comparable or superior to that of plans produced by experienced human planners. Compared with existing works, our method can handle more planning objectives in continuous action spaces. To the best of our knowledge, this is the first DRL-based automatic treatment planning model capable of achieving human-level performance for H&N cancers.