Automated treatment planning with deep reinforcement learning for head-and-neck (HN) cancer intensity modulated radiation therapy (IMRT).

IF 3.3 · CAS Tier 3 (Medicine) · Q2 ENGINEERING, BIOMEDICAL
Dongrong Yang, Xin Wu, Xinyi Li, Ryan Mansfield, Yibo Xie, Qiuwen Wu, Q Jackie Wu, Yang Sheng
DOI: 10.1088/1361-6560/ad965d · Physics in Medicine and Biology · Published 2024-11-22 · Journal Article
Citations: 0

Abstract

Automated treatment planning with deep reinforcement learning for head-and-neck (HN) cancer intensity modulated radiation therapy (IMRT).

Purpose: To develop a deep reinforcement learning (DRL) agent that interacts with the treatment planning system (TPS) on its own to automatically generate intensity modulated radiation therapy (IMRT) treatment plans for head-and-neck (HN) cancer with consistent organ-at-risk (OAR) sparing performance.

Methods: With IRB approval, one hundred and twenty HN patients receiving IMRT were included. The DRL agent was trained on 20 patients. During each inverse optimization process, the intermediate dosimetric endpoint values, dose-volume constraint values and structure objective function losses were collected as the DRL states. Taking adjustments of the objective constraints as actions, the agent learned to seek optimal rewards by balancing OAR sparing and planning target volume (PTV) coverage. During model training, rewards computed from the current dose-volume histogram (DVH) endpoints and clinical objectives were sent back to the agent to update its action policy. The trained agent was evaluated on the remaining 100 patients.

Results: The DRL agent generated a clinically acceptable IMRT plan within 12.4±3.1 minutes without human intervention. DRL plans showed a lower PTV maximum dose (109.2%) than clinical plans (112.4%) (p<.05). The average median doses to the left parotid, right parotid, oral cavity, larynx and pharynx in DRL plans were 15.6 Gy, 12.2 Gy, 25.7 Gy, 27.3 Gy and 32.1 Gy respectively, comparable to 17.1 Gy, 15.7 Gy, 24.4 Gy, 23.7 Gy and 35.5 Gy in the corresponding clinical plans. The maximum doses to the cord+5mm, brainstem and mandible were also comparable between the two groups. In addition, DRL plans demonstrated reduced variability, as evidenced by smaller 95% confidence intervals. The total MU of the DRL plans was 1611 vs 1870 (p<.05) for clinical plans. These results signal the DRL's consistent planning strategy, in contrast to the planners' occasional back-and-forth decision-making during planning.
Conclusion: The proposed deep reinforcement learning (DRL) agent can efficiently generate HN IMRT plans with consistent quality.
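The state-action-reward loop described in the Methods can be sketched as a toy reinforcement learning program. Everything below is a hypothetical stand-in, not the authors' implementation: the `ToyTPS` "planning system", the single OAR constraint, the dose/coverage model and the reward weights are all invented for illustration of the general technique (agent adjusts an objective constraint, re-optimizes, and is rewarded for balancing OAR sparing against PTV coverage).

```python
# Hypothetical sketch of the abstract's DRL planning loop.
# ToyTPS, its dose model, and all reward weights are illustrative only.
import random

class ToyTPS:
    """Toy 'treatment planning system' with one adjustable OAR constraint."""
    def __init__(self):
        self.oar_limit = 30.0  # OAR dose constraint in Gy (the agent's knob)

    def optimize(self):
        # Toy trade-off: tightening the OAR limit lowers OAR dose
        # but erodes PTV coverage.
        oar_dose = 0.8 * self.oar_limit + random.uniform(-1.0, 1.0)
        ptv_coverage = min(1.0, 0.90 + 0.003 * self.oar_limit)
        return oar_dose, ptv_coverage

def reward(oar_dose, ptv_coverage, oar_goal=26.0, cov_goal=0.95):
    # Penalize OAR dose above its goal and PTV coverage below its goal,
    # mirroring the "balance OAR sparing and PTV coverage" reward idea.
    return (-max(0.0, oar_dose - oar_goal)
            - 100.0 * max(0.0, cov_goal - ptv_coverage))

def run_agent(episodes=200, epsilon=0.1, seed=0):
    random.seed(seed)
    actions = (-1.0, 0.0, +1.0)        # loosen/keep/tighten the constraint (Gy)
    q = {a: 0.0 for a in actions}      # stateless Q-table, for brevity
    tps = ToyTPS()
    for _ in range(episodes):
        # epsilon-greedy action selection
        a = random.choice(actions) if random.random() < epsilon \
            else max(q, key=q.get)
        tps.oar_limit = min(40.0, max(10.0, tps.oar_limit + a))
        oar_dose, cov = tps.optimize()  # "inverse optimization" step
        r = reward(oar_dose, cov)
        q[a] += 0.1 * (r - q[a])        # incremental value update
    return tps.oar_limit, q

final_limit, q = run_agent()
print(f"final OAR constraint: {final_limit:.1f} Gy")
```

A real system would replace `ToyTPS.optimize` with calls into a clinical TPS, use the full DVH-derived state vector, and train a neural policy rather than a three-entry Q-table; the loop structure, however, is the same.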

Source journal
Physics in Medicine and Biology (Medicine · Engineering, Biomedical)
CiteScore: 6.50
Self-citation rate: 14.30%
Articles per year: 409
Review time: 2 months
Journal scope: The development and application of theoretical, computational and experimental physics to medicine, physiology and biology. Topics covered are: therapy physics (including ionizing and non-ionizing radiation); biomedical imaging (e.g. x-ray, magnetic resonance, ultrasound, optical and nuclear imaging); image-guided interventions; image reconstruction and analysis (including kinetic modelling); artificial intelligence in biomedical physics and analysis; nanoparticles in imaging and therapy; radiobiology; radiation protection and patient dose monitoring; radiation dosimetry