Cancer Screening Benefits Maximization Using Markov Decision Process Models: A Systematic Review

Naser Mohamadkhani, Mohammad Hadian
{"title":"利用马尔可夫决策过程模型实现癌症筛查效益最大化:系统回顾","authors":"Naser Mohamadkhani, Mohammad Hadian","doi":"10.5812/jjcdc-141686","DOIUrl":null,"url":null,"abstract":"Context: Due to the chronic nature of cancer, screening programs were a set of sequential decisions taken over time. Markov decision process (MDP) and partially observable Markov decision process (POMDP) models were the mathematical tools applied in studies, including sequential decision-making such as screening protocols for medical decision-making. Objectives: The main goal of this study was to investigate optimal policy for cancer screening using MDP and POMDP models. Methods: We performed a review of articles published within July 2000 to November 2022 in PubMed, Web of Science, and Scopus databases. The stopping age, the type of optimal strategy, the benefits of the optimal policy, and the relationship between age and risk threshold were extracted. Studies that did not use MDPs and POMDPs as the mathematical maximization models in cancer screening, review articles, editorials or commentaries, non-English articles, and those that did not focus on optimization were excluded. Results: From 532 articles, 6 studies met the study criteria. All studies suggested that the optimal policy was control-limit, and the cancer risk threshold was a non-decreasing function of age. Three studies specified a stopping age for cancer screening. In five studies, the optimal policies outperformed the guidelines or no screening strategy. Conclusions: Two essential factors in screening decisions were cancer risk and age, which were individual variables. The control-limit policy included these factors in decision-making for cancer screening. These policies highlighted personalized screening and showed that this type of screening could outperform cancer screening guidelines regarding economic and clinical benefits.","PeriodicalId":471457,"journal":{"name":"Jundishapur Journal of Chronic Disease Care","volume":"9 2","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Cancer Screening Benefits Maximization Using Markov Decision Process Models: A Systematic Review\",\"authors\":\"Naser Mohamadkhani, Mohammad Hadian\",\"doi\":\"10.5812/jjcdc-141686\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Context: Due to the chronic nature of cancer, screening programs were a set of sequential decisions taken over time. Markov decision process (MDP) and partially observable Markov decision process (POMDP) models were the mathematical tools applied in studies, including sequential decision-making such as screening protocols for medical decision-making. Objectives: The main goal of this study was to investigate optimal policy for cancer screening using MDP and POMDP models. Methods: We performed a review of articles published within July 2000 to November 2022 in PubMed, Web of Science, and Scopus databases. The stopping age, the type of optimal strategy, the benefits of the optimal policy, and the relationship between age and risk threshold were extracted. Studies that did not use MDPs and POMDPs as the mathematical maximization models in cancer screening, review articles, editorials or commentaries, non-English articles, and those that did not focus on optimization were excluded. Results: From 532 articles, 6 studies met the study criteria. 
All studies suggested that the optimal policy was control-limit, and the cancer risk threshold was a non-decreasing function of age. Three studies specified a stopping age for cancer screening. In five studies, the optimal policies outperformed the guidelines or no screening strategy. Conclusions: Two essential factors in screening decisions were cancer risk and age, which were individual variables. The control-limit policy included these factors in decision-making for cancer screening. These policies highlighted personalized screening and showed that this type of screening could outperform cancer screening guidelines regarding economic and clinical benefits.\",\"PeriodicalId\":471457,\"journal\":{\"name\":\"Jundishapur Journal of Chronic Disease Care\",\"volume\":\"9 2\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-03-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Jundishapur Journal of Chronic Disease Care\",\"FirstCategoryId\":\"0\",\"ListUrlMain\":\"https://doi.org/10.5812/jjcdc-141686\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Jundishapur Journal of Chronic Disease Care","FirstCategoryId":"0","ListUrlMain":"https://doi.org/10.5812/jjcdc-141686","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Context: Because of the chronic nature of cancer, screening programs are a set of sequential decisions taken over time. Markov decision process (MDP) and partially observable Markov decision process (POMDP) models are the mathematical tools applied in studies involving sequential decision-making, such as screening protocols in medical decision-making.

Objectives: The main goal of this study was to investigate optimal policies for cancer screening derived from MDP and POMDP models.

Methods: We reviewed articles published between July 2000 and November 2022 in the PubMed, Web of Science, and Scopus databases. The stopping age, the type of optimal strategy, the benefits of the optimal policy, and the relationship between age and risk threshold were extracted. Studies that did not use MDPs or POMDPs as the mathematical maximization model in cancer screening, review articles, editorials or commentaries, non-English articles, and studies that did not focus on optimization were excluded.

Results: Of 532 articles, 6 studies met the inclusion criteria. All studies found that the optimal policy was a control-limit policy and that the cancer risk threshold was a non-decreasing function of age. Three studies specified a stopping age for cancer screening. In five studies, the optimal policies outperformed existing guidelines or a no-screening strategy.

Conclusions: Two essential factors in screening decisions are cancer risk and age, both of which are individual-level variables. Control-limit policies incorporate these factors into cancer screening decisions. They highlight personalized screening and show that it can outperform cancer screening guidelines in terms of economic and clinical benefits.
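As a rough illustration of the control-limit structure described in the abstract, the sketch below sets up a toy finite-horizon screening model and extracts an age-specific risk threshold by backward induction. All parameters (screening cost, detection benefit, risk dynamics) are invented for illustration and are not taken from any of the reviewed studies; whether the resulting threshold is actually non-decreasing in age depends entirely on these assumptions.

```python
# Minimal sketch of a control-limit screening policy: the patient's cancer risk
# (belief) is discretized, drifts upward with age, and at each age the decision
# is Screen or Wait. Backward induction yields, for each age, the smallest risk
# at which screening is optimal -- the age-specific control limit.
# All numbers below are illustrative assumptions, not results from the review.

import numpy as np

AGES = np.arange(40, 76)             # decision epochs: annual decisions, ages 40..75
RISKS = np.linspace(0.0, 0.2, 201)   # discretized cancer-risk (belief) grid

# Hypothetical model parameters (assumptions for illustration only)
SCREEN_COST = 0.5       # utility cost of one screening episode
DETECT_BENEFIT = 20.0   # benefit of early detection
SENSITIVITY = 0.9       # probability a screen detects prevalent cancer
LATE_PENALTY = 30.0     # expected penalty of undetected cancer while waiting
RISK_GROWTH = 0.005     # annual additive growth of risk while unscreened

def next_risk(risk):
    """Risk drifts upward with age while the patient goes unscreened."""
    return np.clip(risk + RISK_GROWTH, RISKS[0], RISKS[-1])

def interp_value(values, risk):
    """Continuation value at an arbitrary risk level, by linear interpolation."""
    return np.interp(risk, RISKS, values)

# Backward induction: V holds the value function over the risk grid at the
# current epoch; thresholds stores the control limit found at each age.
V = np.zeros(len(RISKS))             # terminal value after the last decision age
thresholds = {}

for age in AGES[::-1]:
    V_new = np.empty_like(V)
    screen_better = np.zeros(len(RISKS), dtype=bool)
    for i, r in enumerate(RISKS):
        # Wait: risk grows; an undetected cancer contributes an expected penalty.
        q_wait = -LATE_PENALTY * r + interp_value(V, next_risk(r))
        # Screen: pay the cost; with probability r * SENSITIVITY cancer is found
        # early (benefit); otherwise risk is crudely reset toward the baseline.
        p_detect = r * SENSITIVITY
        q_screen = (-SCREEN_COST
                    + p_detect * DETECT_BENEFIT
                    + (1 - p_detect) * interp_value(V, RISKS[0]))
        V_new[i] = max(q_wait, q_screen)
        screen_better[i] = q_screen >= q_wait
    # Control limit at this age: the smallest risk at which Screen is optimal.
    idx = int(np.argmax(screen_better)) if screen_better.any() else None
    thresholds[int(age)] = float(RISKS[idx]) if idx is not None else None
    V = V_new

for age in (40, 50, 60, 70, 75):
    t = thresholds[age]
    print(f"age {age}: screen if estimated risk >= {t:.3f}" if t is not None
          else f"age {age}: never screen")
```

The decision rule that comes out of this construction is exactly the control-limit form discussed in the review: screen if and only if the current risk estimate meets or exceeds an age-specific threshold, with a stopping age corresponding to epochs where no risk level justifies screening.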