{"title":"药物质量自适应学习与退出临床试验患者招募优化","authors":"Zhili Tian, Weidong Han, Warrren B Powell","doi":"10.1287/MSOM.2020.0936","DOIUrl":null,"url":null,"abstract":"Problem definition: Clinical trials are crucial to new drug development. This study investigates optimal patient enrollment in clinical trials with interim analyses, which are analyses of treatment responses from patients at intermediate points. Our model considers uncertainties in patient enrollment and drug treatment effectiveness. We consider the benefits of completing a trial early and the cost of accelerating a trial by maximizing the net present value of drug cumulative profit. Academic/practical relevance: Clinical trials frequently account for the largest cost in drug development, and patient enrollment is an important problem in trial management. Our study develops a dynamic program, accurately capturing the dynamics of the problem, to optimize patient enrollment while learning the treatment effectiveness of an investigated drug. Methodology: The model explicitly captures both the physical state (enrolled patients) and belief states about the effectiveness of the investigated drug and a standard treatment drug. Using Bayesian updates and dynamic programming, we establish monotonicity of the value function in state variables and characterize an optimal enrollment policy. We also introduce, for the first time, the use of backward approximate dynamic programming (ADP) for this problem class. We illustrate the findings using a clinical trial program from a leading firm. Our study performs sensitivity analyses of the input parameters on the optimal enrollment policy. Results: The value function is monotonic in cumulative patient enrollment and the average responses of treatment for the investigated drug and standard treatment drug. The optimal enrollment policy is nondecreasing in the average response from patients using the investigated drug and is nonincreasing in cumulative patient enrollment in periods between two successive interim analyses. The forward ADP algorithm (or backward ADP algorithm) exploiting the monotonicity of the value function reduced the run time from 1.5 months using the exact method to a day (or 20 minutes) within 4% of the exact method. Through an application to a leading firm’s clinical trial program, the study demonstrates that the firm can have a sizable gain of drug profit following the optimal policy that our model provides. Managerial implications: We developed a new model for improving the management of clinical trials. Our study provides insights of an optimal policy and insights into the sensitivity of value function to the dropout rate and prior probability distribution. A firm can have a sizable gain in the drug’s profit by managing its trials using the optimal policies and the properties of value function. We illustrated that firms can use the ADP algorithms to develop their patient enrollment strategies.","PeriodicalId":18108,"journal":{"name":"Manuf. Serv. Oper. 
Manag.","volume":"37 1 1","pages":"580-599"},"PeriodicalIF":0.0000,"publicationDate":"2021-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"Adaptive Learning of Drug Quality and Optimization of Patient Recruitment for Clinical Trials with Dropouts\",\"authors\":\"Zhili Tian, Weidong Han, Warrren B Powell\",\"doi\":\"10.1287/MSOM.2020.0936\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Problem definition: Clinical trials are crucial to new drug development. This study investigates optimal patient enrollment in clinical trials with interim analyses, which are analyses of treatment responses from patients at intermediate points. Our model considers uncertainties in patient enrollment and drug treatment effectiveness. We consider the benefits of completing a trial early and the cost of accelerating a trial by maximizing the net present value of drug cumulative profit. Academic/practical relevance: Clinical trials frequently account for the largest cost in drug development, and patient enrollment is an important problem in trial management. Our study develops a dynamic program, accurately capturing the dynamics of the problem, to optimize patient enrollment while learning the treatment effectiveness of an investigated drug. Methodology: The model explicitly captures both the physical state (enrolled patients) and belief states about the effectiveness of the investigated drug and a standard treatment drug. Using Bayesian updates and dynamic programming, we establish monotonicity of the value function in state variables and characterize an optimal enrollment policy. We also introduce, for the first time, the use of backward approximate dynamic programming (ADP) for this problem class. We illustrate the findings using a clinical trial program from a leading firm. Our study performs sensitivity analyses of the input parameters on the optimal enrollment policy. Results: The value function is monotonic in cumulative patient enrollment and the average responses of treatment for the investigated drug and standard treatment drug. The optimal enrollment policy is nondecreasing in the average response from patients using the investigated drug and is nonincreasing in cumulative patient enrollment in periods between two successive interim analyses. The forward ADP algorithm (or backward ADP algorithm) exploiting the monotonicity of the value function reduced the run time from 1.5 months using the exact method to a day (or 20 minutes) within 4% of the exact method. Through an application to a leading firm’s clinical trial program, the study demonstrates that the firm can have a sizable gain of drug profit following the optimal policy that our model provides. Managerial implications: We developed a new model for improving the management of clinical trials. Our study provides insights of an optimal policy and insights into the sensitivity of value function to the dropout rate and prior probability distribution. A firm can have a sizable gain in the drug’s profit by managing its trials using the optimal policies and the properties of value function. We illustrated that firms can use the ADP algorithms to develop their patient enrollment strategies.\",\"PeriodicalId\":18108,\"journal\":{\"name\":\"Manuf. Serv. Oper. 
Manag.\",\"volume\":\"37 1 1\",\"pages\":\"580-599\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-03-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Manuf. Serv. Oper. Manag.\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1287/MSOM.2020.0936\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Manuf. Serv. Oper. Manag.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1287/MSOM.2020.0936","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Adaptive Learning of Drug Quality and Optimization of Patient Recruitment for Clinical Trials with Dropouts
Problem definition: Clinical trials are crucial to new drug development. This study investigates optimal patient enrollment in clinical trials with interim analyses, which are analyses of treatment responses from patients at intermediate points of the trial. Our model considers uncertainties in patient enrollment and drug treatment effectiveness. We capture the benefit of completing a trial early and the cost of accelerating a trial by maximizing the net present value of the drug's cumulative profit.

Academic/practical relevance: Clinical trials frequently account for the largest cost in drug development, and patient enrollment is an important problem in trial management. Our study develops a dynamic program that accurately captures the dynamics of the problem in order to optimize patient enrollment while learning the treatment effectiveness of the investigated drug.

Methodology: The model explicitly captures both the physical state (enrolled patients) and the belief states about the effectiveness of the investigated drug and a standard treatment drug. Using Bayesian updates and dynamic programming, we establish monotonicity of the value function in the state variables and characterize an optimal enrollment policy. We also introduce, for the first time, backward approximate dynamic programming (ADP) for this problem class. We illustrate the findings using a clinical trial program from a leading firm and perform sensitivity analyses of the input parameters on the optimal enrollment policy.

Results: The value function is monotonic in cumulative patient enrollment and in the average treatment responses for the investigated drug and the standard treatment drug. The optimal enrollment policy is nondecreasing in the average response of patients using the investigated drug and nonincreasing in cumulative patient enrollment during the periods between two successive interim analyses. The forward ADP algorithm (respectively, the backward ADP algorithm) exploiting the monotonicity of the value function reduced the run time from 1.5 months for the exact method to one day (respectively, 20 minutes), with results within 4% of the exact method. Through an application to a leading firm's clinical trial program, the study demonstrates that the firm can achieve a sizable gain in drug profit by following the optimal policy that our model provides.

Managerial implications: We developed a new model for improving the management of clinical trials. Our study provides insights into the optimal policy and into the sensitivity of the value function to the dropout rate and the prior probability distribution. A firm can achieve a sizable gain in drug profit by managing its trials using the optimal policies and the properties of the value function. We illustrate that firms can use the ADP algorithms to develop their patient enrollment strategies.
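
The methodology relies on Bayesian updating of belief states about the effectiveness of the investigated drug and the standard treatment drug at each interim analysis. The following is a minimal sketch, assuming binary (responder/non-responder) patient outcomes and a Beta prior on each arm's response probability; the paper's actual belief model, priors, and state representation are not reproduced here, and all names and numbers below are illustrative.

```python
# Minimal sketch of a Bayesian belief update for treatment effectiveness,
# assuming binary responses and a Beta prior on each arm's response rate.
# This is an illustrative assumption, not the paper's exact belief model.
from dataclasses import dataclass


@dataclass
class BetaBelief:
    """Beta(alpha, beta) belief about an arm's response probability."""
    alpha: float  # pseudo-count of responders
    beta: float   # pseudo-count of non-responders

    def update(self, responders: int, non_responders: int) -> "BetaBelief":
        """Conjugate update after observing responses at an interim analysis."""
        return BetaBelief(self.alpha + responders, self.beta + non_responders)

    @property
    def mean(self) -> float:
        """Posterior mean response rate (the 'average response' in the abstract)."""
        return self.alpha / (self.alpha + self.beta)


# Usage: update beliefs for the investigated drug and the standard treatment
# after an interim analysis; dropouts contribute no response data.
investigated = BetaBelief(alpha=1.0, beta=1.0)  # uninformative prior (assumption)
standard = BetaBelief(alpha=2.0, beta=2.0)      # prior from historical data (assumption)

investigated = investigated.update(responders=18, non_responders=7)
standard = standard.update(responders=12, non_responders=13)
print(investigated.mean, standard.mean)
```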
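
The results state that the value function is nonincreasing in cumulative enrollment and nondecreasing in the investigated drug's average response, and that the ADP algorithms exploit this structure. The sketch below shows one simple way such monotonicity could be imposed on a grid of approximate values during a backward pass; the paper's actual ADP algorithm, state discretization, and projection step may differ, and the grid values here are made up.

```python
# Hedged sketch: enforcing the value function's monotone structure on a grid
# of approximate values, as a backward ADP pass might do at each stage.
# values[i][j] is the estimate at cumulative-enrollment level i and
# discretized average-response level j (illustrative assumption).
from typing import List


def enforce_monotone(values: List[List[float]]) -> List[List[float]]:
    """Return a grid nondecreasing in the response index j and
    nonincreasing in the enrollment index i."""
    n_enroll, n_resp = len(values), len(values[0])
    v = [row[:] for row in values]
    # Nondecreasing in average response: sweep a running max left to right.
    for i in range(n_enroll):
        for j in range(1, n_resp):
            v[i][j] = max(v[i][j], v[i][j - 1])
    # Nonincreasing in cumulative enrollment: sweep a running min downward.
    # Taking the min of two rows that are nondecreasing in j preserves that property.
    for i in range(1, n_enroll):
        for j in range(n_resp):
            v[i][j] = min(v[i][j], v[i - 1][j])
    return v


# Usage on a small, noisy grid of sampled value estimates (made-up numbers).
raw = [
    [5.0, 5.4, 6.1, 6.0],
    [5.2, 5.1, 5.9, 6.3],
    [4.8, 5.0, 5.2, 5.6],
]
for row in enforce_monotone(raw):
    print(row)
```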