Winnie Chen, Martin Howell, Alan Cass, Gillian Gorham, Kirsten Howard
{"title":"了解经济评价模型:临床医生读者指南》。","authors":"Winnie Chen, Martin Howell, Alan Cass, Gillian Gorham, Kirsten Howard","doi":"10.5694/mja2.52409","DOIUrl":null,"url":null,"abstract":"<p>Economic evaluations have a long history in health care.<span><sup>1, 2</sup></span> Full economic evaluations aim to inform decision making through comparing the costs and outcomes of two or more interventions, strategies, programs or policies, to estimate their efficiency via an incremental cost-effectiveness ratio. The premise for conducting economic evaluations is that health care resources are finite, and there is an opportunity cost when resources are allocated to one health care intervention over another.<span><sup>3, 4</sup></span> In Australia, economic evaluations are important considerations in policy decisions on what should be publicly funded under the Pharmaceutical Benefits Scheme (PBS)<span><sup>5</sup></span> and Medicare Benefits Schedule (MBS).<span><sup>6</sup></span> Furthermore, clinician–researchers are increasingly considering both clinical effectiveness and cost-effectiveness in evaluation studies and funding applications.</p><p>Full economic evaluations are classified by the type of evaluation, with the most common types being cost-effectiveness analysis (CEA), cost–utility analysis (CUA) and cost–benefit analysis (CBA).<span><sup>3</sup></span> The method for conducting these economic evaluations can be study-based or decision–analytic model-based, or both.<span><sup>7</sup></span> Modelled economic evaluations can overcome some of the limitations associated with study-based economic evaluations.<span><sup>7, 8</sup></span> The complexity and use of modelled evaluations has increased with improved computing power and data availability. Several published articles offer clinicians an introduction to economic evaluations,<span><sup>1, 9, 10</sup></span> but few to date have focused on modelled evaluations.<span><sup>11, 12</sup></span> In this key research skills article, we aim to improve clinician understanding of modelled evaluations. In the Supporting Information, we illustrate key modelling concepts using two recently published models in the <i>Medical Journal of Australia</i>.</p><p>Why are model-based evaluations done? Modelled evaluations can be performed alongside empiric studies or as standalone studies. Modelled evaluations do not replace study-based evaluations — rather, they enable evidence synthesis across multiple studies into relevant decision-making contexts, extrapolation of trial-based results beyond the time horizon, and hypothesis generation where data are unavailable.<span><sup>7, 8, 13</sup></span> Box 1 outlines key areas in which study-based and model-based economic evaluations differ.</p><p>We use an example of a randomised controlled trial (RCT) calculating the cost-effectiveness of a new blood pressure medicine compared with placebo to illustrate why modelled evaluations might be required. Firstly, the model can be used to synthesise evidence across multiple trials for decision making.<span><sup>8</sup></span> Whereas the RCT is only comparing the new medicine against the placebo, a modelled evaluation can extend this to compare the new medication against several commonly used blood pressure agents not included in the trial.</p><p>Secondly, models can extrapolate findings beyond the trial-based follow-up period (time horizon). 
If the above RCT was conducted over two years, the quality-adjusted life years (QALYs; Box 2) gained through improved blood pressure management are likely to be accrued over an individual's lifetime (eg, long term reduction in cardiovascular events) rather than simply within the two-year follow-up period of the RCT. Therefore, the model can be used to extrapolate cost-effectiveness to an appropriate time horizon.<span><sup>7</sup></span></p><p>Thirdly, using cost-effectiveness results from a single trial can be problematic if the trial is not generalisable to the population or health care setting in which the decision is being made.<span><sup>7</sup></span> For example, if the RCT was done in the United States, a modelled evaluation can use Australian estimates of health care use and costs to assist with decision making on whether the new medication is cost-effective in our local context.</p><p>The type of decision–analytic model used depends on several factors, including the decision context, data availability, and interpretability of the model.<span><sup>15</sup></span> Below we cover several common model types including simple decision trees, Markov cohort, and Markov microsimulation models. Box 3 summarises common terms used in modelled economic evaluation.</p><p>A model should not only provide results on the incremental cost-effectiveness of an intervention, but also address the question of which assumptions would result in a higher or lower ICER. Uncertainty of cost-effectiveness results due to uncertainty in model assumptions is unavoidable.<span><sup>8, 13, 18</sup></span> Uncertainty can be broadly categorised into methodological, structural, and parameter uncertainty. Methodological uncertainty refers to methodological selection of model parameters, such as decisions on time horizon or cycle length. Structural uncertainty refers to uncertainty around the model structure, such as deciding which health states are chosen to reflect the disease process.<span><sup>18</sup></span> Parameter uncertainty refers to uncertainty around input data, including probability of an event, costs, and health outcome estimates.<span><sup>8</sup></span> For example, parameter uncertainty may present in the form of standard deviations or confidence intervals around a mean; or may arise as a result of multiple literature estimates of parameter values.</p><p>A sensitivity analysis assesses the impact of parameter uncertainty. Deterministic sensitivity analysis includes one-way, two-way, and multi-way sensitivity analyses, and these involve varying the inputs for one or more parameters, and seeing the influence those changes have on the costs, outcomes and ICERs.<span><sup>19</sup></span> For example, in Box 5, the probability of death is 5% in people with CVD. The one-way sensitivity analysis can examine how ICER changes when a range of alternative probabilities from 3% to 7% are used instead of 5%. Deterministic sensitivity analysis is useful in identifying which parameters have the largest effect on costs, health outcomes, and cost-effectiveness, and whether the ICER varies from cost-effective to not cost-effective with the parameter changes.</p><p>Probability sensitivity analysis (PSA) involves varying one or more parameters by entering them as a distribution, rather than as a single fixed value.<span><sup>19</sup></span> This is also known as second-order Monte Carlo simulation. 
If we conduct a PSA for the Markov cohort model in Box 5, the parameters (probabilities, costs, outcomes) are entered as distributions rather than fixed values. For example, instead of entering an annual probability of death of 5% for people with CVD, we now enter the probability of death as a distribution with a mean value of 0.05. The model is then run multiple times (eg, 1000 runs) and a new value for the transition probability is selected from this distribution each time. Visually, PSA results can be plotted on a cost-effectiveness plane, with the ICER from each run represented as a single dot (Box 7). The diagonal dotted line in Box 7 represents an arbitrary willingness to pay threshold of $50 000 per QALY. Red dots represent each run that is above the willingness to pay threshold (considered not cost-effective), whereas green dots represent each run that is below the willingness to pay threshold (considered cost-effective). The larger green circle represents a 95% confidence interval around the estimated ICER of $75 742 per QALY.</p><p>We apply our understanding of modelled economic evaluations to published models, using examples from two 2023 articles published in the <i>Medical Journal of Australia</i>. The Markov cohort model by Xiao and colleagues examines the cost-effectiveness of chronic hepatitis B screening strategies against usual care.<span><sup>20</sup></span> The Markov microsimulation model by Venkataraman and colleagues examines the cost-effectiveness of several risk score and coronary artery calcium score-based strategies for initiating statin therapy.<span><sup>21</sup></span> We have summarised key information from these two modelled evaluations in the Supporting Information, table. This table seeks to highlight key concepts rather than apply a relevant critical appraisal tool such as CHEERS (Consolidated Health Economic Evaluation Reporting Standards) or similar.<span><sup>22-24</sup></span></p><p>In conclusion, understanding modelled economic evaluations is valuable for clinicians involved in health research or policy decisions. We encourage readers interested in health economics to access in-depth resources, which include worked examples on how to construct a model.<span><sup>4, 8, 15</sup></span></p><p>Open access publishing facilitated by Charles Darwin University, as part of the Wiley - Charles Darwin University agreement via the Council of Australian University Librarians.</p><p>No relevant disclosures.</p><p>Not commissioned; externally peer reviewed.</p>","PeriodicalId":18214,"journal":{"name":"Medical Journal of Australia","volume":null,"pages":null},"PeriodicalIF":6.7000,"publicationDate":"2024-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.5694/mja2.52409","citationCount":"0","resultStr":"{\"title\":\"Understanding modelled economic evaluations: a reader's guide for clinicians\",\"authors\":\"Winnie Chen, Martin Howell, Alan Cass, Gillian Gorham, Kirsten Howard\",\"doi\":\"10.5694/mja2.52409\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Economic evaluations have a long history in health care.<span><sup>1, 2</sup></span> Full economic evaluations aim to inform decision making through comparing the costs and outcomes of two or more interventions, strategies, programs or policies, to estimate their efficiency via an incremental cost-effectiveness ratio. 
The premise for conducting economic evaluations is that health care resources are finite, and there is an opportunity cost when resources are allocated to one health care intervention over another.<span><sup>3, 4</sup></span> In Australia, economic evaluations are important considerations in policy decisions on what should be publicly funded under the Pharmaceutical Benefits Scheme (PBS)<span><sup>5</sup></span> and Medicare Benefits Schedule (MBS).<span><sup>6</sup></span> Furthermore, clinician–researchers are increasingly considering both clinical effectiveness and cost-effectiveness in evaluation studies and funding applications.</p><p>Full economic evaluations are classified by the type of evaluation, with the most common types being cost-effectiveness analysis (CEA), cost–utility analysis (CUA) and cost–benefit analysis (CBA).<span><sup>3</sup></span> The method for conducting these economic evaluations can be study-based or decision–analytic model-based, or both.<span><sup>7</sup></span> Modelled economic evaluations can overcome some of the limitations associated with study-based economic evaluations.<span><sup>7, 8</sup></span> The complexity and use of modelled evaluations has increased with improved computing power and data availability. Several published articles offer clinicians an introduction to economic evaluations,<span><sup>1, 9, 10</sup></span> but few to date have focused on modelled evaluations.<span><sup>11, 12</sup></span> In this key research skills article, we aim to improve clinician understanding of modelled evaluations. In the Supporting Information, we illustrate key modelling concepts using two recently published models in the <i>Medical Journal of Australia</i>.</p><p>Why are model-based evaluations done? Modelled evaluations can be performed alongside empiric studies or as standalone studies. Modelled evaluations do not replace study-based evaluations — rather, they enable evidence synthesis across multiple studies into relevant decision-making contexts, extrapolation of trial-based results beyond the time horizon, and hypothesis generation where data are unavailable.<span><sup>7, 8, 13</sup></span> Box 1 outlines key areas in which study-based and model-based economic evaluations differ.</p><p>We use an example of a randomised controlled trial (RCT) calculating the cost-effectiveness of a new blood pressure medicine compared with placebo to illustrate why modelled evaluations might be required. Firstly, the model can be used to synthesise evidence across multiple trials for decision making.<span><sup>8</sup></span> Whereas the RCT is only comparing the new medicine against the placebo, a modelled evaluation can extend this to compare the new medication against several commonly used blood pressure agents not included in the trial.</p><p>Secondly, models can extrapolate findings beyond the trial-based follow-up period (time horizon). If the above RCT was conducted over two years, the quality-adjusted life years (QALYs; Box 2) gained through improved blood pressure management are likely to be accrued over an individual's lifetime (eg, long term reduction in cardiovascular events) rather than simply within the two-year follow-up period of the RCT. 
Therefore, the model can be used to extrapolate cost-effectiveness to an appropriate time horizon.<span><sup>7</sup></span></p><p>Thirdly, using cost-effectiveness results from a single trial can be problematic if the trial is not generalisable to the population or health care setting in which the decision is being made.<span><sup>7</sup></span> For example, if the RCT was done in the United States, a modelled evaluation can use Australian estimates of health care use and costs to assist with decision making on whether the new medication is cost-effective in our local context.</p><p>The type of decision–analytic model used depends on several factors, including the decision context, data availability, and interpretability of the model.<span><sup>15</sup></span> Below we cover several common model types including simple decision trees, Markov cohort, and Markov microsimulation models. Box 3 summarises common terms used in modelled economic evaluation.</p><p>A model should not only provide results on the incremental cost-effectiveness of an intervention, but also address the question of which assumptions would result in a higher or lower ICER. Uncertainty of cost-effectiveness results due to uncertainty in model assumptions is unavoidable.<span><sup>8, 13, 18</sup></span> Uncertainty can be broadly categorised into methodological, structural, and parameter uncertainty. Methodological uncertainty refers to methodological selection of model parameters, such as decisions on time horizon or cycle length. Structural uncertainty refers to uncertainty around the model structure, such as deciding which health states are chosen to reflect the disease process.<span><sup>18</sup></span> Parameter uncertainty refers to uncertainty around input data, including probability of an event, costs, and health outcome estimates.<span><sup>8</sup></span> For example, parameter uncertainty may present in the form of standard deviations or confidence intervals around a mean; or may arise as a result of multiple literature estimates of parameter values.</p><p>A sensitivity analysis assesses the impact of parameter uncertainty. Deterministic sensitivity analysis includes one-way, two-way, and multi-way sensitivity analyses, and these involve varying the inputs for one or more parameters, and seeing the influence those changes have on the costs, outcomes and ICERs.<span><sup>19</sup></span> For example, in Box 5, the probability of death is 5% in people with CVD. The one-way sensitivity analysis can examine how ICER changes when a range of alternative probabilities from 3% to 7% are used instead of 5%. Deterministic sensitivity analysis is useful in identifying which parameters have the largest effect on costs, health outcomes, and cost-effectiveness, and whether the ICER varies from cost-effective to not cost-effective with the parameter changes.</p><p>Probability sensitivity analysis (PSA) involves varying one or more parameters by entering them as a distribution, rather than as a single fixed value.<span><sup>19</sup></span> This is also known as second-order Monte Carlo simulation. If we conduct a PSA for the Markov cohort model in Box 5, the parameters (probabilities, costs, outcomes) are entered as distributions rather than fixed values. For example, instead of entering an annual probability of death of 5% for people with CVD, we now enter the probability of death as a distribution with a mean value of 0.05. 
The model is then run multiple times (eg, 1000 runs) and a new value for the transition probability is selected from this distribution each time. Visually, PSA results can be plotted on a cost-effectiveness plane, with the ICER from each run represented as a single dot (Box 7). The diagonal dotted line in Box 7 represents an arbitrary willingness to pay threshold of $50 000 per QALY. Red dots represent each run that is above the willingness to pay threshold (considered not cost-effective), whereas green dots represent each run that is below the willingness to pay threshold (considered cost-effective). The larger green circle represents a 95% confidence interval around the estimated ICER of $75 742 per QALY.</p><p>We apply our understanding of modelled economic evaluations to published models, using examples from two 2023 articles published in the <i>Medical Journal of Australia</i>. The Markov cohort model by Xiao and colleagues examines the cost-effectiveness of chronic hepatitis B screening strategies against usual care.<span><sup>20</sup></span> The Markov microsimulation model by Venkataraman and colleagues examines the cost-effectiveness of several risk score and coronary artery calcium score-based strategies for initiating statin therapy.<span><sup>21</sup></span> We have summarised key information from these two modelled evaluations in the Supporting Information, table. This table seeks to highlight key concepts rather than apply a relevant critical appraisal tool such as CHEERS (Consolidated Health Economic Evaluation Reporting Standards) or similar.<span><sup>22-24</sup></span></p><p>In conclusion, understanding modelled economic evaluations is valuable for clinicians involved in health research or policy decisions. We encourage readers interested in health economics to access in-depth resources, which include worked examples on how to construct a model.<span><sup>4, 8, 15</sup></span></p><p>Open access publishing facilitated by Charles Darwin University, as part of the Wiley - Charles Darwin University agreement via the Council of Australian University Librarians.</p><p>No relevant disclosures.</p><p>Not commissioned; externally peer reviewed.</p>\",\"PeriodicalId\":18214,\"journal\":{\"name\":\"Medical Journal of Australia\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":6.7000,\"publicationDate\":\"2024-08-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.5694/mja2.52409\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Medical Journal of Australia\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.5694/mja2.52409\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"MEDICINE, GENERAL & INTERNAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Medical Journal of Australia","FirstCategoryId":"3","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.5694/mja2.52409","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MEDICINE, GENERAL & INTERNAL","Score":null,"Total":0}
Understanding modelled economic evaluations: a reader's guide for clinicians
Economic evaluations have a long history in health care.1, 2 Full economic evaluations aim to inform decision making by comparing the costs and outcomes of two or more interventions, strategies, programs or policies, to estimate their efficiency via an incremental cost-effectiveness ratio (ICER). The premise for conducting economic evaluations is that health care resources are finite, and there is an opportunity cost when resources are allocated to one health care intervention over another.3, 4 In Australia, economic evaluations are important considerations in policy decisions on what should be publicly funded under the Pharmaceutical Benefits Scheme (PBS)5 and Medicare Benefits Schedule (MBS).6 Furthermore, clinician–researchers are increasingly considering both clinical effectiveness and cost-effectiveness in evaluation studies and funding applications.
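As a minimal worked example of an ICER (the costs and outcomes here are purely illustrative, not drawn from any study):

\[
\text{ICER} = \frac{C_{\text{new}} - C_{\text{comparator}}}{E_{\text{new}} - E_{\text{comparator}}}
= \frac{\$12\,000 - \$8\,000}{1.7\ \text{QALYs} - 1.5\ \text{QALYs}}
= \$20\,000 \text{ per QALY gained.}
\]

An intervention is then generally judged cost-effective if this ratio falls below the decision maker's willingness to pay for a unit of health outcome.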
Full economic evaluations are classified by the type of evaluation, with the most common types being cost-effectiveness analysis (CEA), cost–utility analysis (CUA) and cost–benefit analysis (CBA).3 The method for conducting these economic evaluations can be study-based or decision-analytic model-based, or both.7 Modelled economic evaluations can overcome some of the limitations associated with study-based economic evaluations.7, 8 The complexity and use of modelled evaluations have increased with improved computing power and data availability. Several published articles offer clinicians an introduction to economic evaluations,1, 9, 10 but few to date have focused on modelled evaluations.11, 12 In this key research skills article, we aim to improve clinician understanding of modelled evaluations. In the Supporting Information, we illustrate key modelling concepts using two models recently published in the Medical Journal of Australia.
Why are model-based evaluations done? Modelled evaluations can be performed alongside empirical studies or as standalone studies. Modelled evaluations do not replace study-based evaluations; rather, they enable evidence synthesis across multiple studies into relevant decision-making contexts, extrapolation of trial-based results beyond the trial time horizon, and hypothesis generation where data are unavailable.7, 8, 13 Box 1 outlines key areas in which study-based and model-based economic evaluations differ.
We use an example of a randomised controlled trial (RCT) calculating the cost-effectiveness of a new blood pressure medicine compared with placebo to illustrate why modelled evaluations might be required. Firstly, the model can be used to synthesise evidence across multiple trials for decision making.8 Whereas the RCT is only comparing the new medicine against the placebo, a modelled evaluation can extend this to compare the new medication against several commonly used blood pressure agents not included in the trial.
Secondly, models can extrapolate findings beyond the trial-based follow-up period (time horizon). If the above RCT was conducted over two years, the quality-adjusted life years (QALYs; Box 2) gained through improved blood pressure management are likely to be accrued over an individual's lifetime (eg, long term reduction in cardiovascular events) rather than simply within the two-year follow-up period of the RCT. Therefore, the model can be used to extrapolate cost-effectiveness to an appropriate time horizon.7
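As a rough illustration of why the time horizon matters, the sketch below accrues QALYs for a treated and an untreated cohort over a two-year and a 30-year horizon. The utility weights, the 30-year "lifetime" horizon and the 5% annual discount rate are assumptions chosen for illustration only; they are not values from the hypothetical RCT or from either published model.

```python
# Illustrative sketch only: how the chosen time horizon changes the QALY gain
# attributed to better blood pressure control. Every input here is an assumption
# made for illustration (utility weights, 30-year "lifetime" horizon, 5% discount rate).

DISCOUNT_RATE = 0.05  # assumed annual discount rate applied to future health outcomes

def discounted_qalys(annual_utility: float, years: int, rate: float = DISCOUNT_RATE) -> float:
    """Sum an annual utility weight over a horizon, discounting each later year."""
    return sum(annual_utility / (1 + rate) ** year for year in range(years))

# Hypothetical utility weights: treated patients are assumed to avoid some
# cardiovascular events, so their average quality of life is slightly higher.
utility_treated, utility_control = 0.82, 0.80

for horizon_years in (2, 30):  # within-trial horizon vs an assumed lifetime horizon
    gain = (discounted_qalys(utility_treated, horizon_years)
            - discounted_qalys(utility_control, horizon_years))
    print(f"{horizon_years}-year horizon: incremental QALYs per person = {gain:.2f}")
```

The same per-year benefit produces a much larger QALY gain over the longer horizon, which is why extrapolating beyond the trial follow-up period can change cost-effectiveness conclusions.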
Thirdly, using cost-effectiveness results from a single trial can be problematic if the trial is not generalisable to the population or health care setting in which the decision is being made.7 For example, if the RCT was done in the United States, a modelled evaluation can use Australian estimates of health care use and costs to assist with decision making on whether the new medication is cost-effective in our local context.
The type of decision-analytic model used depends on several factors, including the decision context, data availability, and interpretability of the model.15 Below we cover several common model types, including simple decision trees, Markov cohort models, and Markov microsimulation models. Box 3 summarises common terms used in modelled economic evaluation.
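For readers who would like to see the mechanics, the sketch below steps a hypothetical cohort through a simple three-state Markov model (no cardiovascular disease, CVD, dead). The states, transition probabilities, costs and utility weights are illustrative assumptions; they are not the Box 5 model or either published model.

```python
# Minimal sketch of a Markov cohort model. All numbers are illustrative assumptions.
# State order throughout: [No CVD, CVD, Dead].
import numpy as np

# Annual transition probabilities between health states (each row sums to 1).
transition = np.array([
    [0.90, 0.08, 0.02],   # from No CVD: stay well, develop CVD, or die
    [0.00, 0.95, 0.05],   # from CVD: 5% annual probability of death
    [0.00, 0.00, 1.00],   # Dead is an absorbing state
])

annual_cost = np.array([500.0, 5000.0, 0.0])     # assumed cost per state per cycle
annual_utility = np.array([0.85, 0.70, 0.0])     # assumed utility weight per state

cohort = np.array([1.0, 0.0, 0.0])               # the whole cohort starts without CVD
total_cost = total_qalys = 0.0
for cycle in range(20):                          # 20 one-year cycles (assumed horizon)
    total_cost += cohort @ annual_cost           # costs accrued this cycle
    total_qalys += cohort @ annual_utility       # QALYs accrued this cycle
    cohort = cohort @ transition                 # redistribute the cohort for the next cycle

print(f"Expected cost per person: ${total_cost:,.0f}; expected QALYs: {total_qalys:.2f}")
```

A Markov microsimulation model follows individual patients through the same kind of structure one at a time, allowing transition probabilities to depend on individual characteristics and history, whereas the cohort model above tracks only the proportion of the cohort in each state.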
A model should not only provide results on the incremental cost-effectiveness of an intervention, but also address the question of which assumptions would result in a higher or lower ICER. Uncertainty in cost-effectiveness results due to uncertainty in model assumptions is unavoidable.8, 13, 18 Uncertainty can be broadly categorised into methodological, structural, and parameter uncertainty. Methodological uncertainty refers to the analytic choices made in building and running the model, such as decisions on the time horizon or cycle length. Structural uncertainty refers to uncertainty around the model structure, such as which health states are chosen to reflect the disease process.18 Parameter uncertainty refers to uncertainty around input data, including the probability of an event, costs, and health outcome estimates.8 For example, parameter uncertainty may be expressed as standard deviations or confidence intervals around a mean, or may arise when the literature provides multiple, differing estimates of a parameter's value.
A sensitivity analysis assesses the impact of parameter uncertainty. Deterministic sensitivity analysis includes one-way, two-way, and multi-way sensitivity analyses; these involve varying the input values of one or more parameters and observing the influence those changes have on the costs, outcomes and ICER.19 For example, in Box 5, the probability of death is 5% in people with CVD. A one-way sensitivity analysis can examine how the ICER changes when alternative probabilities ranging from 3% to 7% are used instead of 5%. Deterministic sensitivity analysis is useful for identifying which parameters have the largest effect on costs, health outcomes, and cost-effectiveness, and whether the parameter changes shift the result from cost-effective to not cost-effective.
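A one-way analysis of this kind can be sketched by re-running the illustrative cohort model above with each alternative value of the death probability. The comparison strategy (an intervention assumed to halve the annual risk of developing CVD at an extra $1000 per person per year) and all other inputs are invented purely for illustration; they are not the Box 5 model.

```python
# Sketch of a one-way deterministic sensitivity analysis: vary the annual
# probability of death with CVD from 3% to 7% and recompute the ICER each time.
# The model structure and every input are illustrative assumptions only.
import numpy as np

def run_cohort(p_cvd: float, p_death_cvd: float, extra_cost: float = 0.0):
    """Return (total cost, total QALYs) per person over 20 annual cycles."""
    transition = np.array([
        [0.98 - p_cvd, p_cvd, 0.02],           # from No CVD
        [0.00, 1 - p_death_cvd, p_death_cvd],  # from CVD
        [0.00, 0.00, 1.00],                    # Dead (absorbing)
    ])
    cost = np.array([500.0 + extra_cost, 5000.0 + extra_cost, 0.0])
    utility = np.array([0.85, 0.70, 0.0])
    cohort = np.array([1.0, 0.0, 0.0])
    total_cost = total_qalys = 0.0
    for _ in range(20):
        total_cost += cohort @ cost
        total_qalys += cohort @ utility
        cohort = cohort @ transition
    return total_cost, total_qalys

for p_death_cvd in (0.03, 0.04, 0.05, 0.06, 0.07):
    # Comparator: usual care. Intervention: assumed to halve the annual risk of
    # developing CVD (0.08 -> 0.04) at an extra $1000 per person per year.
    cost_uc, qaly_uc = run_cohort(p_cvd=0.08, p_death_cvd=p_death_cvd)
    cost_rx, qaly_rx = run_cohort(p_cvd=0.04, p_death_cvd=p_death_cvd, extra_cost=1000.0)
    icer = (cost_rx - cost_uc) / (qaly_rx - qaly_uc)
    print(f"p(death | CVD) = {p_death_cvd:.0%}: ICER = ${icer:,.0f} per QALY gained")
```

If varying a parameter across its plausible range pushes the ICER across the willingness to pay threshold, that parameter is a key driver of the decision.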
Probabilistic sensitivity analysis (PSA) involves varying one or more parameters by entering them as distributions, rather than as single fixed values.19 This is also known as second-order Monte Carlo simulation. If we conduct a PSA for the Markov cohort model in Box 5, the parameters (probabilities, costs, outcomes) are entered as distributions rather than fixed values. For example, instead of entering an annual probability of death of 5% for people with CVD, we now enter the probability of death as a distribution with a mean value of 0.05. The model is then run multiple times (eg, 1000 runs) and a new value for the transition probability is drawn from this distribution each time. Visually, PSA results can be plotted on a cost-effectiveness plane, with the incremental costs and QALYs (and hence the ICER) from each run represented as a single dot (Box 7). The diagonal dotted line in Box 7 represents an arbitrary willingness to pay threshold of $50 000 per QALY. Red dots represent runs above the willingness to pay threshold (considered not cost-effective), whereas green dots represent runs below the willingness to pay threshold (considered cost-effective). The larger green circle represents a 95% confidence interval around the estimated ICER of $75 742 per QALY.
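The mechanics of a PSA can be sketched in the same way, re-using the illustrative cohort model from the previous sketch: draw each uncertain parameter from a distribution, re-run the model, and record where each run falls relative to the willingness to pay threshold. The beta distribution, the number of runs and all model inputs below are assumptions for illustration; a full PSA would draw every uncertain parameter (costs, utilities and other probabilities) from its own distribution.

```python
# Sketch of a probabilistic sensitivity analysis (second-order Monte Carlo):
# the annual probability of death with CVD is drawn from a distribution with
# mean 0.05 rather than entered as a fixed value. All inputs are illustrative.
import numpy as np

rng = np.random.default_rng(seed=0)

def run_cohort(p_cvd: float, p_death_cvd: float, extra_cost: float = 0.0):
    """Return (total cost, total QALYs) per person over 20 annual cycles."""
    transition = np.array([
        [0.98 - p_cvd, p_cvd, 0.02],
        [0.00, 1 - p_death_cvd, p_death_cvd],
        [0.00, 0.00, 1.00],
    ])
    cost = np.array([500.0 + extra_cost, 5000.0 + extra_cost, 0.0])
    utility = np.array([0.85, 0.70, 0.0])
    cohort = np.array([1.0, 0.0, 0.0])
    total_cost = total_qalys = 0.0
    for _ in range(20):
        total_cost += cohort @ cost
        total_qalys += cohort @ utility
        cohort = cohort @ transition
    return total_cost, total_qalys

WTP = 50_000  # willingness to pay threshold, $ per QALY (as in Box 7)
runs_cost_effective = 0
for _ in range(1000):
    # Beta(5, 95) has mean 0.05 and keeps the sampled probability between 0 and 1.
    p_death_cvd = rng.beta(5, 95)
    cost_uc, qaly_uc = run_cohort(p_cvd=0.08, p_death_cvd=p_death_cvd)
    cost_rx, qaly_rx = run_cohort(p_cvd=0.04, p_death_cvd=p_death_cvd, extra_cost=1000.0)
    icer = (cost_rx - cost_uc) / (qaly_rx - qaly_uc)
    if icer < WTP:
        runs_cost_effective += 1

print(f"Runs below ${WTP:,} per QALY: {runs_cost_effective / 1000:.0%}")
```

Plotting the incremental costs and QALYs from each of these runs would produce a cloud of dots on a cost-effectiveness plane of the kind described above, and the proportion of runs below the threshold summarises the probability that the intervention is cost-effective at that willingness to pay.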
We apply our understanding of modelled economic evaluations to published models, using examples from two 2023 articles published in the Medical Journal of Australia. The Markov cohort model by Xiao and colleagues examines the cost-effectiveness of chronic hepatitis B screening strategies compared with usual care.20 The Markov microsimulation model by Venkataraman and colleagues examines the cost-effectiveness of several risk score and coronary artery calcium score-based strategies for initiating statin therapy.21 We have summarised key information from these two modelled evaluations in the table in the Supporting Information. This table seeks to highlight key concepts rather than apply a critical appraisal tool such as CHEERS (Consolidated Health Economic Evaluation Reporting Standards) or similar.22-24
In conclusion, understanding modelled economic evaluations is valuable for clinicians involved in health research or policy decisions. We encourage readers interested in health economics to access in-depth resources, which include worked examples on how to construct a model.4, 8, 15
Open access publishing facilitated by Charles Darwin University, as part of the Wiley - Charles Darwin University agreement via the Council of Australian University Librarians.