Abstract IA02: A brief overview of building and validating absolute risk models

R. Pfeiffer
{"title":"摘要IA02:建立和验证绝对风险模型的简要概述","authors":"R. Pfeiffer","doi":"10.1158/1538-7755.CARISK16-IA02","DOIUrl":null,"url":null,"abstract":"Statistical models that predict disease incidence, disease recurrence or mortality following disease onset have broad public health and clinical applications. Of great importance are models that predict absolute risk, namely the probability of a particular outcome, e.g. breast cancer, in the presence of competing causes of mortality. Although relative risks are useful for assessing the strength of risk factors, they are not nearly as useful as absolute risks for making clinical decisions or establishing policies for disease prevention. That is because such decisions or policies often weigh the favorable effects of an intervention on the disease of interest against the unfavorable effects that the intervention might have on other health outcomes. The common currency for such decisions is the (possibly weighted) absolute risk for each of the health outcomes in the presence and absence of intervention. First, I discuss various approaches to building absolute risk models from various data sources and illustrate them with absolute risk models for breast cancer and thyroid cancer. Before a risk prediction model can be recommended for clinical or public health applications, one needs to assess how good the predictions are. I will give an overview over various criteria for assessing the performance of a risk model. I assume that we have developed a risk model on training data and assess the performance of the model on independent test or validation data. This approach, termed external validation, provides a more rigorous assessment of the model than testing the model on the training data (internal validation); even though cross-validation techniques are available to reduce the over-optimism bias that can result from testing the model on the training data. I present general criteria for model assessment, such as calibration, predictive accuracy and classification accuracy, and discriminatory accuracy. Calibration measures how well the numbers of events predicted by a model agree with the observed events that arise in a cohort. Calibration is the most important general criterion, because if a model is not well calibrated, other criteria, such as discrimination, can be misleading. Discriminatory accuracy measures how well separated the distributions of risk are for cases and non-cases. Another approach is to tailor the criterion to the particular application. I will also present novel criteria for screening applications or high risk interventions. If losses can be specified in a well-defined decision problem, I will show how models can be assessed with respect to how much they reduce expected loss. Citation Format: Ruth Pfeiffer. A brief overview of building and validating absolute risk models. [abstract]. In: Proceedings of the AACR Special Conference: Improving Cancer Risk Prediction for Prevention and Early Detection; Nov 16-19, 2016; Orlando, FL. Philadelphia (PA): AACR; Cancer Epidemiol Biomarkers Prev 2017;26(5 Suppl):Abstract nr IA02.","PeriodicalId":9487,"journal":{"name":"Cancer Epidemiology and Prevention Biomarkers","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2017-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Abstract IA02: A brief overview of building and validating absolute risk models\",\"authors\":\"R. 
Pfeiffer\",\"doi\":\"10.1158/1538-7755.CARISK16-IA02\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Statistical models that predict disease incidence, disease recurrence or mortality following disease onset have broad public health and clinical applications. Of great importance are models that predict absolute risk, namely the probability of a particular outcome, e.g. breast cancer, in the presence of competing causes of mortality. Although relative risks are useful for assessing the strength of risk factors, they are not nearly as useful as absolute risks for making clinical decisions or establishing policies for disease prevention. That is because such decisions or policies often weigh the favorable effects of an intervention on the disease of interest against the unfavorable effects that the intervention might have on other health outcomes. The common currency for such decisions is the (possibly weighted) absolute risk for each of the health outcomes in the presence and absence of intervention. First, I discuss various approaches to building absolute risk models from various data sources and illustrate them with absolute risk models for breast cancer and thyroid cancer. Before a risk prediction model can be recommended for clinical or public health applications, one needs to assess how good the predictions are. I will give an overview over various criteria for assessing the performance of a risk model. I assume that we have developed a risk model on training data and assess the performance of the model on independent test or validation data. This approach, termed external validation, provides a more rigorous assessment of the model than testing the model on the training data (internal validation); even though cross-validation techniques are available to reduce the over-optimism bias that can result from testing the model on the training data. I present general criteria for model assessment, such as calibration, predictive accuracy and classification accuracy, and discriminatory accuracy. Calibration measures how well the numbers of events predicted by a model agree with the observed events that arise in a cohort. Calibration is the most important general criterion, because if a model is not well calibrated, other criteria, such as discrimination, can be misleading. Discriminatory accuracy measures how well separated the distributions of risk are for cases and non-cases. Another approach is to tailor the criterion to the particular application. I will also present novel criteria for screening applications or high risk interventions. If losses can be specified in a well-defined decision problem, I will show how models can be assessed with respect to how much they reduce expected loss. Citation Format: Ruth Pfeiffer. A brief overview of building and validating absolute risk models. [abstract]. In: Proceedings of the AACR Special Conference: Improving Cancer Risk Prediction for Prevention and Early Detection; Nov 16-19, 2016; Orlando, FL. 
Philadelphia (PA): AACR; Cancer Epidemiol Biomarkers Prev 2017;26(5 Suppl):Abstract nr IA02.\",\"PeriodicalId\":9487,\"journal\":{\"name\":\"Cancer Epidemiology and Prevention Biomarkers\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-05-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Cancer Epidemiology and Prevention Biomarkers\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1158/1538-7755.CARISK16-IA02\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Cancer Epidemiology and Prevention Biomarkers","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1158/1538-7755.CARISK16-IA02","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Statistical models that predict disease incidence, disease recurrence, or mortality following disease onset have broad public health and clinical applications. Of great importance are models that predict absolute risk, namely the probability of a particular outcome, e.g. breast cancer, in the presence of competing causes of mortality. Although relative risks are useful for assessing the strength of risk factors, they are not nearly as useful as absolute risks for making clinical decisions or establishing policies for disease prevention. That is because such decisions or policies often weigh the favorable effects of an intervention on the disease of interest against the unfavorable effects that the intervention might have on other health outcomes. The common currency for such decisions is the (possibly weighted) absolute risk for each of the health outcomes in the presence and absence of intervention.

First, I discuss various approaches to building absolute risk models from various data sources and illustrate them with absolute risk models for breast cancer and thyroid cancer. Before a risk prediction model can be recommended for clinical or public health applications, one needs to assess how good the predictions are. I will give an overview of various criteria for assessing the performance of a risk model. I assume that we have developed a risk model on training data and assess the performance of the model on independent test or validation data. This approach, termed external validation, provides a more rigorous assessment of the model than testing the model on the training data (internal validation), even though cross-validation techniques are available to reduce the over-optimism bias that can result from testing the model on the training data.

I present general criteria for model assessment, such as calibration, predictive and classification accuracy, and discriminatory accuracy. Calibration measures how well the numbers of events predicted by a model agree with the observed events that arise in a cohort. Calibration is the most important general criterion, because if a model is not well calibrated, other criteria, such as discrimination, can be misleading. Discriminatory accuracy measures how well separated the distributions of risk are for cases and non-cases. Another approach is to tailor the criterion to the particular application. I will also present novel criteria for screening applications or high-risk interventions. If losses can be specified in a well-defined decision problem, I will show how models can be assessed with respect to how much they reduce expected loss.

Citation Format: Ruth Pfeiffer. A brief overview of building and validating absolute risk models. [abstract]. In: Proceedings of the AACR Special Conference: Improving Cancer Risk Prediction for Prevention and Early Detection; Nov 16-19, 2016; Orlando, FL. Philadelphia (PA): AACR; Cancer Epidemiol Biomarkers Prev 2017;26(5 Suppl):Abstract nr IA02.
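As a brief illustration of the validation criteria described in the abstract, the following minimal sketch (Python, using only NumPy) evaluates a set of predicted absolute risks on an independent validation cohort: calibration is checked by comparing expected with observed event counts, overall and within deciles of predicted risk, and discriminatory accuracy is summarized as the area under the ROC curve. The predicted risks, simulated outcomes, and variable names are illustrative assumptions and are not taken from the abstract.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical validation cohort: 5-year absolute risks predicted by a
# previously fitted model, and simulated observed outcomes (1 = event).
# Both are illustrative; in practice they come from independent data.
predicted_risk = rng.uniform(0.01, 0.20, size=5000)
observed_event = rng.binomial(1, predicted_risk)

# Calibration: expected vs. observed numbers of events.
# An E/O ratio near 1 indicates good overall calibration.
expected = predicted_risk.sum()
observed = observed_event.sum()
print(f"E/O ratio: {expected / observed:.3f}")

# Calibration within deciles of predicted risk.
cutpoints = np.quantile(predicted_risk, np.linspace(0.1, 0.9, 9))
decile = np.digitize(predicted_risk, cutpoints)
for k in range(10):
    mask = decile == k
    print(f"decile {k + 1}: expected = {predicted_risk[mask].sum():6.1f}, "
          f"observed = {observed_event[mask].sum():4d}")

# Discriminatory accuracy: area under the ROC curve, i.e. the probability
# that a randomly chosen case has a higher predicted risk than a randomly
# chosen non-case (computed via the Mann-Whitney rank-sum statistic).
def auc(risk, outcome):
    ranks = np.empty(len(risk))
    ranks[np.argsort(risk)] = np.arange(1, len(risk) + 1)
    n_case = outcome.sum()
    n_control = len(outcome) - n_case
    u = ranks[outcome == 1].sum() - n_case * (n_case + 1) / 2
    return u / (n_case * n_control)

print(f"AUC: {auc(predicted_risk, observed_event):.3f}")

In practice, the predicted absolute risks would be produced by a previously fitted model that accounts for competing causes of mortality and would be applied to an independent cohort; the simulated data above serve only to make the sketch self-contained and roughly calibrated by construction.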