When accurate prediction models yield harmful self-fulfilling prophecies.

IF 6.7 · Q1 · Computer Science, Artificial Intelligence
Wouter A C van Amsterdam, Nan van Geloven, Jesse H Krijthe, Rajesh Ranganath, Giovanni Cinà
{"title":"当准确的预测模型产生有害的自我实现预言时。","authors":"Wouter A C van Amsterdam, Nan van Geloven, Jesse H Krijthe, Rajesh Ranganath, Giovanni Cinà","doi":"10.1016/j.patter.2025.101229","DOIUrl":null,"url":null,"abstract":"<p><p>Prediction models are popular in medical research and practice. Many expect that by predicting patient-specific outcomes, these models have the potential to inform treatment decisions, and they are frequently lauded as instruments for personalized, data-driven healthcare. We show, however, that using prediction models for decision-making can lead to harm, even when the predictions exhibit good discrimination after deployment. These models are harmful self-fulfilling prophecies: their deployment harms a group of patients, but the worse outcome of these patients does not diminish the discrimination of the model. Our main result is a formal characterization of a set of such prediction models. Next, we show that models that are well calibrated before and after deployment are useless for decision-making, as they make no change in the data distribution. These results call for a reconsideration of standard practices for validation and deployment of prediction models that are used in medical decisions.</p>","PeriodicalId":36242,"journal":{"name":"Patterns","volume":"6 4","pages":"101229"},"PeriodicalIF":6.7000,"publicationDate":"2025-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12010445/pdf/","citationCount":"0","resultStr":"{\"title\":\"When accurate prediction models yield harmful self-fulfilling prophecies.\",\"authors\":\"Wouter A C van Amsterdam, Nan van Geloven, Jesse H Krijthe, Rajesh Ranganath, Giovanni Cinà\",\"doi\":\"10.1016/j.patter.2025.101229\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Prediction models are popular in medical research and practice. Many expect that by predicting patient-specific outcomes, these models have the potential to inform treatment decisions, and they are frequently lauded as instruments for personalized, data-driven healthcare. We show, however, that using prediction models for decision-making can lead to harm, even when the predictions exhibit good discrimination after deployment. These models are harmful self-fulfilling prophecies: their deployment harms a group of patients, but the worse outcome of these patients does not diminish the discrimination of the model. Our main result is a formal characterization of a set of such prediction models. Next, we show that models that are well calibrated before and after deployment are useless for decision-making, as they make no change in the data distribution. 
These results call for a reconsideration of standard practices for validation and deployment of prediction models that are used in medical decisions.</p>\",\"PeriodicalId\":36242,\"journal\":{\"name\":\"Patterns\",\"volume\":\"6 4\",\"pages\":\"101229\"},\"PeriodicalIF\":6.7000,\"publicationDate\":\"2025-04-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12010445/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Patterns\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1016/j.patter.2025.101229\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Patterns","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1016/j.patter.2025.101229","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Prediction models are popular in medical research and practice. Many expect that by predicting patient-specific outcomes, these models have the potential to inform treatment decisions, and they are frequently lauded as instruments for personalized, data-driven healthcare. We show, however, that using prediction models for decision-making can lead to harm, even when the predictions exhibit good discrimination after deployment. These models are harmful self-fulfilling prophecies: their deployment harms a group of patients, but the worse outcome of these patients does not diminish the discrimination of the model. Our main result is a formal characterization of a set of such prediction models. Next, we show that models that are well calibrated before and after deployment are useless for decision-making, as they make no change in the data distribution. These results call for a reconsideration of standard practices for validation and deployment of prediction models that are used in medical decisions.
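The core mechanism is easy to reproduce in a toy simulation. The sketch below is not the paper's formal characterization; the logistic outcome model, the effect sizes, and the withhold-above-0.5 policy are all illustrative assumptions. It shows a treatment-limiting policy driven by a risk model: the predicted high-risk group is harmed by deployment, yet the model's discrimination (AUC) survives deployment intact.

```python
# Minimal simulation sketch of a harmful self-fulfilling prophecy.
# All numbers and functional forms are illustrative assumptions, not
# taken from the paper: severity x is uniform, death follows a logistic
# model, and treatment lowers every patient's log-odds of death.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.uniform(0.0, 1.0, n)  # observed severity score per patient

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def p_death(x, treated):
    # Treatment shifts the log-odds of death down by 1.5 for everyone.
    return sigmoid(4.0 * x - 2.0 - 1.5 * treated)

# Before deployment every patient is treated. The "prediction model"
# here is the true risk under treatment, so it starts out perfectly
# calibrated on pre-deployment data.
risk = p_death(x, treated=True)
y_pre = rng.random(n) < risk

# Deployment policy: withhold (de-escalate) treatment when predicted
# risk is 0.5 or more. Withholding raises those patients' true risk.
treated_post = risk < 0.5
y_post = rng.random(n) < p_death(x, treated_post)

def auc(score, y):
    # Rank-based AUC: probability a death outranks a survival in score.
    n_all = len(score)
    ranks = np.empty(n_all)
    ranks[np.argsort(score)] = np.arange(1, n_all + 1)
    n_pos, n_neg = y.sum(), (~y).sum()
    return (ranks[y].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

high = ~treated_post  # the predicted high-risk group the policy targets
print(f"AUC  pre-deployment:  {auc(risk, y_pre):.3f}")
print(f"AUC  post-deployment: {auc(risk, y_post):.3f}")  # stays high
print(f"mortality in high-risk group, pre:  {y_pre[high].mean():.3f}")
print(f"mortality in high-risk group, post: {y_post[high].mean():.3f}")
print(f"calibration post: predicted {risk[high].mean():.3f} "
      f"vs observed {y_post[high].mean():.3f}")  # now under-predicts
```

In this toy setup, mortality in the predicted high-risk group rises sharply after deployment while the AUC does not drop, since the policy makes the pessimistic predictions come true. The model also becomes miscalibrated in the harmed group, which connects to the paper's second result: a model that remains well calibrated before and after deployment left the outcome distribution unchanged, so it cannot have informed any decision.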

Source journal: Patterns (Decision Sciences, all)
CiteScore: 10.60
Self-citation rate: 4.60%
Annual articles: 153
Review time: 19 weeks