Can an accurate model be bad?

Melissa D McCradden, Mjaye L Mazwi, Lauren Oakden-Rayner

Patterns, 6(4), 101205. Published 2025-04-11. DOI: 10.1016/j.patter.2025.101205

Abstract: Outcome-prediction models can harm patients even when they have good accuracy, as shown in a recent Patterns paper by Van Amsterdam et al. In this preview, we consider the ethical and empirical implications of this work by highlighting the impact of reifying self-fulfilling prophecies and propose a reorientation toward actions over accuracy as a priority for AI integration.