A bias evaluation checklist for predictive models and its pilot application for 30-day hospital readmission models

H. E. Echo Wang, M. Landers, R. Adams, Adarsh Subbaswamy, Hadi Kharrazi, D. Gaskin, S. Saria
{"title":"A bias evaluation checklist for predictive models and its pilot application for 30-day hospital readmission models","authors":"H. E. Echo Wang, M. Landers, R. Adams, Adarsh Subbaswamy, Hadi Kharrazi, D. Gaskin, S. Saria","doi":"10.1093/jamia/ocac065","DOIUrl":null,"url":null,"abstract":"Abstract Objective Health care providers increasingly rely upon predictive algorithms when making important treatment decisions, however, evidence indicates that these tools can lead to inequitable outcomes across racial and socio-economic groups. In this study, we introduce a bias evaluation checklist that allows model developers and health care providers a means to systematically appraise a model’s potential to introduce bias. Materials and Methods Our methods include developing a bias evaluation checklist, a scoping literature review to identify 30-day hospital readmission prediction models, and assessing the selected models using the checklist. Results We selected 4 models for evaluation: LACE, HOSPITAL, Johns Hopkins ACG, and HATRIX. Our assessment identified critical ways in which these algorithms can perpetuate health care inequalities. We found that LACE and HOSPITAL have the greatest potential for introducing bias, Johns Hopkins ACG has the most areas of uncertainty, and HATRIX has the fewest causes for concern. Discussion Our approach gives model developers and health care providers a practical and systematic method for evaluating bias in predictive models. Traditional bias identification methods do not elucidate sources of bias and are thus insufficient for mitigation efforts. With our checklist, bias can be addressed and eliminated before a model is fully developed or deployed. Conclusion The potential for algorithms to perpetuate biased outcomes is not isolated to readmission prediction models; rather, we believe our results have implications for predictive models across health care. We offer a systematic method for evaluating potential bias with sufficient flexibility to be utilized across models and applications.","PeriodicalId":236137,"journal":{"name":"Journal of the American Medical Informatics Association : JAMIA","volume":"23 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"18","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of the American Medical Informatics Association : JAMIA","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1093/jamia/ocac065","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 18

Abstract

Objective: Health care providers increasingly rely upon predictive algorithms when making important treatment decisions; however, evidence indicates that these tools can lead to inequitable outcomes across racial and socioeconomic groups. In this study, we introduce a bias evaluation checklist that offers model developers and health care providers a means to systematically appraise a model's potential to introduce bias.

Materials and Methods: Our methods include developing a bias evaluation checklist, conducting a scoping literature review to identify 30-day hospital readmission prediction models, and assessing the selected models using the checklist.

Results: We selected 4 models for evaluation: LACE, HOSPITAL, Johns Hopkins ACG, and HATRIX. Our assessment identified critical ways in which these algorithms can perpetuate health care inequalities. We found that LACE and HOSPITAL have the greatest potential for introducing bias, Johns Hopkins ACG has the most areas of uncertainty, and HATRIX has the fewest causes for concern.

Discussion: Our approach gives model developers and health care providers a practical and systematic method for evaluating bias in predictive models. Traditional bias identification methods do not elucidate sources of bias and are thus insufficient for mitigation efforts. With our checklist, bias can be addressed and eliminated before a model is fully developed or deployed.

Conclusion: The potential for algorithms to perpetuate biased outcomes is not isolated to readmission prediction models; rather, we believe our results have implications for predictive models across health care. We offer a systematic method for evaluating potential bias with sufficient flexibility to be utilized across models and applications.
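The abstract does not reproduce the checklist items themselves, so the sketch below is purely illustrative. It assumes hypothetical checklist questions and a three-level rating scale ("concern", "uncertain", "no concern"), and shows how per-item ratings for the four evaluated models might be tallied into the kind of per-model summary the Results section describes. None of the item wording, ratings, or tallies come from the paper.

```python
# Illustrative sketch only: hypothetical checklist items and ratings,
# not the authors' actual bias evaluation checklist or their findings.
from collections import Counter

# Hypothetical checklist questions a reviewer might answer for each model.
CHECKLIST_ITEMS = [
    "Is the training population representative of the deployment population?",
    "Could the outcome label (30-day readmission) encode unequal access to care?",
    "Are proxies for race or socioeconomic status used as predictors?",
    "Was performance reported separately for demographic subgroups?",
]

RATINGS = ("concern", "uncertain", "no concern")

# Hypothetical ratings for the four models named in the abstract,
# one rating per checklist item above.
assessments = {
    "LACE":              ["concern", "concern", "uncertain", "concern"],
    "HOSPITAL":          ["concern", "uncertain", "concern", "concern"],
    "Johns Hopkins ACG": ["uncertain", "uncertain", "uncertain", "no concern"],
    "HATRIX":            ["no concern", "uncertain", "no concern", "no concern"],
}


def summarize(assessments: dict[str, list[str]]) -> dict[str, Counter]:
    """Tally how many checklist items fall into each rating for each model."""
    return {model: Counter(ratings) for model, ratings in assessments.items()}


if __name__ == "__main__":
    for model, tally in summarize(assessments).items():
        counts = ", ".join(f"{tally.get(r, 0)} {r}" for r in RATINGS)
        print(f"{model}: {counts}")
```

A tally like this only summarizes reviewer judgments; in the paper, the value of the checklist lies in the item-by-item discussion of where and how each model could introduce bias, not in a single aggregate score.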