Evaluating accuracy and fairness of clinical decision support algorithms when health care resources are limited

Impact factor: 4.0 · Medicine (CAS Tier 2) · JCR Q2, Computer Science, Interdisciplinary Applications
Esther L. Meerwijk, Duncan C. McElfresh, Susana Martins, Suzanne R. Tamang
{"title":"在医疗资源有限的情况下,评估临床决策支持算法的准确性和公平性。","authors":"Esther L. Meerwijk ,&nbsp;Duncan C. McElfresh ,&nbsp;Susana Martins ,&nbsp;Suzanne R. Tamang","doi":"10.1016/j.jbi.2024.104664","DOIUrl":null,"url":null,"abstract":"<div><h3>Objective</h3><p>Guidance on how to evaluate accuracy and algorithmic fairness across subgroups is missing for clinical models that flag patients for an intervention but when health care resources to administer that intervention are limited. We aimed to propose a framework of metrics that would fit this specific use case.</p></div><div><h3>Methods</h3><p>We evaluated the following metrics and applied them to a Veterans Health Administration clinical model that flags patients for intervention who are at risk of overdose or a suicidal event among outpatients who were prescribed opioids (N = 405,817): Receiver – Operating Characteristic and area under the curve, precision – recall curve, calibration – reliability curve, false positive rate, false negative rate, and false omission rate. In addition, we developed a new approach to visualize false positives and false negatives that we named ‘per true positive bars.’ We demonstrate the utility of these metrics to our use case for three cohorts of patients at the highest risk (top 0.5 %, 1.0 %, and 5.0 %) by evaluating algorithmic fairness across the following age groups: &lt;=30, 31–50, 51–65, and &gt;65 years old.</p></div><div><h3>Results</h3><p>Metrics that allowed us to assess group differences more clearly were the false positive rate, false negative rate, false omission rate, and the new ‘per true positive bars’. Metrics with limited utility to our use case were the Receiver – Operating Characteristic and area under the curve, the calibration – reliability curve, and the precision – recall curve.</p></div><div><h3>Conclusion</h3><p>There is no “one size fits all” approach to model performance monitoring and bias analysis. Our work informs future researchers and clinicians who seek to evaluate accuracy and fairness of predictive models that identify patients to intervene on in the context of limited health care resources. In terms of ease of interpretation and utility for our use case, the new ‘per true positive bars’ may be the most intuitive to a range of stakeholders and facilitates choosing a threshold that allows weighing false positives against false negatives, which is especially important when predicting severe adverse events.</p></div>","PeriodicalId":15263,"journal":{"name":"Journal of Biomedical Informatics","volume":"156 ","pages":"Article 104664"},"PeriodicalIF":4.0000,"publicationDate":"2024-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Evaluating accuracy and fairness of clinical decision support algorithms when health care resources are limited\",\"authors\":\"Esther L. Meerwijk ,&nbsp;Duncan C. McElfresh ,&nbsp;Susana Martins ,&nbsp;Suzanne R. Tamang\",\"doi\":\"10.1016/j.jbi.2024.104664\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><h3>Objective</h3><p>Guidance on how to evaluate accuracy and algorithmic fairness across subgroups is missing for clinical models that flag patients for an intervention but when health care resources to administer that intervention are limited. 
We aimed to propose a framework of metrics that would fit this specific use case.</p></div><div><h3>Methods</h3><p>We evaluated the following metrics and applied them to a Veterans Health Administration clinical model that flags patients for intervention who are at risk of overdose or a suicidal event among outpatients who were prescribed opioids (N = 405,817): Receiver – Operating Characteristic and area under the curve, precision – recall curve, calibration – reliability curve, false positive rate, false negative rate, and false omission rate. In addition, we developed a new approach to visualize false positives and false negatives that we named ‘per true positive bars.’ We demonstrate the utility of these metrics to our use case for three cohorts of patients at the highest risk (top 0.5 %, 1.0 %, and 5.0 %) by evaluating algorithmic fairness across the following age groups: &lt;=30, 31–50, 51–65, and &gt;65 years old.</p></div><div><h3>Results</h3><p>Metrics that allowed us to assess group differences more clearly were the false positive rate, false negative rate, false omission rate, and the new ‘per true positive bars’. Metrics with limited utility to our use case were the Receiver – Operating Characteristic and area under the curve, the calibration – reliability curve, and the precision – recall curve.</p></div><div><h3>Conclusion</h3><p>There is no “one size fits all” approach to model performance monitoring and bias analysis. Our work informs future researchers and clinicians who seek to evaluate accuracy and fairness of predictive models that identify patients to intervene on in the context of limited health care resources. In terms of ease of interpretation and utility for our use case, the new ‘per true positive bars’ may be the most intuitive to a range of stakeholders and facilitates choosing a threshold that allows weighing false positives against false negatives, which is especially important when predicting severe adverse events.</p></div>\",\"PeriodicalId\":15263,\"journal\":{\"name\":\"Journal of Biomedical Informatics\",\"volume\":\"156 \",\"pages\":\"Article 104664\"},\"PeriodicalIF\":4.0000,\"publicationDate\":\"2024-06-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Biomedical Informatics\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1532046424000820\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Biomedical Informatics","FirstCategoryId":"3","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1532046424000820","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Citations: 0

Abstract


Objective

Guidance on how to evaluate accuracy and algorithmic fairness across subgroups is missing for clinical models that flag patients for an intervention when the health care resources to administer that intervention are limited. We aimed to propose a framework of metrics suited to this specific use case.

Methods

We evaluated the following metrics and applied them to a Veterans Health Administration clinical model that flags outpatients prescribed opioids (N = 405,817) who are at risk of overdose or a suicidal event for intervention: the receiver operating characteristic (ROC) curve and area under the curve, the precision–recall curve, the calibration (reliability) curve, the false positive rate, the false negative rate, and the false omission rate. In addition, we developed a new approach to visualizing false positives and false negatives that we named ‘per true positive bars.’ We demonstrate the utility of these metrics for our use case in three cohorts of patients at the highest risk (top 0.5 %, 1.0 %, and 5.0 %) by evaluating algorithmic fairness across the following age groups: ≤30, 31–50, 51–65, and >65 years old.
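The subgroup error metrics described above can be tabulated with a few lines of code once a flagging threshold is fixed. The sketch below is not the authors' implementation; it assumes a hypothetical DataFrame with columns `risk_score`, `event` (observed overdose or suicidal event), and `age_group`, flags the top fraction of scores overall, and reports the false positive, false negative, and false omission rates per age group.

```python
import numpy as np
import pandas as pd

def subgroup_error_rates(df: pd.DataFrame, top_pct: float) -> pd.DataFrame:
    """Flag the top `top_pct` fraction of patients by risk score (population-wide cutoff)
    and report false positive, false negative, and false omission rates per age group.
    Column names are hypothetical, not the authors' schema."""
    cutoff = df["risk_score"].quantile(1.0 - top_pct)
    rows = []
    for group, sub in df.groupby("age_group"):
        flagged = sub["risk_score"] >= cutoff      # predicted positive: flagged for intervention
        event = sub["event"].astype(bool)          # observed adverse event
        tp = int((flagged & event).sum())
        fp = int((flagged & ~event).sum())
        fn = int((~flagged & event).sum())
        tn = int((~flagged & ~event).sum())
        rows.append({
            "age_group": group,
            "false_positive_rate": fp / (fp + tn) if (fp + tn) else np.nan,
            "false_negative_rate": fn / (fn + tp) if (fn + tp) else np.nan,
            "false_omission_rate": fn / (fn + tn) if (fn + tn) else np.nan,
        })
    return pd.DataFrame(rows)

# Example usage for the three highest-risk cohorts considered in the paper:
# for pct in (0.005, 0.01, 0.05):
#     print(subgroup_error_rates(patients, pct))
```

Comparing the resulting per-group rates at the top 0.5 %, 1.0 %, and 5.0 % cutoffs gives a direct view of how errors are distributed across age groups at each resource level.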

Results

Metrics that allowed us to assess group differences more clearly were the false positive rate, false negative rate, false omission rate, and the new ‘per true positive bars’. Metrics with limited utility for our use case were the receiver operating characteristic curve and area under the curve, the calibration (reliability) curve, and the precision–recall curve.

Conclusion

There is no “one size fits all” approach to model performance monitoring and bias analysis. Our work informs future researchers and clinicians who seek to evaluate the accuracy and fairness of predictive models that identify patients for intervention when health care resources are limited. In terms of ease of interpretation and utility for our use case, the new ‘per true positive bars’ may be the most intuitive for a range of stakeholders and facilitates choosing a threshold that weighs false positives against false negatives, which is especially important when predicting severe adverse events.
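As a rough illustration of the trade-off described in the conclusion, the sketch below counts how many false positives and false negatives the model incurs per true positive at each candidate flagging threshold. This is only one plausible reading of the ‘per true positive bars’ idea, not the authors' definition, and the column names (`risk_score`, `event`) are hypothetical.

```python
import pandas as pd

def per_true_positive_table(df: pd.DataFrame, top_pcts=(0.005, 0.01, 0.05)) -> pd.DataFrame:
    """For each candidate flagging threshold (top k% of risk scores), report how many
    false positives and false negatives are incurred per true positive."""
    rows = []
    for pct in top_pcts:
        cutoff = df["risk_score"].quantile(1.0 - pct)
        flagged = df["risk_score"] >= cutoff
        event = df["event"].astype(bool)
        tp = int((flagged & event).sum())
        fp = int((flagged & ~event).sum())
        fn = int((~flagged & event).sum())
        rows.append({
            "top_pct": pct,
            "fp_per_tp": fp / tp if tp else float("nan"),
            "fn_per_tp": fn / tp if tp else float("nan"),
        })
    return pd.DataFrame(rows)
```

Drawing the two ratio columns as paired bars per threshold (for example via `DataFrame.plot.bar`) gives a simple visual aid for weighing false positives against false negatives when choosing a cutoff.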

Source journal
Journal of Biomedical Informatics (Medicine / Computer Science, Interdisciplinary Applications)
CiteScore: 8.90
Self-citation rate: 6.70%
Articles per year: 243
Review time: 32 days
About the journal: The Journal of Biomedical Informatics reflects a commitment to high-quality original research papers, reviews, and commentaries in the area of biomedical informatics methodology. Although we publish articles motivated by applications in the biomedical sciences (for example, clinical medicine, health care, population health, and translational bioinformatics), the journal emphasizes reports of new methodologies and techniques that have general applicability and that form the basis for the evolving science of biomedical informatics. Articles on medical devices; evaluations of implemented systems (including clinical trials of information technologies); or papers that provide insight into a biological process, a specific disease, or treatment options would generally be more suitable for publication in other venues. Papers on applications of signal processing and image analysis are often more suitable for biomedical engineering journals or other informatics journals, although we do publish papers that emphasize the information management and knowledge representation/modeling issues that arise in the storage and use of biological signals and images. System descriptions are welcome if they illustrate and substantiate the underlying methodology that is the principal focus of the report and an effort is made to address the generalizability and/or range of application of that methodology. Note also that, given the international nature of JBI, papers that deal with specific languages other than English, or with country-specific health systems or approaches, are acceptable for JBI only if they offer generalizable lessons that are relevant to the broad JBI readership, regardless of their country, language, culture, or health system.