FairICP: identifying biases and increasing transparency at the point of care in post-implementation clinical decision support using inductive conformal prediction.

IF 4.6 | CAS Tier 2 (Medicine) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS
Xiaotan Sun, Makiya Nakashima, Christopher Nguyen, Po-Hao Chen, W H Wilson Tang, Deborah Kwon, David Chen
{"title":"FairICP:使用归纳适形预测在实施后临床决策支持的护理点识别偏差并增加透明度。","authors":"Xiaotan Sun, Makiya Nakashima, Christopher Nguyen, Po-Hao Chen, W H Wilson Tang, Deborah Kwon, David Chen","doi":"10.1093/jamia/ocaf095","DOIUrl":null,"url":null,"abstract":"<p><strong>Objectives: </strong>Fairness concerns stemming from known and unknown biases in healthcare practices have raised questions about the trustworthiness of Artificial Intelligence (AI)-driven Clinical Decision Support Systems (CDSS). Studies have shown unforeseen performance disparities in subpopulations when applied to clinical settings different from training. Existing unfairness mitigation strategies often struggle with scalability and accessibility, while their pursuit of group-level prediction performance parity does not effectively translate into fairness at the point of care. This study introduces FairICP, a flexible and cost-effective post-implementation framework based on Inductive Conformal Prediction (ICP), to provide users with actionable knowledge of model uncertainty due to subpopulation level biases at the point of care.</p><p><strong>Materials and methods: </strong>FairICP applies ICP to identify the model's scope of competence through group specific calibration, ensuring equitable prediction reliability by filtering predictions that fall within the trusted competence boundaries. We evaluated FairICP against four benchmarks on three medical imaging modalities: (1) Cardiac Magnetic Resonance Imaging (MRI), (2) Chest X-ray and (3) Dermatology Imaging, acquired from both private and large public datasets. Frameworks are assessed on prediction performance enhancement and unfairness mitigation capabilities.</p><p><strong>Results: </strong>Compared to the baseline, FairICP improved prediction accuracy by 7.2% and reduced the accuracy gap between the privileged and unprivileged subpopulations by 2.2% on average across all three datasets.</p><p><strong>Discussion and conclusion: </strong>Our work provides a robust solution to promote trust and transparency in AI-CDSS, fostering equality and equity in healthcare for diverse patient populations. Such post-process methods are critical to enabling a robust framework for AI-CDSS implementation and monitoring for healthcare settings.</p>","PeriodicalId":50016,"journal":{"name":"Journal of the American Medical Informatics Association","volume":" ","pages":"1299-1309"},"PeriodicalIF":4.6000,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12277691/pdf/","citationCount":"0","resultStr":"{\"title\":\"FairICP: identifying biases and increasing transparency at the point of care in post-implementation clinical decision support using inductive conformal prediction.\",\"authors\":\"Xiaotan Sun, Makiya Nakashima, Christopher Nguyen, Po-Hao Chen, W H Wilson Tang, Deborah Kwon, David Chen\",\"doi\":\"10.1093/jamia/ocaf095\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Objectives: </strong>Fairness concerns stemming from known and unknown biases in healthcare practices have raised questions about the trustworthiness of Artificial Intelligence (AI)-driven Clinical Decision Support Systems (CDSS). Studies have shown unforeseen performance disparities in subpopulations when applied to clinical settings different from training. 
Existing unfairness mitigation strategies often struggle with scalability and accessibility, while their pursuit of group-level prediction performance parity does not effectively translate into fairness at the point of care. This study introduces FairICP, a flexible and cost-effective post-implementation framework based on Inductive Conformal Prediction (ICP), to provide users with actionable knowledge of model uncertainty due to subpopulation level biases at the point of care.</p><p><strong>Materials and methods: </strong>FairICP applies ICP to identify the model's scope of competence through group specific calibration, ensuring equitable prediction reliability by filtering predictions that fall within the trusted competence boundaries. We evaluated FairICP against four benchmarks on three medical imaging modalities: (1) Cardiac Magnetic Resonance Imaging (MRI), (2) Chest X-ray and (3) Dermatology Imaging, acquired from both private and large public datasets. Frameworks are assessed on prediction performance enhancement and unfairness mitigation capabilities.</p><p><strong>Results: </strong>Compared to the baseline, FairICP improved prediction accuracy by 7.2% and reduced the accuracy gap between the privileged and unprivileged subpopulations by 2.2% on average across all three datasets.</p><p><strong>Discussion and conclusion: </strong>Our work provides a robust solution to promote trust and transparency in AI-CDSS, fostering equality and equity in healthcare for diverse patient populations. Such post-process methods are critical to enabling a robust framework for AI-CDSS implementation and monitoring for healthcare settings.</p>\",\"PeriodicalId\":50016,\"journal\":{\"name\":\"Journal of the American Medical Informatics Association\",\"volume\":\" \",\"pages\":\"1299-1309\"},\"PeriodicalIF\":4.6000,\"publicationDate\":\"2025-08-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12277691/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of the American Medical Informatics Association\",\"FirstCategoryId\":\"91\",\"ListUrlMain\":\"https://doi.org/10.1093/jamia/ocaf095\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of the American Medical Informatics Association","FirstCategoryId":"91","ListUrlMain":"https://doi.org/10.1093/jamia/ocaf095","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract


Objectives: Fairness concerns stemming from known and unknown biases in healthcare practices have raised questions about the trustworthiness of Artificial Intelligence (AI)-driven Clinical Decision Support Systems (CDSS). Studies have shown unforeseen performance disparities across subpopulations when these systems are applied in clinical settings that differ from those used for training. Existing unfairness mitigation strategies often struggle with scalability and accessibility, and their pursuit of group-level parity in prediction performance does not effectively translate into fairness at the point of care. This study introduces FairICP, a flexible and cost-effective post-implementation framework based on Inductive Conformal Prediction (ICP), which provides users with actionable knowledge of model uncertainty arising from subpopulation-level biases at the point of care.

Materials and methods: FairICP applies ICP to identify the model's scope of competence through group-specific calibration, ensuring equitable prediction reliability by retaining only predictions that fall within the trusted competence boundaries. We evaluated FairICP against four benchmarks on three medical imaging modalities: (1) Cardiac Magnetic Resonance Imaging (MRI), (2) Chest X-ray, and (3) Dermatology Imaging, acquired from both private and large public datasets. Frameworks were assessed on their ability to enhance prediction performance and mitigate unfairness.
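
To make the calibration-and-filtering idea concrete, the sketch below shows a generic group-conditional (Mondrian-style) inductive conformal predictor in Python. The nonconformity score (one minus the predicted probability of the true class), the per-group quantile, and the "singleton prediction set" competence rule are illustrative assumptions; the paper's exact FairICP nonconformity measure and filtering criterion may differ.

```python
# Generic sketch of group-conditional inductive conformal prediction (ICP).
# Assumes a trained classifier that outputs class probabilities; the score
# and competence rule below are illustrative, not the authors' exact FairICP
# definitions.
import numpy as np


def fit_group_thresholds(cal_probs, cal_labels, cal_groups, alpha=0.1):
    """Per-group (1 - alpha) conformal quantiles of the score 1 - p(true class)."""
    thresholds = {}
    for g in np.unique(cal_groups):
        idx = np.where(cal_groups == g)[0]
        scores = 1.0 - cal_probs[idx, cal_labels[idx]]
        n = len(scores)
        # Finite-sample-corrected empirical quantile used in split conformal prediction.
        k = min(n, int(np.ceil((n + 1) * (1 - alpha))))
        thresholds[g] = np.sort(scores)[k - 1]
    return thresholds


def predict_with_competence(test_probs, test_groups, thresholds):
    """Return conformal prediction sets and a flag marking 'trusted' cases."""
    pred_sets, trusted = [], []
    for probs, g in zip(test_probs, test_groups):
        # Classes whose nonconformity score stays below the group's threshold.
        pred_set = np.where(1.0 - probs <= thresholds[g])[0]
        pred_sets.append(pred_set)
        # Treat an unambiguous (singleton) set as within the model's competence.
        trusted.append(len(pred_set) == 1)
    return pred_sets, np.array(trusted)
```

In a deployment of this kind, cases flagged as outside the trusted competence region could be surfaced to the clinician with an explicit uncertainty warning rather than displayed as ordinary predictions.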

Results: Compared to the baseline, FairICP improved prediction accuracy by 7.2% and reduced the accuracy gap between the privileged and unprivileged subpopulations by 2.2% on average across all three datasets.
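
For readers who want to reproduce this style of evaluation, the helper below sketches how overall accuracy and the privileged/unprivileged accuracy gap could be computed on the retained (trusted) predictions; the grouping variable, privileged-group label, and trusted mask are assumed inputs for illustration, not quantities defined in the paper.

```python
import numpy as np


def accuracy_and_gap(y_true, y_pred, groups, privileged_group, trusted):
    """Accuracy on trusted predictions and the absolute accuracy gap between
    the privileged and unprivileged subgroups (illustrative metric definitions)."""
    priv = trusted & (groups == privileged_group)
    unpriv = trusted & (groups != privileged_group)
    acc = np.mean(y_true[trusted] == y_pred[trusted])
    gap = abs(np.mean(y_true[priv] == y_pred[priv]) -
              np.mean(y_true[unpriv] == y_pred[unpriv]))
    return acc, gap
```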

Discussion and conclusion: Our work provides a robust solution for promoting trust and transparency in AI-CDSS, fostering equality and equity in healthcare for diverse patient populations. Such post-processing methods are critical to enabling a robust framework for AI-CDSS implementation and monitoring in healthcare settings.

Source journal
Journal of the American Medical Informatics Association (Medicine; Computer Science: Interdisciplinary Applications)
CiteScore: 14.50
Self-citation rate: 7.80%
Annual articles: 230
Review time: 3-8 weeks
Journal description: JAMIA is AMIA's premier peer-reviewed journal for biomedical and health informatics. Covering the full spectrum of activities in the field, JAMIA includes informatics articles in the areas of clinical care, clinical research, translational science, implementation science, imaging, education, consumer health, public health, and policy. JAMIA's articles describe innovative informatics research and systems that help to advance biomedical science and to promote health. Case reports, perspectives and reviews also help readers stay connected with the most important informatics developments in implementation, policy and education.