Ethicara for Responsible AI in Healthcare: A System for Bias Detection and AI Risk Management.

AMIA Annual Symposium Proceedings. Published 2024-10-21 (eCollection date: 2023-01-01).
Maria Kritharidou, Georgios Chrysogonidis, Tasos Ventouris, Vaios Tsarapastsanis, Danai Aristeridou, Anastasia Karatzia, Veena Calambur, Ahsan Huda, Sabrina Hsueh
AMIA Annual Symposium Proceedings, vol. 2023, pp. 2023-2032. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11492113/pdf/
Citations: 0

Abstract

The increasing stream of health AI innovations holds promise for facilitating the delivery of patient-centered care. Yet the enablement and adoption of AI innovations in the healthcare and life science industries can be challenging given rising concerns about AI risks and potential harms to health equity. This paper describes Ethicara, a system that enables health AI risk assessment for responsible AI model development. Ethicara works by orchestrating a collection of self-analytics services that detect and mitigate bias and increase model transparency from harmonized data models. Given the current lack of risk controls in the health AI development and deployment process, the self-analytics tools enabled by Ethicara are expected to provide repeatable and measurable controls to operationalize voluntary risk management frameworks and guidelines (e.g., NIST RMF, FDA GMLP) and regulatory requirements emerging from upcoming AI regulations (e.g., EU AI Act, US Blueprint for an AI Bill of Rights). In addition, Ethicara provides plug-ins via which analytics results are incorporated into healthcare applications. This paper provides an overview of Ethicara's architecture, pipeline, and technical components, showcases the system's capability to facilitate responsible AI use, and exemplifies the types of AI risk controls it enables in the healthcare and life science industry.
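To make the notion of a "repeatable and measurable" bias control concrete, the sketch below computes a disparate impact ratio, one common group-fairness metric of the kind a bias-detection service like those Ethicara orchestrates might report. This is an illustrative sketch only, not Ethicara's actual API; the function name, data, and the 0.8 flagging threshold (the conventional "four-fifths rule") are assumptions, not details from the paper.

```python
# Hypothetical sketch of one bias-detection control: the disparate
# impact ratio compares favorable-outcome rates between an
# unprivileged and a privileged group. Values well below 1.0
# (commonly < 0.8, the "four-fifths rule") flag potential bias.

def disparate_impact_ratio(outcomes, groups, privileged):
    """Return (favorable rate for unprivileged) / (favorable rate
    for privileged), where outcomes are 1 (favorable) or 0."""
    priv = [o for o, g in zip(outcomes, groups) if g == privileged]
    unpriv = [o for o, g in zip(outcomes, groups) if g != privileged]
    rate = lambda xs: sum(xs) / len(xs)
    return rate(unpriv) / rate(priv)

# Toy model predictions (1 = favorable, e.g. "referred to care")
outcomes = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio = disparate_impact_ratio(outcomes, groups, privileged="A")
print(f"disparate impact ratio: {ratio:.2f}")  # 0.67 here: below 0.8, so flagged
```

A risk-management pipeline would run such metrics automatically over harmonized data models and log the results as evidence against a control in a framework like the NIST AI RMF.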
