Development of a Preliminary Patient Safety Classification System for Generative AI.

Impact Factor: 5.6 · CAS Region 1 (Medicine) · JCR Q1 · Health Care Sciences & Services
Bat-Zion Hose, Jessica L Handley, Joshua Biro, Sahithi Reddy, Seth Krevat, Aaron Zachary Hettinger, Raj M Ratwani
Journal: BMJ Quality & Safety, pages 130-132
DOI: 10.1136/bmjqs-2024-017918
Published: 2025-01-28
Citations: 0

Abstract

Generative artificial intelligence (AI) technologies have the potential to revolutionise healthcare delivery but require classification and monitoring of patient safety risks. To address this need, we developed and evaluated a preliminary classification system for categorising generative AI patient safety errors. Our classification system is organised around two AI system stages (input and output) with specific error types by stage. We applied our classification system to two generative AI applications to assess its effectiveness in categorising safety issues: patient-facing conversational large language models (LLMs) and an ambient digital scribe (ADS) system for clinical documentation. In the LLM analysis, we identified 45 errors across 27 patient medical queries, with omission being the most common (42% of errors). Of the identified errors, 50% were categorised as low clinical significance, 25% as moderate clinical significance and 25% as high clinical significance. Similarly, in the ADS simulation, we identified 66 errors across 11 patient visits, with omission being the most common (83% of errors). Of the identified errors, 55% were categorised as low clinical significance and 45% were categorised as moderate clinical significance. These findings demonstrate the classification system's utility in categorising output errors from two different AI healthcare applications, providing a starting point for developing a robust process to better understand AI-enabled errors.

Source journal: BMJ Quality & Safety (Health Care Sciences & Services)
CiteScore: 9.80
Self-citation rate: 7.40%
Annual publications: 104
Review time: 4-8 weeks
Journal description: BMJ Quality & Safety (previously Quality & Safety in Health Care) is an international peer-reviewed publication providing research, opinions, debates and reviews for academics, clinicians and healthcare managers focused on the quality and safety of health care and the science of improvement. The journal receives approximately 1000 manuscripts a year and has an acceptance rate for original research of 12%. Time from submission to first decision averages 22 days and accepted articles are typically published online within 20 days. Its current impact factor is 3.281.