Development of a Preliminary Patient Safety Classification System for Generative AI
Bat-Zion Hose, Jessica L Handley, Joshua Biro, Sahithi Reddy, Seth Krevat, Aaron Zachary Hettinger, Raj M Ratwani
BMJ Quality & Safety, published online 2025-01-03. DOI: 10.1136/bmjqs-2024-017918
Citations: 0
Abstract
Generative artificial intelligence (AI) technologies have the potential to revolutionise healthcare delivery but require classification and monitoring of patient safety risks. To address this need, we developed and evaluated a preliminary classification system for categorising generative AI patient safety errors. Our classification system is organised around two AI system stages (input and output) with specific error types by stage. We applied our classification system to two generative AI applications to assess its effectiveness in categorising safety issues: patient-facing conversational large language models (LLMs) and an ambient digital scribe (ADS) system for clinical documentation. In the LLM analysis, we identified 45 errors across 27 patient medical queries, with omission being the most common (42% of errors). Of the identified errors, 50% were categorised as low clinical significance, 25% as moderate clinical significance and 25% as high clinical significance. Similarly, in the ADS simulation, we identified 66 errors across 11 patient visits, with omission being the most common (83% of errors). Of the identified errors, 55% were categorised as low clinical significance and 45% were categorised as moderate clinical significance. These findings demonstrate the classification system's utility in categorising output errors from two different AI healthcare applications, providing a starting point for developing a robust process to better understand AI-enabled errors.
Journal overview:
BMJ Quality & Safety (previously Quality & Safety in Health Care) is an international peer-reviewed publication providing research, opinions, debates and reviews for academics, clinicians and healthcare managers, focused on the quality and safety of health care and the science of improvement.
The journal receives approximately 1000 manuscripts a year and has an acceptance rate of 12% for original research. Time from submission to first decision averages 22 days, and accepted articles are typically published online within 20 days. Its current impact factor is 3.281.