Mitigating patient harm risks: A proposal of requirements for AI in healthcare

IF 6.2 · CAS Tier 2 (Medicine) · JCR Q1 (Computer Science, Artificial Intelligence)
Juan M. Garcia-Gomez , Vicent Blanes-Selva , Celia Alvarez Romero , José Carlos de Bartolomé Cenzano , Felipe Pereira Mesquita , Alejandro Pazos , Ascensión Doñate-Martínez
DOI: 10.1016/j.artmed.2025.103168
Journal: Artificial Intelligence in Medicine, Volume 167, Article 103168
Published: 2025-05-23 (Journal Article)
Citations: 0

Abstract

Mitigating patient harm risks: A proposal of requirements for AI in healthcare
With the rise of Artificial Intelligence (AI), mitigation strategies may be needed to integrate AI-enabled medical software responsibly, ensuring ethical alignment and patient safety. This study examines how to mitigate the key risks identified by the European Parliamentary Research Service (EPRS). To that end, we discuss how complementary risk-mitigation requirements may ensure the main aspects of AI in healthcare: Reliability (continuous performance evaluation, continuous usability testing, encryption and use of field-tested libraries, semantic interoperability); Transparency (AI passport, eXplainable AI, data quality assessment, bias check); Traceability (user management, audit trail, review of cases); and Responsibility (regulation check, academic-use-only disclaimer, clinician double check). A survey conducted among 216 medical ICT professionals (medical doctors, ICT staff and complementary profiles) between March and June 2024 revealed that these requirements were perceived positively by all profiles. Respondents deemed explainable AI and data quality assessment essential for transparency; audit trail for traceability; and regulatory compliance and clinician double check for responsibility. Clinicians rated the following requirements as more relevant (p < 0.05) than technicians did: continuous performance assessment, usability testing, encryption, AI passport, retrospective case review, and academic use check. Additionally, users found the AI passport more relevant for transparency than decision-makers did (p < 0.05). We trust that this proposal can serve as a starting point to endow future AI systems in medical practice with requirements that ensure their ethical deployment.
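The "AI passport" named among the transparency requirements suggests a structured, machine-readable record of a model's provenance, intended use, and evaluation history. The abstract does not specify a schema; the sketch below is a hypothetical minimal Python representation, and every field name in it is an illustrative assumption rather than the structure proposed by the paper.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class AIPassport:
    """Hypothetical minimal 'AI passport' record. Field names are
    illustrative assumptions, not the schema defined in the paper."""
    model_name: str
    version: str
    intended_use: str
    training_data_summary: str
    last_performance_check: str                 # ISO date of latest evaluation
    known_limitations: list = field(default_factory=list)

# Example record for an imaginary decision-support model
passport = AIPassport(
    model_name="sepsis-risk-classifier",
    version="1.4.2",
    intended_use="decision support only; clinician double check required",
    training_data_summary="retrospective EHR cohort, 2015-2022",
    last_performance_check="2024-06-01",
    known_limitations=["not validated for pediatric patients"],
)

# A dict form like this could be serialized and shipped with the model
record = asdict(passport)
print(record["model_name"], record["version"])
```

Keeping such a record alongside the deployed model would let auditors and clinicians verify, at a glance, when the model was last evaluated and what its stated limitations are, which is the spirit of the transparency and traceability requirements surveyed here.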
Source journal
Artificial Intelligence in Medicine (Engineering & Technology - Biomedical Engineering)
CiteScore: 15.00
Self-citation rate: 2.70%
Annual articles: 143
Review time: 6.3 months
Journal description: Artificial Intelligence in Medicine publishes original articles from a wide variety of interdisciplinary perspectives concerning the theory and practice of artificial intelligence (AI) in medicine, medically-oriented human biology, and health care. Artificial intelligence in medicine may be characterized as the scientific discipline pertaining to research studies, projects, and applications that aim at supporting decision-based medical tasks through knowledge- and/or data-intensive computer-based solutions that ultimately support and improve the performance of a human care provider.