Juan M. Garcia-Gomez, Vicent Blanes-Selva, Celia Alvarez Romero, José Carlos de Bartolomé Cenzano, Felipe Pereira Mesquita, Alejandro Pazos, Ascensión Doñate-Martínez
Journal: Artificial Intelligence in Medicine, Volume 167, Article 103168
DOI: 10.1016/j.artmed.2025.103168
Published: 2025-05-23 (Journal Article)
Mitigating patient harm risks: A proposal of requirements for AI in healthcare
With the rise of Artificial Intelligence (AI), mitigation strategies may be needed to integrate AI-enabled medical software responsibly, ensuring ethical alignment and patient safety. This study examines how to mitigate the key risks identified by the European Parliamentary Research Service (EPRS). To that end, we discuss how complementary risk-mitigation requirements may ensure the main aspects of AI in healthcare: Reliability (continuous performance evaluation, continuous usability testing, encryption and use of field-tested libraries, semantic interoperability), Transparency (AI passport, eXplainable AI, data quality assessment, bias check), Traceability (user management, audit trail, review of cases), and Responsibility (regulation check, academic-use-only disclaimer, clinician double check). A survey of 216 medical ICT professionals (medical doctors, ICT staff, and complementary profiles), conducted between March and June 2024, revealed that these requirements were perceived positively by all profiles. Respondents deemed explainable AI and data quality assessment essential for transparency; the audit trail for traceability; and regulatory compliance and the clinician double check for responsibility. Clinicians rated the following requirements as more relevant (p < 0.05) than technicians: continuous performance assessment, usability testing, encryption, AI passport, retrospective case review, and academic use check. Additionally, users found the AI passport more relevant for transparency than decision-makers did (p < 0.05). We trust that this proposal can serve as a starting point for endowing future AI systems in medical practice with requirements that ensure their ethical deployment.
Journal introduction:
Artificial Intelligence in Medicine publishes original articles from a wide variety of interdisciplinary perspectives concerning the theory and practice of artificial intelligence (AI) in medicine, medically-oriented human biology, and health care.
Artificial intelligence in medicine may be characterized as the scientific discipline pertaining to research studies, projects, and applications that aim at supporting decision-based medical tasks through knowledge- and/or data-intensive computer-based solutions that ultimately support and improve the performance of a human care provider.