Artificial intelligence in healthcare: Tailoring education to meet EU AI-Act standards

Elena Bignami, Luigino Jalale Darhour, Wolfgang Buhre, Maurizio Cecconi, Valentina Bellini

Health Policy and Technology, Volume 14, Issue 6, Article 101078. Published 25 July 2025. DOI: 10.1016/j.hlpt.2025.101078
Citations: 0
Abstract
The integration of Artificial Intelligence (AI) in Intensive Care Units (ICUs) has the potential to transform critical care by enhancing diagnosis, management, and clinical decision-making. Generative and predictive AI technologies offer new opportunities for personalized care and risk stratification, but their implementation must prioritize ethical standards, patient safety, and the sustainability of care delivery. With the EU AI Act entering into force in February 2025, a structured and responsible adoption of AI is now imperative. This article outlines a strategic framework for ICU AI integration, emphasizing the importance of a formal declaration of intent by each unit that details current AI use, implementation plans, and governance strategies. Central to this approach is the development of tailored AI education programs adapted to four distinct professional profiles, ranging from experienced clinicians with limited AI knowledge to new intensivists with strong AI backgrounds but limited clinical experience. Training must foster critical thinking, contextual interpretation, and a balanced relationship between AI tools and human judgment. A multidisciplinary support team should oversee ethical AI use and continuous performance monitoring. Ultimately, aligning regulatory compliance with targeted education and practical implementation could enable a safe, effective, and ethically grounded use of AI in intensive care. This balanced approach would support a culture of transparency and accountability, while preserving the central role of human clinical reasoning and improving the overall quality of ICU care.
About the journal:
Health Policy and Technology (HPT) is the official journal of the Fellowship of Postgraduate Medicine (FPM). It is a cross-disciplinary journal focusing on past, present and future health policy and on the role of technology in clinical and non-clinical national and international health environments.
HPT gives the FPM a further avenue for making important national and international contributions to the development of policy and practice within medicine and related disciplines. The aim of HPT is to publish relevant, timely and accessible articles and commentaries that support policy-makers, health professionals, health technology providers, patient groups and academics interested in health policy and technology.
Topics covered by HPT include:
- Health technology, including drug discovery, diagnostics, medicines, devices, therapeutic delivery and eHealth systems
- Cross-national comparisons on health policy using evidence-based approaches
- National studies on health policy to determine the outcomes of technology-driven initiatives
- Cross-border eHealth including health tourism
- The digital divide in mobility, access and affordability of healthcare
- Health technology assessment (HTA) methods and tools for evaluating the effectiveness of clinical and non-clinical health technologies
- Health and eHealth indicators and benchmarks (measures/metrics) for understanding the adoption and diffusion of health technologies
- Health and eHealth models and frameworks to support policy-makers and other stakeholders in decision-making
- Stakeholder engagement with health technologies (clinical and patient/citizen buy-in)
- Regulation and health economics