{"title":"医疗保健中的人工智能政策:用于结构化实施的基于检查清单的方法。","authors":"Elena Bignami, Luigino Jalale Darhour, Gabriele Franco, Matteo Guarnieri, Valentina Bellini","doi":"10.1186/s44158-025-00278-3","DOIUrl":null,"url":null,"abstract":"<p><strong>Introduction: </strong>Artificial Intelligence (AI) is transforming anaesthesia and intensive care medicine, enhancing diagnostic precision, workflow efficiency, and patient safety. However, deploying AI in high-acuity environments involves regulatory, ethical, and operational challenges. The European Union Artificial Intelligence Act (AI Act), effective 2025, imposes binding obligations on healthcare organizations, creating an urgent need for structured, governance-focused AI policies. This work presents a checklist-based methodology for responsible, safe, ethical, and regulation-aligned AI adoption in clinical units.</p><p><strong>The need for a methodology to develop an ai policy: </strong>Effective AI policies must ensure transparency, safety, fairness, and regulatory compliance while remaining adaptable to rapid technological and legislative changes. The proposed methodology employs a domain-specific checklist to generate critical evaluative questions, enabling healthcare professionals to systematically assess AI systems' appropriateness, reliability, and legal implications without relying on rigid, quickly outdated prescriptive rules.</p><p><strong>The ai act and its relevance: </strong>Regulation (EU) 2024/1689 establishes the first comprehensive AI legal framework, introducing risk-based classification, imposing stringent requirements for high-risk AI, often including medical devices. Compliance obligations extend to both AI-system providers and deployers, making operational compliance instruments and AI literacy programmes essential for lawful implementation.</p><p><strong>Ai literacy: </strong>OBLIGATION AND PLANNING: From February 2025, the AI Act mandates AI literacy for all personnel interacting with AI-systems. 
Training should cover baseline competencies for all staff, advanced modules for specialists, continuous professional development, and integration of ethical, legal, and governance principles. Competency acquisition and updates must be systematically documented to meet institutional and EU compliance standards.</p><p><strong>Operational checklist for the adoption of ai policy: </strong>The checklist has two integrated domains: clinical and technical validation, including evidence-based performance assessment, real-world validation, MDR compliance, GDPR adherence, and post-deployment monitoring; and governance and compliance, covering AI Act conformity, organizational accountability, decision traceability, human oversight, AI literacy, and structured audit and update mechanisms.</p><p><strong>Future perspectives: </strong>The checklist methodology offers a scalable, adaptable, regulation-ready framework for AI policy development. By embedding legal compliance, clinical safety, governance, and continuous staff training, it supports sustainable AI integration. 
Future updates will incorporate regulatory changes, real-world feedback, and impact metrics, enhancing AI's contribution to quality, safety, and equity in patient care.</p>","PeriodicalId":73597,"journal":{"name":"Journal of Anesthesia, Analgesia and Critical Care (Online)","volume":"5 1","pages":"56"},"PeriodicalIF":3.1000,"publicationDate":"2025-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12465464/pdf/","citationCount":"0","resultStr":"{\"title\":\"AI policy in healthcare: a checklist-based methodology for structured implementation.\",\"authors\":\"Elena Bignami, Luigino Jalale Darhour, Gabriele Franco, Matteo Guarnieri, Valentina Bellini\",\"doi\":\"10.1186/s44158-025-00278-3\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Introduction: </strong>Artificial Intelligence (AI) is transforming anaesthesia and intensive care medicine, enhancing diagnostic precision, workflow efficiency, and patient safety. However, deploying AI in high-acuity environments involves regulatory, ethical, and operational challenges. The European Union Artificial Intelligence Act (AI Act), effective 2025, imposes binding obligations on healthcare organizations, creating an urgent need for structured, governance-focused AI policies. This work presents a checklist-based methodology for responsible, safe, ethical, and regulation-aligned AI adoption in clinical units.</p><p><strong>The need for a methodology to develop an ai policy: </strong>Effective AI policies must ensure transparency, safety, fairness, and regulatory compliance while remaining adaptable to rapid technological and legislative changes. 
The proposed methodology employs a domain-specific checklist to generate critical evaluative questions, enabling healthcare professionals to systematically assess AI systems' appropriateness, reliability, and legal implications without relying on rigid, quickly outdated prescriptive rules.</p><p><strong>The ai act and its relevance: </strong>Regulation (EU) 2024/1689 establishes the first comprehensive AI legal framework, introducing risk-based classification, imposing stringent requirements for high-risk AI, often including medical devices. Compliance obligations extend to both AI-system providers and deployers, making operational compliance instruments and AI literacy programmes essential for lawful implementation.</p><p><strong>Ai literacy: </strong>OBLIGATION AND PLANNING: From February 2025, the AI Act mandates AI literacy for all personnel interacting with AI-systems. Training should cover baseline competencies for all staff, advanced modules for specialists, continuous professional development, and integration of ethical, legal, and governance principles. Competency acquisition and updates must be systematically documented to meet institutional and EU compliance standards.</p><p><strong>Operational checklist for the adoption of ai policy: </strong>The checklist has two integrated domains: clinical and technical validation, including evidence-based performance assessment, real-world validation, MDR compliance, GDPR adherence, and post-deployment monitoring; and governance and compliance, covering AI Act conformity, organizational accountability, decision traceability, human oversight, AI literacy, and structured audit and update mechanisms.</p><p><strong>Future perspectives: </strong>The checklist methodology offers a scalable, adaptable, regulation-ready framework for AI policy development. By embedding legal compliance, clinical safety, governance, and continuous staff training, it supports sustainable AI integration. 
Future updates will incorporate regulatory changes, real-world feedback, and impact metrics, enhancing AI's contribution to quality, safety, and equity in patient care.</p>\",\"PeriodicalId\":73597,\"journal\":{\"name\":\"Journal of Anesthesia, Analgesia and Critical Care (Online)\",\"volume\":\"5 1\",\"pages\":\"56\"},\"PeriodicalIF\":3.1000,\"publicationDate\":\"2025-09-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12465464/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Anesthesia, Analgesia and Critical Care (Online)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1186/s44158-025-00278-3\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Anesthesia, Analgesia and Critical Care (Online)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1186/s44158-025-00278-3","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
AI policy in healthcare: a checklist-based methodology for structured implementation.
Introduction: Artificial Intelligence (AI) is transforming anaesthesia and intensive care medicine, enhancing diagnostic precision, workflow efficiency, and patient safety. However, deploying AI in high-acuity environments involves regulatory, ethical, and operational challenges. The European Union Artificial Intelligence Act (AI Act), effective 2025, imposes binding obligations on healthcare organizations, creating an urgent need for structured, governance-focused AI policies. This work presents a checklist-based methodology for responsible, safe, ethical, and regulation-aligned AI adoption in clinical units.
The need for a methodology to develop an AI policy: Effective AI policies must ensure transparency, safety, fairness, and regulatory compliance while remaining adaptable to rapid technological and legislative changes. The proposed methodology employs a domain-specific checklist to generate critical evaluative questions, enabling healthcare professionals to systematically assess AI systems' appropriateness, reliability, and legal implications without relying on rigid, quickly outdated prescriptive rules.
The AI Act and its relevance: Regulation (EU) 2024/1689 establishes the first comprehensive AI legal framework, introducing a risk-based classification and imposing stringent requirements on high-risk AI, a category that often includes medical devices. Compliance obligations extend to both AI-system providers and deployers, making operational compliance instruments and AI literacy programmes essential for lawful implementation.
AI literacy: obligation and planning: From February 2025, the AI Act mandates AI literacy for all personnel interacting with AI systems. Training should cover baseline competencies for all staff, advanced modules for specialists, continuous professional development, and integration of ethical, legal, and governance principles. Competency acquisition and updates must be systematically documented to meet institutional and EU compliance standards.
Operational checklist for the adoption of an AI policy: The checklist has two integrated domains: clinical and technical validation, including evidence-based performance assessment, real-world validation, MDR compliance, GDPR adherence, and post-deployment monitoring; and governance and compliance, covering AI Act conformity, organizational accountability, decision traceability, human oversight, AI literacy, and structured audit and update mechanisms.
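As a purely illustrative sketch (not the authors' actual instrument), the two-domain checklist and its role in generating critical evaluative questions could be represented as a simple data structure; the item wording and the question template below are assumptions drawn only from the domains and items listed in the abstract.

```python
# Hypothetical representation of the two-domain operational checklist.
# Domain and item names are taken from the abstract; the question
# template is an illustrative assumption, not the published wording.

CHECKLIST = {
    "Clinical and technical validation": [
        "evidence-based performance assessment",
        "real-world validation",
        "MDR compliance",
        "GDPR adherence",
        "post-deployment monitoring",
    ],
    "Governance and compliance": [
        "AI Act conformity",
        "organizational accountability",
        "decision traceability",
        "human oversight",
        "AI literacy",
        "structured audit and update mechanisms",
    ],
}

def evaluative_questions(checklist):
    """Turn each checklist item into a critical evaluative question,
    tagged with its domain, for systematic pre-deployment review."""
    for domain, items in checklist.items():
        for item in items:
            yield f"[{domain}] Has {item} been addressed and documented?"

if __name__ == "__main__":
    for question in evaluative_questions(CHECKLIST):
        print(question)
```

A structure like this keeps the policy instrument adaptable: updating the checklist data (e.g. after a regulatory change) regenerates the evaluative questions without rewriting prescriptive rules.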
Future perspectives: The checklist methodology offers a scalable, adaptable, regulation-ready framework for AI policy development. By embedding legal compliance, clinical safety, governance, and continuous staff training, it supports sustainable AI integration. Future updates will incorporate regulatory changes, real-world feedback, and impact metrics, enhancing AI's contribution to quality, safety, and equity in patient care.