An ethics assessment tool for artificial intelligence implementation in healthcare: CARE-AI

Yilin Ning, Xiaoxuan Liu, Gary S. Collins, Karel G. M. Moons, Melissa McCradden, Daniel Shu Wei Ting, Jasmine Chiat Ling Ong, Benjamin Alan Goldstein, Siegfried K. Wagner, Pearse A. Keane, Eric J. Topol, Nan Liu

Nature Medicine | Published: 18 October 2024 | DOI: 10.1038/s41591-024-03310-1
Abstract
The deployment of artificial intelligence (AI)-powered prediction models in healthcare can lead to ethical concerns about their implementation and upscaling. For example, AI prediction models can hinder clinical decision-making if they advise different diagnoses or treatments by sex and gender or by race and ethnicity without clear justification. Recent guidance (such as the WHO guidance on ethics and governance of AI for health and the Dutch guideline on AI for healthcare) and legislation (such as the European Union AI Act and the White House Executive Order on Safe, Secure, and Trustworthy Development and Use of AI in the United States) have outlined important principles for the implementation of AI, including ethical considerations [1,2]. Health systems have responded by establishing governance committees and processes to ensure the safe and equitable implementation of AI tools [3]. However, there is currently no assessment tool that can identify and mitigate ethical issues during the implementation of AI prediction models in healthcare practice, including for public health.
The development and validation of AI prediction models have benefited from detailed reporting and risk-of-bias tools, such as TRIPOD+AI [4] and PROBAST (with its forthcoming AI extension) for fairness and bias control, and CLAIM [5] for data privacy, security and interpretability in AI imaging studies. However, when planning the implementation of a rigorously developed and well-performing AI prediction model in healthcare practice, existing recommendations and guidance on ethics are sparse and lack operational detail. For example, the DECIDE-AI reporting guideline [6] contains a small number of ethics-related recommendations for early clinical evaluation of AI concerning equity, safety and human-AI interaction, and FUTURE-AI [7] provides recommendations based on six principles (fairness, universality, traceability, usability, robustness and explainability) in model design, development, validation and deployment. A bioethics-centric delivery science toolkit for responsible AI implementation in healthcare is needed [8].
Journal information
Nature Medicine is a monthly journal publishing original peer-reviewed research in all areas of medicine. The publication focuses on originality, timeliness, interdisciplinary interest, and impact on improving human health. In addition to research articles, Nature Medicine also publishes commissioned content such as News, Reviews, and Perspectives. This content aims to provide context for the latest advances in translational and clinical research, reaching a wide audience of M.D. and Ph.D. readers. All editorial decisions for the journal are made by a team of full-time professional editors.
Nature Medicine considers all types of clinical research, including:
-Case-reports and small case series
-Clinical trials, whether phase 1, 2, 3 or 4
-Observational studies
-Meta-analyses
-Biomarker studies
-Public and global health studies
Nature Medicine is also committed to facilitating communication between translational and clinical researchers. As such, we consider “hybrid” studies with preclinical and translational findings reported alongside data from clinical studies.