What makes a 'good' decision with artificial intelligence? A grounded theory study in paediatric care.

Impact Factor: 9.0 | CAS Tier 3 (Medicine) | JCR Q1 (MEDICINE, GENERAL & INTERNAL)
Melissa D McCradden, Kelly Thai, Azadeh Assadi, Sana Tonekaboni, Ian Stedman, Shalmali Joshi, Minfan Zhang, Fanny Chevalier, Anna Goldenberg
{"title":"如何用人工智能做出“好的”决策?在儿科护理接地理论研究。","authors":"Melissa D McCradden, Kelly Thai, Azadeh Assadi, Sana Tonekaboni, Ian Stedman, Shalmali Joshi, Minfan Zhang, Fanny Chevalier, Anna Goldenberg","doi":"10.1136/bmjebm-2024-112919","DOIUrl":null,"url":null,"abstract":"<p><strong>Objective: </strong>To develop a framework for good clinical decision-making using machine learning (ML) models for interventional, patient-level decisions.</p><p><strong>Design: </strong>Grounded theory qualitative interview study.</p><p><strong>Setting: </strong>Primarily single-site at a major urban academic paediatric hospital, with external sampling.</p><p><strong>Participants: </strong>Sixteen participants representing physicians (n=10), nursing (n=3), respiratory therapists (n=2) and an ML specialist (n=1) with experience working in acute care environments were identified through purposive sampling. Individuals were recruited to represent a spectrum of ML knowledge (three expert, four knowledgeable and nine non-expert) and years of experience (median=12.9 years postgraduation). Recruitment proceeded through snowball sampling, with individuals approached to represent a diversity of fields, levels of experience and attitudes towards artificial intelligence (AI)/ML. A member check step and consultation with patients was undertaken to vet the framework, which resulted in some minor revisions to the wording and framing.</p><p><strong>Interventions: </strong>A semi-structured virtual interview simulating an intensive care unit handover for a hypothetical patient case using a simulated ML model and seven visualisations using known methods addressing interpretability of models in healthcare. Participants were asked to make an initial care plan for the patient, then were presented with a model prediction followed by the seven visualisations to explore their judgement and potential influence and understanding of the visualisations. Two visualisations contained contradicting information to probe participants' resolution process for the contrasting information. The ethical justifiability and clinical reasoning process were explored.</p><p><strong>Main outcome: </strong>A comprehensive framework was developed that is grounded in established medicolegal and ethical standards and accounts for the incorporation of inference from ML models.</p><p><strong>Results: </strong>We found that for making good decisions, participants reflected across six main categories: evidence, facts and medical knowledge relevant to the patient's condition; how that knowledge may be applied to this particular patient; patient-level, family-specific and local factors; facts about the model, its development and testing; the patient-level knowledge sufficiently represented by the model; the model's incorporation of relevant contextual factors. This judgement was centred on and anchored most heavily on the overall balance of benefits and risks to the patient, framed by the goals of care. We found evidence of automation bias, with many participants assuming that if the model's explanation conflicted with their prior knowledge that their judgement was incorrect; others concluded the exact opposite, drawing from their medical knowledge base to reject the incorrect information provided in the explanation. 
Regarding knowledge about the model, we found that participants most consistently wanted to know about the model's historical performance in the cohort of patients in their local unit where the hypothetical patient was situated.</p><p><strong>Conclusion: </strong>Good decisions using AI tools require reflection across multiple domains. We provide an actionable framework and question guide to support clinical decision-making with AI.</p>","PeriodicalId":9059,"journal":{"name":"BMJ Evidence-Based Medicine","volume":" ","pages":"183-193"},"PeriodicalIF":9.0000,"publicationDate":"2025-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"What makes a 'good' decision with artificial intelligence? A grounded theory study in paediatric care.\",\"authors\":\"Melissa D McCradden, Kelly Thai, Azadeh Assadi, Sana Tonekaboni, Ian Stedman, Shalmali Joshi, Minfan Zhang, Fanny Chevalier, Anna Goldenberg\",\"doi\":\"10.1136/bmjebm-2024-112919\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Objective: </strong>To develop a framework for good clinical decision-making using machine learning (ML) models for interventional, patient-level decisions.</p><p><strong>Design: </strong>Grounded theory qualitative interview study.</p><p><strong>Setting: </strong>Primarily single-site at a major urban academic paediatric hospital, with external sampling.</p><p><strong>Participants: </strong>Sixteen participants representing physicians (n=10), nursing (n=3), respiratory therapists (n=2) and an ML specialist (n=1) with experience working in acute care environments were identified through purposive sampling. Individuals were recruited to represent a spectrum of ML knowledge (three expert, four knowledgeable and nine non-expert) and years of experience (median=12.9 years postgraduation). Recruitment proceeded through snowball sampling, with individuals approached to represent a diversity of fields, levels of experience and attitudes towards artificial intelligence (AI)/ML. A member check step and consultation with patients was undertaken to vet the framework, which resulted in some minor revisions to the wording and framing.</p><p><strong>Interventions: </strong>A semi-structured virtual interview simulating an intensive care unit handover for a hypothetical patient case using a simulated ML model and seven visualisations using known methods addressing interpretability of models in healthcare. Participants were asked to make an initial care plan for the patient, then were presented with a model prediction followed by the seven visualisations to explore their judgement and potential influence and understanding of the visualisations. Two visualisations contained contradicting information to probe participants' resolution process for the contrasting information. 
The ethical justifiability and clinical reasoning process were explored.</p><p><strong>Main outcome: </strong>A comprehensive framework was developed that is grounded in established medicolegal and ethical standards and accounts for the incorporation of inference from ML models.</p><p><strong>Results: </strong>We found that for making good decisions, participants reflected across six main categories: evidence, facts and medical knowledge relevant to the patient's condition; how that knowledge may be applied to this particular patient; patient-level, family-specific and local factors; facts about the model, its development and testing; the patient-level knowledge sufficiently represented by the model; the model's incorporation of relevant contextual factors. This judgement was centred on and anchored most heavily on the overall balance of benefits and risks to the patient, framed by the goals of care. We found evidence of automation bias, with many participants assuming that if the model's explanation conflicted with their prior knowledge that their judgement was incorrect; others concluded the exact opposite, drawing from their medical knowledge base to reject the incorrect information provided in the explanation. Regarding knowledge about the model, we found that participants most consistently wanted to know about the model's historical performance in the cohort of patients in their local unit where the hypothetical patient was situated.</p><p><strong>Conclusion: </strong>Good decisions using AI tools require reflection across multiple domains. We provide an actionable framework and question guide to support clinical decision-making with AI.</p>\",\"PeriodicalId\":9059,\"journal\":{\"name\":\"BMJ Evidence-Based Medicine\",\"volume\":\" \",\"pages\":\"183-193\"},\"PeriodicalIF\":9.0000,\"publicationDate\":\"2025-05-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"BMJ Evidence-Based Medicine\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1136/bmjebm-2024-112919\",\"RegionNum\":3,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"MEDICINE, GENERAL & INTERNAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"BMJ Evidence-Based Medicine","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1136/bmjebm-2024-112919","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MEDICINE, GENERAL & INTERNAL","Score":null,"Total":0}
Citations: 0

Abstract

Objective: To develop a framework for good clinical decision-making using machine learning (ML) models for interventional, patient-level decisions.

Design: Grounded theory qualitative interview study.

Setting: Primarily single-site at a major urban academic paediatric hospital, with external sampling.

Participants: Sixteen participants representing physicians (n=10), nursing (n=3), respiratory therapists (n=2) and an ML specialist (n=1) with experience working in acute care environments were identified through purposive sampling. Individuals were recruited to represent a spectrum of ML knowledge (three expert, four knowledgeable and nine non-expert) and years of experience (median=12.9 years postgraduation). Recruitment proceeded through snowball sampling, with individuals approached to represent a diversity of fields, levels of experience and attitudes towards artificial intelligence (AI)/ML. A member-check step and consultation with patients were undertaken to vet the framework, resulting in minor revisions to the wording and framing.

Interventions: A semi-structured virtual interview simulating an intensive care unit handover for a hypothetical patient case, using a simulated ML model and seven visualisations based on known methods for model interpretability in healthcare. Participants were asked to make an initial care plan for the patient, then were presented with a model prediction followed by the seven visualisations, to explore their judgement, the visualisations' potential influence, and their understanding of them. Two visualisations contained contradictory information, to probe how participants resolved the conflicting information. The ethical justifiability of decisions and participants' clinical reasoning process were explored.
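The abstract does not name the seven interpretability methods used. Purely as a hedged illustration of what one such visualisation could look like in practice, the sketch below applies SHAP-style feature attribution (one widely used interpretability method in healthcare ML) to a toy risk model; every feature name, data value and the model itself are invented for illustration and are not from the study.

```python
# Illustrative sketch only: the study's seven visualisations are not specified;
# SHAP feature attribution is shown as one common interpretability method.
# All features, data and the model are hypothetical.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical patient-level vital-sign features for a simulated risk model
feature_names = ["heart_rate", "resp_rate", "spo2", "lactate", "age_months"]
X = rng.normal(size=(200, len(feature_names)))
# Synthetic binary outcome driven mainly by the first and fourth features
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# TreeExplainer computes per-feature attributions for each prediction
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Older shap versions return a list (one array per class); newer versions
# return a single array with a trailing class dimension
sv = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]

# Summary plot: which features drive the model's risk predictions overall
shap.summary_plot(sv, X, feature_names=feature_names)
```

A plot of this kind (ranked feature attributions per prediction) is representative of the genre of explanation participants were shown, not the study's actual materials.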

Main outcome: A comprehensive framework was developed that is grounded in established medicolegal and ethical standards and accounts for the incorporation of inference from ML models.

Results: We found that for making good decisions, participants reflected across six main categories: evidence, facts and medical knowledge relevant to the patient's condition; how that knowledge may be applied to this particular patient; patient-level, family-specific and local factors; facts about the model, its development and testing; whether the patient-level knowledge is sufficiently represented by the model; and the model's incorporation of relevant contextual factors. This judgement was anchored most heavily in the overall balance of benefits and risks to the patient, framed by the goals of care. We found evidence of automation bias: many participants assumed that if the model's explanation conflicted with their prior knowledge, their own judgement must be incorrect; others concluded the exact opposite, drawing on their medical knowledge base to reject the incorrect information in the explanation. Regarding knowledge about the model, participants most consistently wanted to know the model's historical performance in the patient cohort of the local unit where the hypothetical patient was situated.

Conclusion: Good decisions using AI tools require reflection across multiple domains. We provide an actionable framework and question guide to support clinical decision-making with AI.

Source journal: BMJ Evidence-Based Medicine (MEDICINE, GENERAL & INTERNAL)
CiteScore: 8.90
Self-citation rate: 3.40%
Articles per year: 48
Journal description: BMJ Evidence-Based Medicine (BMJ EBM) publishes original evidence-based research, insights and opinions on what matters for health care. We focus on the tools, methods, and concepts that are basic and central to practising evidence-based medicine and deliver relevant, trustworthy and impactful evidence. BMJ EBM is a Plan S compliant Transformative Journal and adheres to the highest possible industry standards for editorial policies and publication ethics.