The AI-augmented clinician: Are we ready?

Impact Factor 3.8 · CAS Zone 2 (Medicine) · JCR Q1 · Clinical Neurology
Bernard Dan
DOI: 10.1111/dmcn.16291 · Developmental Medicine and Child Neurology, 67(5), 554–555 · Published 2025-02-22 · Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/dmcn.16291
Citations: 0

Abstract

Artificial intelligence (AI) will soon become indispensable across many aspects of health care and across all disciplines. A growing body of research suggests that AI-driven analysis of complex data sets has the potential to enhance diagnostics, optimize management strategies, and improve outcome measurement, ultimately enabling more personalized care.[1-3] The outlook in the literature has so far largely been optimistic, often describing AI's progress as ‘promising’, although several risks have been highlighted. One major concern is the potential decline in clinical competence if clinicians rely too heavily on AI at the expense of hands-on clinical observation and reasoning. Another significant challenge is the phenomenon known as AI ‘hallucinations’, where AI-generated information appears credible but is, in fact, incorrect or nonsensical. This issue is particularly concerning if clinicians fail to verify AI outputs thoroughly against established clinical guidelines and their own expertise. Similar considerations apply to research and academic publication, which AI is also transforming, while the principles of integrity and of human responsibility and accountability remain paramount.[4]

The anticipated evolution of health care envisions a collaborative model in which human expertise and AI-driven technology work in tandem for the benefit of patients. The prevailing assumption is that AI will empower health care professionals to deliver more accurate, efficient, and personalized care, while human judgment, empathy, and ethical decision-making will continue to play a crucial role in ensuring that technology serves patients effectively. It is widely hypothesized that AI will perform many tasks more efficiently and accurately than unaided humans, yet that human decision-making, when informed by AI, is ultimately more reliable and relevant than either alone. Studies using various methodologies have confirmed the first hypothesis, demonstrating that AI technologies can surpass health professionals in certain tasks, including diagnosing complex clinical cases. The second hypothesis, however, has been challenged: multiple studies have documented that AI alone can significantly outperform clinicians using AI-assisted tools. For instance, a recent randomized clinical trial examined physicians' diagnostic reasoning on challenging cases, comparing physicians who used conventional diagnostic resources alone with those who also used a large language model chatbot (a machine-learning model designed to understand and generate human-like text).[5] Regardless of the physicians' level of training and experience, AI alone significantly outperformed physicians using AI as an adjunct.

These findings should not be interpreted as a call for AI to function autonomously in diagnosis without physician oversight. Instead, they may reflect how clinicians interact with AI tools. Large language models are highly sensitive to user input (prompts). Appropriate prompt engineering is therefore an essential skill. Effective prompts must aim for a clear, realistic goal; be specific; and provide context. Despite the intuitive nature of chatbot interfaces, it is crucial that clinicians receive specific training to use these tools in their practice. This includes understanding AI's underlying principles, refining their approach to AI-assisted decision-making, and gaining ample practical experience. Additionally, maintaining the quality and integrity of the data sets used to train AI models is critical to their reliability.
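The three elements of an effective prompt named above (a clear goal, specificity, and context) can be sketched as a simple template. The function and the clinical example below are illustrative assumptions for exposition, not part of the editorial or of any specific chatbot's interface.

```python
# Sketch of the prompt structure described in the text: an effective prompt
# states a clear, realistic goal, is specific, and provides context.
# All names here (build_clinical_prompt, the example fields) are hypothetical.

def build_clinical_prompt(goal: str, specifics: str, context: str) -> str:
    """Assemble a chatbot prompt from the three elements named in the text."""
    return (
        f"Goal: {goal}\n"       # a clear, realistic aim
        f"Task: {specifics}\n"  # specific instructions for the model
        f"Context: {context}"   # relevant clinical background
    )

prompt = build_clinical_prompt(
    goal="Suggest a ranked differential diagnosis for physician review.",
    specifics="List up to five diagnoses, each with a one-line justification.",
    context="4-year-old with progressive gait disturbance and delayed motor milestones.",
)
print(prompt)
```

However a prompt is assembled, the output remains a draft for the clinician to verify against guidelines and their own expertise, in line with the oversight the editorial calls for.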


Journal metrics: CiteScore 7.80 · Self-citation rate 13.20% · Articles per year: 338 · Review time: 3–6 weeks
Journal overview: Wiley-Blackwell is pleased to publish Developmental Medicine & Child Neurology (DMCN), a Mac Keith Press publication and official journal of the American Academy for Cerebral Palsy and Developmental Medicine (AACPDM) and the British Paediatric Neurology Association (BPNA). For over 50 years, DMCN has defined the field of paediatric neurology and neurodisability and is one of the world's leading journals in the whole field of paediatrics. DMCN disseminates a range of information worldwide to improve the lives of disabled children and their families. The high quality of published articles is maintained by expert review, including independent statistical assessment, before acceptance.