{"title":"The AI-augmented clinician: Are we ready?","authors":"Bernard Dan","doi":"10.1111/dmcn.16291","DOIUrl":null,"url":null,"abstract":"<p>Artificial intelligence (AI) will soon become indispensable across many aspects of health care and across all disciplines. A growing body of research suggests that AI-driven analysis of complex data sets has the potential to enhance diagnostics, optimize management strategies, and improve outcome measurement, ultimately enabling more personalized care.<span><sup>1-3</sup></span> The outlook in the literature has so far largely been optimistic, often describing AI's progress as ‘promising’, although several risks have been highlighted. One major concern is the potential decline in clinical competence if clinicians rely too heavily on AI at the expense of hands-on clinical observation and reasoning. Another significant challenge is the phenomenon known as AI ‘hallucinations’, where AI-generated information appears credible but is, in fact, incorrect or nonsensical. This issue is particularly concerning if clinicians fail to verify AI outputs thoroughly against established clinical guidelines and their own expertise. Similar considerations apply to research and academic publication, which AI is also transforming, while the principles of integrity and human responsibility and accountability remain paramount.<span><sup>4</sup></span></p><p>The anticipated evolution of health care envisions a collaborative model in which human expertise and AI-driven technology work in tandem for the benefit of patients. The prevailing assumption is that AI will empower health care professionals to deliver more accurate, efficient, and personalized care; while human judgment, empathy, and ethical decision-making will continue to play a crucial role in ensuring that technology serves patients effectively. 
It is widely hypothesized that AI will perform many tasks more efficiently and accurately than unaided humans, yet human decision-making, when informed by AI, is ultimately more reliable and relevant than either alone. Studies using various methodologies have confirmed the first hypothesis, demonstrating that AI technologies can surpass health professionals in certain tasks, including diagnosing complex clinical cases. However, multiple studies have documented that AI alone can significantly outperform clinicians who use AI-assisted tools. For instance, a recent randomized clinical trial examined physicians' diagnostic reasoning on challenging cases, comparing those who used conventional diagnostic resources alone versus those who supplemented their approach with a large language model chatbot (a machine-learning model designed to understand and generate human-like text).<span><sup>5</sup></span> Regardless of the physicians' level of training and experience, AI alone significantly outperformed those using AI as an adjunct.</p><p>These findings should not be interpreted as a call for AI to function autonomously in diagnosis without physician oversight. Instead, they may reflect how clinicians interact with AI tools. Large language models are highly sensitive to user input (prompts). Appropriate prompt engineering is therefore an essential skill. Effective prompts must aim for a clear, realistic goal; be specific; and provide context. Despite the intuitive nature of chatbot interfaces, it is crucial that clinicians receive specific training to utilize them in their practice. This includes understanding AI's underlying principles, refining their approach to AI-assisted decision-making, and gaining ample practical experience. 
Additionally, ensuring and maintaining the high quality and integrity of the data sets used to train AI models is critical to ensuring their reliability.</p>","PeriodicalId":50587,"journal":{"name":"Developmental Medicine and Child Neurology","volume":"67 5","pages":"554-555"},"PeriodicalIF":3.8000,"publicationDate":"2025-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/dmcn.16291","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Developmental Medicine and Child Neurology","FirstCategoryId":"3","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/dmcn.16291","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"CLINICAL NEUROLOGY","Score":null,"Total":0}
Citations: 0
Abstract
Artificial intelligence (AI) will soon become indispensable across many aspects of health care and across all disciplines. A growing body of research suggests that AI-driven analysis of complex data sets has the potential to enhance diagnostics, optimize management strategies, and improve outcome measurement, ultimately enabling more personalized care.1-3 The outlook in the literature has so far largely been optimistic, often describing AI's progress as ‘promising’, although several risks have been highlighted. One major concern is the potential decline in clinical competence if clinicians rely too heavily on AI at the expense of hands-on clinical observation and reasoning. Another significant challenge is the phenomenon known as AI ‘hallucinations’, where AI-generated information appears credible but is, in fact, incorrect or nonsensical. This issue is particularly concerning if clinicians fail to verify AI outputs thoroughly against established clinical guidelines and their own expertise. Similar considerations apply to research and academic publication, which AI is also transforming, while the principles of integrity and human responsibility and accountability remain paramount.4
The anticipated evolution of health care envisions a collaborative model in which human expertise and AI-driven technology work in tandem for the benefit of patients. The prevailing assumption is that AI will empower health care professionals to deliver more accurate, efficient, and personalized care, while human judgment, empathy, and ethical decision-making will continue to play a crucial role in ensuring that technology serves patients effectively. It is widely hypothesized that AI will perform many tasks more efficiently and accurately than unaided humans, yet that human decision-making, when informed by AI, is ultimately more reliable and relevant than either alone. Studies using various methodologies have confirmed the first hypothesis, demonstrating that AI technologies can surpass health professionals in certain tasks, including diagnosing complex clinical cases. The second hypothesis has proved less robust, however: multiple studies have documented that AI alone can significantly outperform clinicians who use AI-assisted tools. For instance, a recent randomized clinical trial examined physicians' diagnostic reasoning on challenging cases, comparing those who used conventional diagnostic resources alone with those who supplemented their approach with a large language model chatbot (a machine-learning model designed to understand and generate human-like text).5 Regardless of the physicians' level of training and experience, AI alone significantly outperformed physicians using AI as an adjunct.
These findings should not be interpreted as a call for AI to function autonomously in diagnosis without physician oversight. Instead, they may reflect how clinicians interact with AI tools. Large language models are highly sensitive to user input (prompts). Appropriate prompt engineering is therefore an essential skill. Effective prompts must aim for a clear, realistic goal; be specific; and provide context. Despite the intuitive nature of chatbot interfaces, it is crucial that clinicians receive specific training to utilize them in their practice. This includes understanding AI's underlying principles, refining their approach to AI-assisted decision-making, and gaining ample practical experience. Additionally, ensuring and maintaining the high quality and integrity of the data sets used to train AI models is critical to ensuring their reliability.
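The three prompt principles above (a clear, realistic goal; specificity; context) can be made concrete with a minimal sketch. The template, field names, and clinical vignette below are hypothetical illustrations, not drawn from the editorial or any validated tool:

```python
def build_clinical_prompt(role, context, findings, goal):
    """Assemble a structured prompt that states an explicit goal,
    gives specific findings, and supplies clinical context."""
    return (
        f"You are assisting a {role}.\n"
        f"Context: {context}\n"
        f"Findings: {findings}\n"
        f"Task: {goal}\n"
        "List a ranked differential diagnosis with brief reasoning, "
        "and state what additional information would change the ranking."
    )

# A vague prompt, for contrast, violates all three principles:
vague = "What does this child have?"

# A prompt following the three principles (hypothetical vignette):
structured = build_clinical_prompt(
    role="paediatric neurologist",
    context="4-year-old, term birth, normal early milestones",
    findings="progressive gait ataxia over 6 months; normal brain MRI",
    goal="Generate a differential diagnosis for review by the clinician",
)
print(structured)
```

The point of the sketch is not the wording but the discipline: the structured version tells the model what it is for, what is known, and what output is wanted, whereas the vague version leaves all three unstated, and the model's answer is correspondingly unanchored.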
Journal description:
Wiley-Blackwell is pleased to publish Developmental Medicine & Child Neurology (DMCN), a Mac Keith Press publication and official journal of the American Academy for Cerebral Palsy and Developmental Medicine (AACPDM) and the British Paediatric Neurology Association (BPNA).
For over 50 years, DMCN has defined the field of paediatric neurology and neurodisability and is one of the world’s leading journals in the whole field of paediatrics. DMCN disseminates a range of information worldwide to improve the lives of disabled children and their families. The high quality of published articles is maintained by expert review, including independent statistical assessment, before acceptance.