{"title":"Conversational AI with large language models to increase the uptake of clinical guidance","authors":"Gloria Macia , Alison Liddell , Vincent Doyle","doi":"10.1016/j.ceh.2024.12.001","DOIUrl":null,"url":null,"abstract":"<div><div>The rise of large language models (LLMs) and conversational applications, like ChatGPT, prompts Health Technology Assessment (HTA) bodies, such as NICE, to rethink how healthcare professionals access clinical guidance. Integrating LLMs into systems like Retrieval-Augmented Generation (RAG) offers potential solutions to current LLMs’ problems, like the generation of false or misleading information. The objective of this paper is to design and debate the value of an AI-driven system, similar to ChatGPT, to enhance the uptake of clinical guidance within the National Health Service (NHS) of the UK. Conversational interfaces, powered by LLMs, offer healthcare practitioners clear benefits over traditional ways of getting clinical guidance, such as easy navigation through long documents, blending information from various trusted sources, or expediting evidence-based decisions in situ. But, putting these interfaces into practice brings new challenges for HTA bodies, like assuring quality, addressing data privacy concerns, navigating existing resource constraints, or preparing the organization for innovative practices. Rigorous empirical evaluations are necessary to validate their effectiveness in increasing the uptake of clinical guidance among healthcare practitioners. A feasible evaluation strategy is elucidated in this research while its implementation remains as future work.</div></div>","PeriodicalId":100268,"journal":{"name":"Clinical eHealth","volume":"7 ","pages":"Pages 147-152"},"PeriodicalIF":0.0000,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Clinical eHealth","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2588914124000145","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
The rise of large language models (LLMs) and conversational applications, like ChatGPT, prompts Health Technology Assessment (HTA) bodies, such as NICE, to rethink how healthcare professionals access clinical guidance. Integrating LLMs into retrieval-augmented generation (RAG) systems offers a potential solution to known LLM problems, such as the generation of false or misleading information. The objective of this paper is to design and debate the value of an AI-driven system, similar to ChatGPT, to enhance the uptake of clinical guidance within the UK's National Health Service (NHS). Conversational interfaces powered by LLMs offer healthcare practitioners clear benefits over traditional ways of accessing clinical guidance, such as easier navigation of long documents, blending information from multiple trusted sources, and expediting evidence-based decisions in situ. However, putting these interfaces into practice brings new challenges for HTA bodies, such as assuring quality, addressing data privacy concerns, navigating existing resource constraints, and preparing the organisation for innovative practices. Rigorous empirical evaluations are necessary to validate their effectiveness in increasing the uptake of clinical guidance among healthcare practitioners. A feasible evaluation strategy is outlined in this paper, while its implementation remains future work.
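The abstract describes grounding LLM answers in trusted guidance documents via RAG. The sketch below is a minimal illustration of that pattern under stated assumptions, not the authors' implementation: the guidance passages are invented examples, retrieval uses TF-IDF similarity from scikit-learn (a production system would more likely use dense embeddings over a vetted guideline corpus), and `call_llm` is a hypothetical placeholder for whichever LLM provider is chosen.

```python
# Minimal retrieval-augmented generation (RAG) sketch over clinical guidance text.
# Assumptions: illustrative passages, TF-IDF retrieval, and a stubbed LLM call.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Illustrative guidance snippets (in practice: chunked sections of published guidelines).
PASSAGES = [
    "Offer a structured education programme to adults at diagnosis of type 2 diabetes.",
    "Consider antiviral treatment within 48 hours of symptom onset in at-risk influenza patients.",
    "Review antibiotic prescriptions at 48-72 hours against microbiology results.",
]

vectorizer = TfidfVectorizer()
passage_matrix = vectorizer.fit_transform(PASSAGES)


def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k passages most similar to the question."""
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, passage_matrix).ravel()
    top = scores.argsort()[::-1][:k]
    return [PASSAGES[i] for i in top]


def call_llm(prompt: str) -> str:
    """Hypothetical placeholder: swap in the chosen LLM provider's client here."""
    return f"[LLM response would be generated from the prompt below]\n{prompt}"


def answer(question: str) -> str:
    """Ground the model's answer in retrieved guidance and ask it to cite only that guidance."""
    context = "\n".join(f"- {p}" for p in retrieve(question))
    prompt = (
        "Answer using ONLY the guidance excerpts below and cite them; "
        "say so if the excerpts do not cover the question.\n"
        f"Guidance excerpts:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)


if __name__ == "__main__":
    print(answer("When should antibiotic prescriptions be reviewed?"))
```

Constraining the model to answer only from retrieved excerpts, and to say when the excerpts are insufficient, is the design choice that addresses the false-or-misleading-information problem the abstract raises.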