Investigating the capabilities of advanced large language models in generating patient instructions and patient educational material.

IF 1.6 · CAS Zone 4 (Medicine) · JCR Q3 · Pharmacology & Pharmacy
Kannan Sridharan, Gowri Sivaramakrishnan
{"title":"研究先进的大型语言模型在生成患者指南和患者教育材料方面的能力。","authors":"Kannan Sridharan, Gowri Sivaramakrishnan","doi":"10.1136/ejhpharm-2024-004245","DOIUrl":null,"url":null,"abstract":"<p><strong>Objectives: </strong>Large language models (LLMs) with advanced language generation capabilities have the potential to enhance patient interactions. This study evaluates the effectiveness of ChatGPT 4.0 and Gemini 1.0 Pro in providing patient instructions and creating patient educational material (PEM).</p><p><strong>Methods: </strong>A cross-sectional study employed ChatGPT 4.0 and Gemini 1.0 Pro across six medical scenarios using simple and detailed prompts. The Patient Education Materials Assessment Tool for Print materials (PEMAT-P) evaluated the understandability, actionability, and readability of the outputs.</p><p><strong>Results: </strong>LLMs provided consistent responses, especially regarding drug information, therapeutic goals, administration, common side effects, and interactions. However, they lacked guidance on expiration dates and proper medication disposal. Detailed prompts yielded comprehensible outputs for the average adult. ChatGPT 4.0 had mean understandability and actionability scores of 80% and 60%, respectively, compared with 67% and 60% for Gemini 1.0 Pro. ChatGPT 4.0 produced longer outputs, achieving 85% readability with detailed prompts, while Gemini 1.0 Pro maintained consistent readability. Simple prompts resulted in ChatGPT 4.0 outputs at a 10th-grade reading level, while Gemini 1.0 Pro outputs were at a 7th-grade level. Both LLMs produced outputs at a 6th-grade level with detailed prompts.</p><p><strong>Conclusion: </strong>LLMs show promise in generating patient instructions and PEM. However, healthcare professional oversight and patient education on LLM use are essential for effective implementation.</p>","PeriodicalId":12050,"journal":{"name":"European journal of hospital pharmacy : science and practice","volume":" ","pages":""},"PeriodicalIF":1.6000,"publicationDate":"2024-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Investigating the capabilities of advanced large language models in generating patient instructions and patient educational material.\",\"authors\":\"Kannan Sridharan, Gowri Sivaramakrishnan\",\"doi\":\"10.1136/ejhpharm-2024-004245\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Objectives: </strong>Large language models (LLMs) with advanced language generation capabilities have the potential to enhance patient interactions. This study evaluates the effectiveness of ChatGPT 4.0 and Gemini 1.0 Pro in providing patient instructions and creating patient educational material (PEM).</p><p><strong>Methods: </strong>A cross-sectional study employed ChatGPT 4.0 and Gemini 1.0 Pro across six medical scenarios using simple and detailed prompts. The Patient Education Materials Assessment Tool for Print materials (PEMAT-P) evaluated the understandability, actionability, and readability of the outputs.</p><p><strong>Results: </strong>LLMs provided consistent responses, especially regarding drug information, therapeutic goals, administration, common side effects, and interactions. However, they lacked guidance on expiration dates and proper medication disposal. Detailed prompts yielded comprehensible outputs for the average adult. 
ChatGPT 4.0 had mean understandability and actionability scores of 80% and 60%, respectively, compared with 67% and 60% for Gemini 1.0 Pro. ChatGPT 4.0 produced longer outputs, achieving 85% readability with detailed prompts, while Gemini 1.0 Pro maintained consistent readability. Simple prompts resulted in ChatGPT 4.0 outputs at a 10th-grade reading level, while Gemini 1.0 Pro outputs were at a 7th-grade level. Both LLMs produced outputs at a 6th-grade level with detailed prompts.</p><p><strong>Conclusion: </strong>LLMs show promise in generating patient instructions and PEM. However, healthcare professional oversight and patient education on LLM use are essential for effective implementation.</p>\",\"PeriodicalId\":12050,\"journal\":{\"name\":\"European journal of hospital pharmacy : science and practice\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":1.6000,\"publicationDate\":\"2024-10-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"European journal of hospital pharmacy : science and practice\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1136/ejhpharm-2024-004245\",\"RegionNum\":4,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"PHARMACOLOGY & PHARMACY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"European journal of hospital pharmacy : science and practice","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1136/ejhpharm-2024-004245","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"PHARMACOLOGY & PHARMACY","Score":null,"Total":0}
Citations: 0

Abstract


Objectives: Large language models (LLMs) with advanced language generation capabilities have the potential to enhance patient interactions. This study evaluates the effectiveness of ChatGPT 4.0 and Gemini 1.0 Pro in providing patient instructions and creating patient educational material (PEM).

Methods: A cross-sectional study employed ChatGPT 4.0 and Gemini 1.0 Pro across six medical scenarios using simple and detailed prompts. The Patient Education Materials Assessment Tool for Print materials (PEMAT-P) evaluated the understandability, actionability, and readability of the outputs.
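
The PEMAT-P scores reported in the Results are percentages: the share of applicable items in each domain (17 understandability items, 7 actionability items) that a rater marks "Agree", with "Not Applicable" items excluded from the denominator. The study does not publish scoring code, so the following Python sketch only illustrates that arithmetic with hypothetical ratings.

```python
from typing import Iterable, Optional

def pemat_score(ratings: Iterable[Optional[bool]]) -> float:
    """Percentage score for one PEMAT-P domain.

    Each rating is True ("Agree"), False ("Disagree"), or None ("Not Applicable").
    The domain score is the share of applicable items rated "Agree", as a percentage.
    """
    applicable = [r for r in ratings if r is not None]
    if not applicable:
        raise ValueError("No applicable items to score")
    return 100.0 * sum(applicable) / len(applicable)

# Hypothetical ratings for a single LLM output: 17 understandability items
# and 7 actionability items, matching the PEMAT-P instrument's item counts.
understandability = [True] * 14 + [False] * 3                  # 14 of 17 agreed
actionability = [True, True, True, False, False, None, None]   # 3 of 5 applicable agreed

print(f"Understandability: {pemat_score(understandability):.0f}%")  # 82%
print(f"Actionability:     {pemat_score(actionability):.0f}%")      # 60%
```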

Results: LLMs provided consistent responses, especially regarding drug information, therapeutic goals, administration, common side effects, and interactions. However, they lacked guidance on expiration dates and proper medication disposal. Detailed prompts yielded comprehensible outputs for the average adult. ChatGPT 4.0 had mean understandability and actionability scores of 80% and 60%, respectively, compared with 67% and 60% for Gemini 1.0 Pro. ChatGPT 4.0 produced longer outputs, achieving 85% readability with detailed prompts, while Gemini 1.0 Pro maintained consistent readability. Simple prompts resulted in ChatGPT 4.0 outputs at a 10th-grade reading level, while Gemini 1.0 Pro outputs were at a 7th-grade level. Both LLMs produced outputs at a 6th-grade level with detailed prompts.
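
The grade levels quoted above come from standard readability formulas; the abstract does not state which one was used, but the Flesch-Kincaid grade level is a common choice for patient materials. Below is a minimal illustration, assuming the third-party textstat package and placeholder text rather than the study's actual outputs.

```python
# pip install textstat
import textstat

# Placeholder patient-instruction text, not taken from the study outputs.
text = (
    "Take one tablet by mouth every morning with water. "
    "Do not skip doses. If you miss a dose, take it as soon as you remember. "
    "Call your pharmacist if you notice a rash or swelling."
)

grade = textstat.flesch_kincaid_grade(text)   # roughly grade 3 for this short sample
ease = textstat.flesch_reading_ease(text)     # higher score = easier to read

print(f"Flesch-Kincaid grade level: {grade:.1f}")
print(f"Flesch reading ease:        {ease:.1f}")
```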

Conclusion: LLMs show promise in generating patient instructions and PEM. However, healthcare professional oversight and patient education on LLM use are essential for effective implementation.

Source journal
CiteScore: 3.40
Self-citation rate: 5.90%
Articles per year: 104
Review time: 6-12 weeks
About the journal: European Journal of Hospital Pharmacy (EJHP) offers a high quality, peer-reviewed platform for the publication of practical and innovative research which aims to strengthen the profile and professional status of hospital pharmacists. EJHP is committed to being the leading journal on all aspects of hospital pharmacy, thereby advancing the science, practice and profession of hospital pharmacy. The journal aims to become a major source for education and inspiration to improve practice and the standard of patient care in hospitals and related institutions worldwide. EJHP is the only official journal of the European Association of Hospital Pharmacists.