Can artificial intelligence (AI) educate your patient? A study to assess overall readability and pharmacists' perception of AI-generated patient education materials

Impact Factor 1.3 · Q4 · Pharmacology & Pharmacy
Drew Armstrong Pharm.D., Caroline Paul B.S., Brent McGlaughlin Pharm.D., David Hill Pharm.D.
Journal of the American College of Clinical Pharmacy (JACCP) · DOI: 10.1002/jac5.2006 · Published 2024-06-29 · Citations: 0

Abstract

Introduction

Pharmacists are critical in providing safe and accurate education to patients on disease states and medications. Artificial intelligence (AI) can generate patient education materials rapidly, potentially saving healthcare resources. However, the accuracy of these materials, and pharmacists' comfort with using them, need to be assessed.

Objective

The purpose of this study was to assess the accuracy and readability of AI-generated patient education materials for ten common medications and disease states, and the likelihood that pharmacists would use them.

Methods

AI (Chat Generative Pre-Trained Transformer [ChatGPT] v3.5) was used to create patient education materials for the following medications or disease states: apixaban, Continuous Glucose Monitoring (CGM), the Dietary Approaches to Stop Hypertension (DASH) Diet, enoxaparin, hypertension, hypoglycemia, myocardial infarction, naloxone, semaglutide, and warfarin. The prompt "Write a patient education material for…", with each medication or disease state appended, was entered into the ChatGPT (OpenAI, San Francisco, CA) software. A similar prompt, "Write a patient education material for…at a 6th-grade reading level or lower", was then entered for the same medications and disease states. Ten clinical pharmacists were asked to review each educational material, record the time the review took, make clinical and grammatical edits, and rate both their confidence in the clinical accuracy of the material and the likelihood that they would use it with their patients. The materials were also assessed for readability using the Flesch-Kincaid readability score.
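The two prompt sets described above can be reconstructed programmatically. A minimal sketch (prompt construction only; how the authors submitted the prompts to ChatGPT beyond the web interface is not specified in the abstract):

```python
# Build the two prompt sets described in the Methods: 10 topics x 2 prompt styles.
TOPICS = [
    "apixaban",
    "Continuous Glucose Monitoring (CGM)",
    "the Dietary Approaches to Stop Hypertension (DASH) Diet",
    "enoxaparin",
    "hypertension",
    "hypoglycemia",
    "myocardial infarction",
    "naloxone",
    "semaglutide",
    "warfarin",
]

# Standard prompt: topic appended to the base wording.
standard_prompts = [f"Write a patient education material for {t}" for t in TOPICS]

# Readability-constrained prompt: same topics, 6th-grade ceiling added.
sixth_grade_prompts = [
    f"Write a patient education material for {t} at a 6th-grade reading level or lower"
    for t in TOPICS
]
```

Each topic thus yields a paired set of one unconstrained and one reading-level-constrained material, which is what allows the within-topic readability comparison reported in the Results.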

Results

Eight pharmacists completed both sets of reviews, assessing a total of 16 patient education materials. There was no statistically significant difference in any pharmacist assessment between the two prompts. Overall confidence in accuracy was fair, and the overall readability score of the AI-generated materials decreased from 11.65 to 5.87 with the 6th-grade prompt (p < .001).
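The readability scores reported above (11.65 vs. 5.87) are Flesch-Kincaid grade levels, computed by the standard formula: grade = 0.39 × (words/sentences) + 11.8 × (syllables/words) − 15.59. A minimal sketch using a rough vowel-group syllable heuristic (published tools use dictionary-based syllable counts, so exact values will differ slightly):

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count vowel groups, dropping a silent trailing 'e'.
    word = word.lower()
    groups = re.findall(r"[aeiouy]+", word)
    n = len(groups)
    if word.endswith("e") and n > 1 and not word.endswith(("le", "ee")):
        n -= 1
    return max(n, 1)

def flesch_kincaid_grade(text: str) -> float:
    # Grade = 0.39 * (words per sentence) + 11.8 * (syllables per word) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)
```

Simple, short-sentence text scores at a low grade level, while dense clinical prose scores far higher, which is the gap the 6th-grade prompt was designed to close.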

Conclusion

AI-generated patient education materials show promise for clinical practice; however, validating their clinical accuracy remains a burden. It is important to ensure that the readability of patient education materials is at an appropriate level to increase the likelihood of patient understanding.
