Leveraging ChatGPT to Produce Patient Education Materials for Common Hand Conditions

George Abdelmalek MD, Harjot Uppal MBA, Daniel Garcia BS, Joseph Farshchian MD, Arash Emami MD, Andrew McGinniss MD
{"title":"Leveraging ChatGPT to Produce Patient Education Materials for Common Hand Conditions","authors":"George Abdelmalek MD ,&nbsp;Harjot Uppal MBA ,&nbsp;Daniel Garcia BS ,&nbsp;Joseph Farshchian MD ,&nbsp;Arash Emami MD ,&nbsp;Andrew McGinniss MD","doi":"10.1016/j.jhsg.2024.10.002","DOIUrl":null,"url":null,"abstract":"<div><h3>Purpose</h3><div>Many adults in the United States possess basic or below basic health literacy skills, making it essential for patient education materials (PEMs) to be presented at or below a sixth-grade reading level. We evaluate the readability of PEMs generated by ChatGPT 3.5 and 4.0 for common hand conditions.</div></div><div><h3>Methods</h3><div>We used Chat Generative Pre-Trained Transformer (ChatGPT) 3.5 and 4.0 to generate PEMs for 50 common hand pathologies. Two consistent questions were asked to minimize variability: 1. “Please explain [Condition] to a patient at a sixth-grade reading level, including details on anatomy, symptoms, doctors' examination, and treatment (both surgical and nonsurgical).” 2. “Create a detailed patient information sheet for the general patient population at a sixth-grade reading level explaining [Condition], including points such as anatomy, symptoms, physical examination, and treatment (both surgical and nonsurgical).” Before asking the second question, a priming phase was conducted where ChatGPT 3.5 and 4.0 were presented with a text sample written at a sixth-grade reading level and informed that this was the desired output level. Multiple readability tests were used to evaluate the output, with a consensus reading level created from the results of all eight readability scores. Statistical analyses were performed using SAS 9.4.</div></div><div><h3>Results</h3><div>ChatGPT 4.0 successfully produced 28% of its responses at the appropriate reading level following the priming phase, compared to none by ChatGPT 3.5. ChatGPT 4.0 showed superior performance across all readability metrics.</div></div><div><h3>Conclusions</h3><div>ChatGPT 4.0 is a more effective tool than ChatGPT 3.5 for generating PEMs at a sixth-grade reading level for common hand conditions.</div></div><div><h3>Clinical relevance</h3><div>The results suggest that Artificial Intelligence could significantly enhance patient education and health literacy with further refinement.</div></div>","PeriodicalId":36920,"journal":{"name":"Journal of Hand Surgery Global Online","volume":"7 1","pages":"Pages 37-40"},"PeriodicalIF":0.0000,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Hand Surgery Global Online","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2589514124001956","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"Medicine","Score":null,"Total":0}

Abstract

Purpose

Many adults in the United States have basic or below-basic health literacy, making it essential for patient education materials (PEMs) to be written at or below a sixth-grade reading level. We evaluated the readability of PEMs generated by ChatGPT 3.5 and 4.0 for common hand conditions.

Methods

We used Chat Generative Pre-Trained Transformer (ChatGPT) 3.5 and 4.0 to generate PEMs for 50 common hand pathologies. To minimize variability, the same two questions were asked for every condition:

1. “Please explain [Condition] to a patient at a sixth-grade reading level, including details on anatomy, symptoms, doctors' examination, and treatment (both surgical and nonsurgical).”
2. “Create a detailed patient information sheet for the general patient population at a sixth-grade reading level explaining [Condition], including points such as anatomy, symptoms, physical examination, and treatment (both surgical and nonsurgical).”

Before the second question was asked, a priming phase was conducted in which ChatGPT 3.5 and 4.0 were shown a text sample written at a sixth-grade reading level and told that this was the desired output level. Eight readability tests were used to evaluate each output, and a consensus reading level was computed from their results. Statistical analyses were performed using SAS 9.4.
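The two-question protocol can be expressed programmatically. The sketch below is a hypothetical reconstruction using the OpenAI Python client; the study queried ChatGPT itself, so the model identifier, the priming text, and the example condition here are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch of the two-question protocol described above.
# Assumptions: OpenAI Python client (openai>=1.0), the model name, and a
# placeholder priming sample; the study used the ChatGPT interface directly.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT_1 = (
    "Please explain {condition} to a patient at a sixth-grade reading level, "
    "including details on anatomy, symptoms, doctors' examination, and "
    "treatment (both surgical and nonsurgical)."
)
PROMPT_2 = (
    "Create a detailed patient information sheet for the general patient "
    "population at a sixth-grade reading level explaining {condition}, "
    "including points such as anatomy, symptoms, physical examination, and "
    "treatment (both surgical and nonsurgical)."
)
# Placeholder: the abstract does not reproduce the actual priming sample.
PRIMING_SAMPLE = "Your wrist has a small tunnel. A nerve runs through it."

def generate_pems(condition: str, model: str = "gpt-4") -> tuple[str, str]:
    """Return the two PEM responses for one hand condition."""
    # Question 1 is asked with no prior context.
    first = client.chat.completions.create(
        model=model,
        messages=[{"role": "user",
                   "content": PROMPT_1.format(condition=condition)}],
    ).choices[0].message.content

    # Priming phase: show a sixth-grade sample, then ask question 2.
    second = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "user",
             "content": "The following text is written at a sixth-grade "
                        "reading level; please write your answers at this "
                        "level: " + PRIMING_SAMPLE},
            {"role": "assistant", "content": "Understood."},
            {"role": "user",
             "content": PROMPT_2.format(condition=condition)},
        ],
    ).choices[0].message.content
    return first, second

answer_1, answer_2 = generate_pems("carpal tunnel syndrome")
```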

Results

Following the priming phase, ChatGPT 4.0 produced 28% of its responses at the target sixth-grade reading level, compared with none for ChatGPT 3.5. ChatGPT 4.0 also showed superior performance across all readability metrics.
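Operationally, a response is at the target reading level when its consensus grade is at or below 6.0. Below is a minimal sketch of that check, assuming the open-source textstat package; the abstract reports eight readability scores without naming them, so the U.S. grade-level formulas used here are common stand-ins rather than the study's exact battery.

```python
# Minimal sketch of the consensus reading-level check, assuming `textstat`.
# The study's eight readability formulas are not named in the abstract; the
# grade-level formulas below are common choices, not the exact set used.
import statistics

import textstat

GRADE_FORMULAS = [
    textstat.flesch_kincaid_grade,
    textstat.gunning_fog,
    textstat.smog_index,
    textstat.coleman_liau_index,
    textstat.automated_readability_index,
    textstat.linsear_write_formula,
    textstat.spache_readability,
]

def consensus_grade(text: str) -> float:
    """Average the grade-level estimates into one consensus reading level."""
    return statistics.mean(f(text) for f in GRADE_FORMULAS)

def at_target_level(text: str, target: float = 6.0) -> bool:
    """True if the consensus grade is at or below sixth grade."""
    return consensus_grade(text) <= target

# Example: score one generated PEM against the sixth-grade threshold.
pem = (
    "Carpal tunnel syndrome happens when a nerve in your wrist gets "
    "squeezed. Your hand may feel numb or tingly, mostly at night."
)
print(f"Consensus grade: {consensus_grade(pem):.1f}")
print("At target level" if at_target_level(pem) else "Above sixth grade")
```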

Conclusions

ChatGPT 4.0 is a more effective tool than ChatGPT 3.5 for generating PEMs at a sixth-grade reading level for common hand conditions.

Clinical relevance

The results suggest that, with further refinement, artificial intelligence could significantly enhance patient education and health literacy.