Assessing the readability of dermatological patient information leaflets generated by ChatGPT-4 and its associated plugins.

Skin Health and Disease (Q3, Medicine) · Pub Date: 2025-01-20 · eCollection Date: 2025-02-01 · DOI: 10.1093/skinhd/vzae015
Dominik Todorov, Jae Yong Park, James Andrew Ng Hing Cheung, Eleni Avramidou, Dushyanth Gnanappiragasam
{"title":"Assessing the readability of dermatological patient information leaflets generated by ChatGPT-4 and its associated plugins.","authors":"Dominik Todorov, Jae Yong Park, James Andrew Ng Hing Cheung, Eleni Avramidou, Dushyanth Gnanappiragasam","doi":"10.1093/skinhd/vzae015","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>In the UK, 43% of adults struggle to understand health information presented in standard formats. As a result, Health Education England recommends that patient information leaflets (PILs) be written at a readability level appropriate for an 11-year-old.</p><p><strong>Objectives: </strong>To evaluate the ability of ChatGPT-4 and its three dermatology-specific plugins to generate PILs that meet readability recommendations and compare their readability with existing British Association of Dermatologists (BAD) PILs.</p><p><strong>Methods: </strong>ChatGPT-4 and its three plugins were used to generate PILs for 10 preselected dermatological conditions. The readability of these PILs was assessed using three readability formulas Simple Measure of Gobbledygook (SMOG), Flesch Reading Ease Test (FRET) and Flesch-Kincaid Grade Level Test (FKGLT) and compared against the readability of BAD PILs. A one-way ANOVA was conducted to identify any significant differences.</p><p><strong>Results: </strong>The readability scores of PILs generated by ChatGPT-4 and its plugins did not meet the recommended target range. However, some of these PILs demonstrated more favourable mean readability scores compared with those from the BAD, with certain plugins, such as Chat with a Dermatologist, showing significant differences in mean SMOG (<i>P</i> = 0.0005) and mean FKGLT (<i>P</i> = 0.002) scores. Nevertheless, the PILs generated by ChatGPT-4 were found to lack some of the content typically included in BAD PILs.</p><p><strong>Conclusions: </strong>ChatGPT-4 can produce dermatological PILs free from misleading information, occasionally surpassing BAD PILs in terms of readability. However, these PILs still fall short of being easily understood by the general public, and the content requires rigorous verification by healthcare professionals to ensure reliability and quality.</p>","PeriodicalId":74804,"journal":{"name":"Skin health and disease","volume":"5 1","pages":"14-21"},"PeriodicalIF":0.0000,"publicationDate":"2025-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11924364/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Skin health and disease","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1093/skinhd/vzae015","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/2/1 0:00:00","PubModel":"eCollection","JCR":"Q3","JCRName":"Medicine","Score":null,"Total":0}
引用次数: 0

Abstract

Background: In the UK, 43% of adults struggle to understand health information presented in standard formats. As a result, Health Education England recommends that patient information leaflets (PILs) be written at a readability level appropriate for an 11-year-old.

Objectives: To evaluate the ability of ChatGPT-4 and its three dermatology-specific plugins to generate PILs that meet readability recommendations and compare their readability with existing British Association of Dermatologists (BAD) PILs.

Methods: ChatGPT-4 and its three plugins were used to generate PILs for 10 preselected dermatological conditions. The readability of these PILs was assessed using three readability formulas: the Simple Measure of Gobbledygook (SMOG), the Flesch Reading Ease Test (FRET) and the Flesch-Kincaid Grade Level Test (FKGLT). Scores were then compared against the readability of BAD PILs, and a one-way ANOVA was conducted to identify any significant differences.
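The scoring and comparison pipeline described above can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' code: it assumes the textstat package for the three readability formulas and scipy for the one-way ANOVA, and the leaflet text and score values are hypothetical placeholders rather than the study's data.

```python
# Illustrative sketch (not the study's code): score leaflet readability with
# the three formulas used in the paper, then compare groups with a one-way
# ANOVA. Assumes the `textstat` and `scipy` packages; all texts and numbers
# below are hypothetical placeholders.
import textstat
from scipy.stats import f_oneway

def readability_scores(text: str) -> dict:
    """Return the three metrics used in the study for one leaflet."""
    return {
        "SMOG": textstat.smog_index(text),             # grade level needed to understand the text
        "FRET": textstat.flesch_reading_ease(text),    # 0-100 scale; higher means easier
        "FKGLT": textstat.flesch_kincaid_grade(text),  # US school grade level
    }

# Hypothetical leaflet snippet for one condition from one source.
sample = "Eczema is a common condition that makes the skin dry, red and itchy."
print(readability_scores(sample))

# Hypothetical SMOG scores for the 10 PILs from three of the five sources
# (ChatGPT-4, one plugin, BAD); the study compared all five groups.
chatgpt = [9.1, 8.7, 10.2, 9.5, 8.9, 9.8, 10.0, 9.3, 8.5, 9.6]
plugin  = [8.2, 7.9, 8.8, 8.4, 8.1, 8.6, 8.9, 8.0, 7.7, 8.3]
bad     = [11.2, 10.8, 11.5, 10.9, 11.0, 10.5, 11.7, 10.6, 11.1, 10.4]

f_stat, p_value = f_oneway(chatgpt, plugin, bad)
print(f"F = {f_stat:.2f}, P = {p_value:.4f}")
```

For reference, the recommended reading age of 11 corresponds roughly to a grade level of about 6 on the SMOG and FKGLT scales, which is the target the generated PILs did not reach.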

Results: The readability scores of PILs generated by ChatGPT-4 and its plugins did not meet the recommended target range. However, some of these PILs demonstrated more favourable mean readability scores compared with those from the BAD, with certain plugins, such as Chat with a Dermatologist, showing significant differences in mean SMOG (P = 0.0005) and mean FKGLT (P = 0.002) scores. Nevertheless, the PILs generated by ChatGPT-4 were found to lack some of the content typically included in BAD PILs.

Conclusions: ChatGPT-4 can produce dermatological PILs free from misleading information, occasionally surpassing BAD PILs in terms of readability. However, these PILs still fall short of being easily understood by the general public, and the content requires rigorous verification by healthcare professionals to ensure reliability and quality.
