Improving Patient Understanding of Glomerular Disease Terms With ChatGPT

Impact Factor 2.2 · CAS Zone 4 (Medicine) · JCR Q2 · Medicine, General & Internal
Yasir H. Abdelgadir, Charat Thongprayoon, Iasmina M. Craici, Wisit Cheungpasitporn, Jing Miao
International Journal of Clinical Practice, 2025(1). DOI: 10.1155/ijcp/9977290. Published 2025-01-10. Full text: https://onlinelibrary.wiley.com/doi/10.1155/ijcp/9977290
Citations: 0

Abstract

Background: Glomerular disease is complex and difficult for patients to understand, as it spans several areas of pathophysiology, immunology, and pharmacology.

Objective: This study explored whether ChatGPT can maintain accuracy while simplifying glomerular disease terms to enhance patient comprehension.

Methods: Sixty-seven terms related to glomerular disease were analyzed using GPT-4 through two distinct queries: one aimed at a general explanation and another tailored for patients with an education level of 8th grade or lower. GPT-4’s accuracy was scored from 1 (incorrect) to 5 (correct and comprehensive). Readability was assessed using the Consensus Reading Grade (CRG) Level, which incorporates seven readability indices, including the Flesch–Kincaid Grade (FKG) and SMOG indices. The Flesch Reading Ease (FRE) score, which ranges from 0 to 100 with higher scores indicating easier-to-read text, was also used to evaluate readability. A paired t-test was conducted to assess differences in accuracy and readability between the two query types.
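The readability measures named above follow standard published formulas. As a minimal sketch (assuming the word, sentence, syllable, and polysyllable counts come from an external text processor, which the study does not specify):

```python
import math

def flesch_reading_ease(words, sentences, syllables):
    # FRE: 0-100 scale; higher scores indicate easier-to-read text
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words, sentences, syllables):
    # FKG: approximates the US school grade level needed to read the text
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

def smog_index(polysyllables, sentences):
    # SMOG: grade estimate from words of 3+ syllables, normalized to 30 sentences
    return 1.0430 * math.sqrt(polysyllables * 30 / sentences) + 3.1291
```

For example, a 100-word, 5-sentence passage with 150 syllables scores roughly FRE 59.6 and FKG 9.9, i.e., about a 9th–10th grade reading level.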

Results: GPT-4’s general explanations of glomerular disease terms averaged a college readability level, with a CRG score of 14.1 and an FKG score of 13.9; the SMOG index also reflected the topic’s complexity, with a score of 11.8. When tailored for patients at or below an 8th-grade reading level, readability improved to an average of 9.7 on the CRG, 8.7 on the FKG, and 7.3 on the SMOG. The FRE score likewise indicated improved readability, rising from 31.6 for general explanations to 63.5 for tailored explanations. However, accuracy in GPT-4’s tailored explanations was significantly lower than in the general explanations (4.2 ± 0.4 versus 4.7 ± 0.3, p < 0.0001).
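The accuracy comparison above uses a paired t-test on per-term scores. A minimal stdlib-only sketch with hypothetical scores (the paper does not publish per-term data):

```python
import math
import statistics

def paired_t(x, y):
    # Paired t-test statistic: t = mean(d) / (stdev(d) / sqrt(n)),
    # where d is the per-item difference between the two conditions
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    return statistics.mean(d) / (statistics.stdev(d) / math.sqrt(n))

# Hypothetical per-term accuracy scores for illustration only
general_scores = [4.8, 4.6, 4.7, 4.9, 4.5]
tailored_scores = [4.2, 4.1, 4.3, 4.4, 4.0]
t_stat = paired_t(general_scores, tailored_scores)  # t ≈ 15.81
```

Because each term is scored under both query types, the paired design controls for per-term difficulty; the p-value would then come from the t distribution with n − 1 degrees of freedom.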

Conclusion: While GPT-4 effectively simplified information about glomerular diseases, it compromised its accuracy in the process. To implement these findings, we suggest pilot studies in clinical settings to assess patient understanding, using feedback from diverse groups to customize content, expanding research to enhance AI accuracy and reduce biases, setting strict ethical guidelines for AI in healthcare, and integrating with health informatics systems to provide tailored educational content to patients. This approach will promote effective and ethical use of AI tools like ChatGPT in patient education, empowering patients to make informed health decisions.


Source journal: CiteScore 5.30 · Self-citation rate 0.00% · Articles published: 274 · Review time: 3–8 weeks
About the journal: IJCP is a general medical journal that gives special priority to work with international appeal. IJCP publishes:
- Editorials (commissioned; peer reviewed at the editor's discretion)
- Perspectives (mostly commissioned; peer reviewed at the editor's discretion)
- Study design and interpretation (always peer reviewed)
- Original data from clinical investigations, in particular primary research papers from RCTs, observational studies, and epidemiological studies; pre-specified sub-analyses; and pooled analyses (always peer reviewed)
- Meta-analyses (always peer reviewed)
- Systematic reviews, which receive special priority from October 2009 (always peer reviewed)
- Non-systematic/narrative reviews, considered from October 2009 only if they include a discrete Methods section that explicitly describes the authors' approach (always peer reviewed)
- 'How to…' papers (always peer reviewed)
- Consensus statements (always peer reviewed)
- Short reports (always peer reviewed)
- Letters (peer reviewed at the editor's discretion)
International scope: IJCP publishes work from investigators globally. Around 30% of IJCP articles list an author from the UK, around 30% from the USA or Canada, around 45% from a European country other than the UK, and around 15% from a country in the Asia-Pacific region.