Improving Patient Understanding of Glomerular Disease Terms With ChatGPT

IF 2.2 · CAS Tier 4 (Medicine) · JCR Q2 (Medicine, General & Internal)
Yasir H. Abdelgadir, Charat Thongprayoon, Iasmina M. Craici, Wisit Cheungpasitporn, Jing Miao
DOI: 10.1155/ijcp/9977290
Journal: International Journal of Clinical Practice, 2025(1)
Published: 2025-01-10 (Journal Article)
Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1155/ijcp/9977290
Citations: 0

Abstract

Background: Glomerular disease is complex and difficult for patients to understand, as it involves various pathophysiology, immunology, and pharmacology areas.

Objective: This study explored whether ChatGPT can maintain accuracy while simplifying glomerular disease terms to enhance patient comprehension.

Methods: Sixty-seven terms related to glomerular disease were analyzed using GPT-4 through two distinct queries: one aimed at a general explanation and the other tailored for patients with an education level of 8th grade or lower. GPT-4's accuracy was scored from 1 (incorrect) to 5 (correct and comprehensive). Readability was assessed using the Consensus Reading Grade (CRG) Level, which incorporates seven readability indices, including the Flesch–Kincaid Grade (FKG) and SMOG indices. The Flesch Reading Ease (FRE) score, ranging from 0 to 100 with higher scores indicating easier-to-read text, was also used to evaluate readability. A paired t-test was conducted to assess differences in accuracy and readability between the two queries.
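The readability indices named above have standard published formulas based on word, sentence, and syllable counts. A minimal sketch of how they are computed (the counts below are hypothetical, for illustration only; the study itself does not report its counting tool):

```python
import math

def flesch_kincaid_grade(words, sentences, syllables):
    # FKG = 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

def flesch_reading_ease(words, sentences, syllables):
    # FRE = 206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words)
    # Higher scores mean easier-to-read text (0-100 scale).
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def smog_index(polysyllables, sentences):
    # SMOG = 1.0430 * sqrt(polysyllables * 30 / sentences) + 3.1291
    # polysyllables = number of words with 3+ syllables.
    return 1.0430 * math.sqrt(polysyllables * 30 / sentences) + 3.1291

# Hypothetical counts for one short explanation:
words, sentences, syllables, polysyllables = 120, 8, 190, 18
print(round(flesch_kincaid_grade(words, sentences, syllables), 1))
print(round(flesch_reading_ease(words, sentences, syllables), 1))
print(round(smog_index(polysyllables, sentences), 1))
```

Both grade-level indices rise with longer sentences and longer words, while FRE falls, which is why the three metrics move together in the results below.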

Results: GPT-4's general explanations of glomerular disease terms averaged a college readability level, as indicated by a CRG score of 14.1 and an FKG score of 13.9. The SMOG index also reflected the topic's complexity, with a score of 11.8. When tailored for patients at or below an 8th-grade reading level, readability improved, averaging 9.7 on the CRG, 8.7 on the FKG, and 7.3 on the SMOG. The FRE score likewise indicated improved readability, rising from 31.6 for general explanations to 63.5 for tailored explanations. However, accuracy in GPT-4's tailored explanations was significantly lower than in the general explanations (4.2 ± 0.4 versus 4.7 ± 0.3, p < 0.0001).
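The accuracy comparison above uses a paired t-test, since each term is rated under both prompts. A minimal stdlib sketch of the statistic on hypothetical per-term accuracy ratings (the study's actual per-term scores are not public):

```python
import math
import statistics

def paired_t(general, tailored):
    """Paired t statistic and degrees of freedom for matched score lists."""
    diffs = [g - t for g, t in zip(general, tailored)]
    n = len(diffs)
    mean_d = statistics.fmean(diffs)
    sd_d = statistics.stdev(diffs)           # sample standard deviation of differences
    t_stat = mean_d / (sd_d / math.sqrt(n))  # compare against t distribution, df = n - 1
    return t_stat, n - 1

# Hypothetical 1-5 accuracy ratings for five terms under each prompt:
general = [5, 5, 4, 5, 4]
tailored = [4, 4, 4, 5, 3]
t_stat, df = paired_t(general, tailored)
print(round(t_stat, 2), df)
```

Pairing by term removes between-term difficulty variation from the comparison, which is why the test can detect the modest 4.7 versus 4.2 accuracy gap with high significance across 67 terms.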

Conclusion: While GPT-4 effectively simplified information about glomerular diseases, it compromised its accuracy in the process. To implement these findings, we suggest pilot studies in clinical settings to assess patient understanding, using feedback from diverse groups to customize content, expanding research to enhance AI accuracy and reduce biases, setting strict ethical guidelines for AI in healthcare, and integrating with health informatics systems to provide tailored educational content to patients. This approach will promote effective and ethical use of AI tools like ChatGPT in patient education, empowering patients to make informed health decisions.


Journal metrics: CiteScore 5.30; self-citation rate 0.00%; articles per year 274; review time 3–8 weeks.
Journal description: IJCP is a general medical journal that gives special priority to work with international appeal. IJCP publishes:
Editorials (commissioned; peer reviewed at the editor's discretion)
Perspectives (mostly commissioned; peer reviewed at the editor's discretion)
Study design and interpretation (always peer reviewed)
Original data from clinical investigations, in particular primary research papers from RCTs, observational studies, and epidemiological studies; pre-specified sub-analyses; pooled analyses (always peer reviewed)
Meta-analyses (always peer reviewed)
Systematic reviews; from October 2009, special priority is given to systematic reviews (always peer reviewed)
Non-systematic/narrative reviews; from October 2009, these are considered only if they include a discrete Methods section that explicitly describes the authors' approach (always peer reviewed)
'How to…' papers (always peer reviewed)
Consensus statements (always peer reviewed)
Short reports (always peer reviewed)
Letters (peer reviewed at the editor's discretion)
International scope: IJCP publishes work from investigators globally. Around 30% of IJCP articles list an author from the UK, around 30% an author from the USA or Canada, around 45% an author from a European country other than the UK, and around 15% an author from a country in the Asia-Pacific region.