Online and ChatGPT-generated patient education materials regarding brain tumor prognosis fail to meet readability standards

IF 1.9 · CAS Tier 4, Medicine · JCR Q3 CLINICAL NEUROLOGY
Ishav Y. Shukla, Matthew Z. Sun
DOI: 10.1016/j.jocn.2025.111410
Journal of Clinical Neuroscience, Volume 138, Article 111410. Published 2025-06-20.
Citations: 0

Abstract

Objective

Online healthcare literature often exceeds the general population’s literacy level. Our study assesses the readability of online and ChatGPT-generated materials on glioblastomas, meningiomas, and pituitary adenomas, comparing readability by tumor type, institutional affiliation, authorship, and source (websites vs. ChatGPT).

Methods

This cross-sectional study involved a Google Chrome search (November 2024) using ‘prognosis of [tumor type],’ with the first 100 English-language, patient-directed results per tumor included. Websites were categorized by tumor, institutional affiliation (university vs. non-affiliated), and authorship (medical-professional reviewed vs. non-reviewed). ChatGPT 4.0 was queried with three standardized questions per tumor, based on the most prevalent content found in patient-facing websites. Five metrics were assessed: Flesch Reading Ease, Flesch-Kincaid Grade Level, Gunning Fog Index, Coleman-Liau Index, and SMOG Index. Comparisons were conducted using Mann-Whitney U tests and t-tests.
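The grade-level metrics named above are computed from published formulas. As an illustration (not the study's own tooling), a minimal Python sketch of Flesch Reading Ease and Flesch-Kincaid Grade Level, using a naive vowel-group syllable heuristic, so scores will differ slightly from dedicated readability software:

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: one syllable per group of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def _counts(text: str):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return len(sentences), len(words), syllables

def flesch_reading_ease(text: str) -> float:
    # FRE = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
    s, w, syl = _counts(text)
    return 206.835 - 1.015 * w / s - 84.6 * syl / w

def flesch_kincaid_grade(text: str) -> float:
    # FKGL = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
    s, w, syl = _counts(text)
    return 0.39 * w / s + 11.8 * syl / w - 15.59
```

Higher FRE means easier text, while higher FKGL means a higher school grade is needed; both depend only on sentence length and syllables per word, which is why dense clinical prose scores poorly.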

Results

No websites or ChatGPT responses met the readability benchmark of a 6th-grade level or below (AMA guideline) or an 8th-grade level or below (NIH guideline). Of the websites, 50.4% were at a 9th–12th grade level, 47.9% at an undergraduate level, and 1.7% at a graduate level. Websites reviewed by medical professionals had higher Flesch Reading Ease scores (p = 0.03) and lower Coleman-Liau Index scores (p = 0.009) than non-reviewed websites. Among ChatGPT responses, 93.3% were at a graduate level. ChatGPT responses had lower readability than websites across all metrics (p < 0.001).
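The Mann-Whitney U statistic behind these nonparametric comparisons is computed from pooled ranks. A minimal self-contained sketch with synthetic inputs (not the study's data; libraries such as SciPy provide this as `scipy.stats.mannwhitneyu`):

```python
def mann_whitney_u(x: list, y: list) -> float:
    """Return U1 for sample x vs sample y, averaging ranks for ties."""
    pooled = sorted((v, i) for i, v in enumerate(x + y))
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        # Find the run of tied values starting at i.
        j = i
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[pooled[k][1]] = avg_rank
        i = j + 1
    r1 = sum(ranks[i] for i in range(len(x)))  # rank sum of sample x
    return r1 - len(x) * (len(x) + 1) / 2
```

U ranges from 0 (every x below every y) to len(x)*len(y) (the reverse); the p-values reported above come from comparing U to its null distribution.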

Conclusion

Online and ChatGPT-generated neuro-oncology materials exceed recommended readability standards, potentially hindering patients’ ability to make informed decisions. Future efforts should focus on standardizing readability guidelines, refining AI-generated content, incorporating professional oversight consistently, and improving the accessibility of online neuro-oncology materials.
Source journal: Journal of Clinical Neuroscience (Medicine – Clinical Neurology)
CiteScore: 4.50
Self-citation rate: 0.00%
Articles published per year: 402
Review time: 40 days
Journal introduction: This international journal, the Journal of Clinical Neuroscience, publishes articles on clinical neurosurgery and neurology and the related neurosciences, such as neuropathology, neuroradiology, neuro-ophthalmology and neurophysiology. The journal has a broad international perspective and emphasises the advances occurring in Asia, the Pacific Rim region, Europe and North America. It acts as a focus for publication of major clinical and laboratory research, and also publishes solicited manuscripts on specific subjects from experts, case reports and other information of interest to clinicians working in the clinical neurosciences.