10. Bridging health literacy gaps in spine care: using ChatGPT-4 to improve patient education materials

Joseph Nassar BS, Michael Farias BS, Lama Ammar MSc, Rhea Rasquinha BS, Andrew Xu BA, Manjot Singh BS, Mohammad Daher BS, Krish Shah BS, Marco Kaper BA, Michelle Jun BS, Daniel Alsoof MBBS, Bassel G. Diebo MD, Alan H. Daniels MD
{"title":"10. Bridging health literacy gaps in spine care: using ChatGPT-4 to improve patient education materials","authors":"Joseph Nassar BS ,&nbsp;Michael Farias BS ,&nbsp;Lama Ammar MSc ,&nbsp;Rhea Rasquinha BS ,&nbsp;Andrew Xu BA ,&nbsp;Manjot Singh BS ,&nbsp;Mohammad Daher BS ,&nbsp;Krish Shah BS ,&nbsp;Marco Kaper BA ,&nbsp;Michelle Jun BS ,&nbsp;Daniel Alsoof MBBS ,&nbsp;Bassel G. Diebo MD ,&nbsp;Alan H Daniels MD","doi":"10.1016/j.spinee.2025.08.192","DOIUrl":null,"url":null,"abstract":"<div><h3>BACKGROUND CONTEXT</h3><div>Patient education materials (PEMs) are essential for improving health literacy, patient engagement, and treatment adherence. However, many exceed recommended readability levels, disadvantaging individuals with limited health literacy.</div></div><div><h3>PURPOSE</h3><div>To evaluate the readability of spine-related PEMs from the American Academy of Orthopaedic Surgeons (AAOS), North American Spine Society (NASS), and American Association of Neurological Surgeons (AANS) and examine the potential of artificial intelligence (AI) to optimize PEMs for better comprehension.</div></div><div><h3>STUDY DESIGN/SETTING</h3><div>Readability analysis of spine-related PEMs with AI-based optimization.</div></div><div><h3>PATIENT SAMPLE</h3><div>A total of 146 spine-related PEMs from the AAOS, NASS, and AANS websites.</div></div><div><h3>OUTCOME MEASURES</h3><div>Readability scores including Flesch-Kincaid Grade Level (FKGL) and Simple Measure of Gobbledygook (SMOG) Index, linguistic complexity, passive voice use, and content accuracy.</div></div><div><h3>METHODS</h3><div>A total of 146 spine-related PEMs from AAOS, NASS, and AANS websites were analyzed. Readability was assessed using the FKGL and SMOG Index scores, along with other linguistic metrics such as language complexity and passive voice use. ChatGPT-4.0 was utilized to revise PEMs to a 6th-grade reading level, and post-revision readability was reassessed. Test-retest reliability was evaluated, and paired t-tests compared the readability scores of original and AI-modified PEMs.</div></div><div><h3>RESULTS</h3><div>Original PEMs had a mean FKGL of 10.2±2.6, significantly exceeding both the recommended 6th-grade reading level and the US average 8thgrade reading level (p&lt;0.05). AI-generated revisions significantly improved readability, reducing the mean FKGL to 6.6±1.3 (p&lt;0.05). ChatGPT-4.0 also enhanced other readability metrics, including SMOG Index, language complexity, and passive voice use, while preserving accuracy and adequate detail. Excellent test-retest reliability was observed across all metrics (ICC range: 0.91–0.98).</div></div><div><h3>CONCLUSIONS</h3><div>Spine-related PEMs from AAOS, NASS, and AANS remain overly complex despite minor improvements over time. ChatGPT-4.0 demonstrates strong potential to enhance PEM accessibility while maintaining content integrity. 
Future efforts should integrate AI tools with visual aids and user-friendly platforms to create inclusive and comprehensible PEMs, addressing diverse patient needs and improving healthcare delivery.</div></div><div><h3>FDA Device/Drug Status</h3><div>This abstract does not discuss or include any applicable devices or drugs.</div></div>","PeriodicalId":49484,"journal":{"name":"Spine Journal","volume":"25 11","pages":"Page S7"},"PeriodicalIF":4.7000,"publicationDate":"2025-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Spine Journal","FirstCategoryId":"3","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1529943025005728","RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"CLINICAL NEUROLOGY","Score":null,"Total":0}
引用次数: 0

Abstract

BACKGROUND CONTEXT

Patient education materials (PEMs) are essential for improving health literacy, patient engagement, and treatment adherence. However, many exceed recommended readability levels, disadvantaging individuals with limited health literacy.

PURPOSE

To evaluate the readability of spine-related PEMs from the American Academy of Orthopaedic Surgeons (AAOS), North American Spine Society (NASS), and American Association of Neurological Surgeons (AANS) and examine the potential of artificial intelligence (AI) to optimize PEMs for better comprehension.

STUDY DESIGN/SETTING

Readability analysis of spine-related PEMs with AI-based optimization.

PATIENT SAMPLE

A total of 146 spine-related PEMs from the AAOS, NASS, and AANS websites.

OUTCOME MEASURES

Readability scores including Flesch-Kincaid Grade Level (FKGL) and Simple Measure of Gobbledygook (SMOG) Index, linguistic complexity, passive voice use, and content accuracy.
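For reference, both indices map sentence- and word-level counts onto US school grade levels; the standard published definitions (not restated in the abstract) are:

    \mathrm{FKGL} = 0.39\,\frac{\text{total words}}{\text{total sentences}} + 11.8\,\frac{\text{total syllables}}{\text{total words}} - 15.59

    \mathrm{SMOG} = 1.0430\,\sqrt{\text{polysyllables} \times \frac{30}{\text{sentences}}} + 3.1291

A text scoring an FKGL of 10.2, for example, reads at roughly a 10th-grade level.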

METHODS

A total of 146 spine-related PEMs from the AAOS, NASS, and AANS websites were analyzed. Readability was assessed using FKGL and SMOG Index scores, along with other linguistic metrics such as language complexity and passive voice use. ChatGPT-4.0 was used to revise each PEM to a 6th-grade reading level, and post-revision readability was reassessed. Test-retest reliability was evaluated, and paired t-tests compared the readability scores of the original and AI-revised PEMs.
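A minimal sketch of this score-revise-rescore pipeline in Python is shown below. The textstat package implements both indices; the OpenAI client call, model name, prompt wording, and file name are assumptions for illustration, since the abstract does not specify how the model was prompted.

    # Sketch of the pipeline described above. Assumes `pip install textstat openai`
    # and OPENAI_API_KEY set in the environment.
    import textstat
    from openai import OpenAI

    client = OpenAI()

    def readability(text: str) -> dict:
        """Score a PEM on the two grade-level indices used in the study."""
        return {
            "fkgl": textstat.flesch_kincaid_grade(text),
            "smog": textstat.smog_index(text),
        }

    def revise_to_sixth_grade(pem_text: str) -> str:
        """Ask the model to rewrite a PEM at a 6th-grade reading level."""
        response = client.chat.completions.create(
            model="gpt-4",  # stand-in; the exact model/interface is not given in the abstract
            messages=[
                {"role": "system",
                 "content": "Rewrite this patient education material at a 6th-grade "
                            "reading level. Preserve all medical facts; prefer short "
                            "sentences and active voice."},
                {"role": "user", "content": pem_text},
            ],
        )
        return response.choices[0].message.content

    original = open("pem_laminectomy.txt").read()  # hypothetical input file
    before = readability(original)
    after = readability(revise_to_sixth_grade(original))
    print(before, after)

Running the readability() scorer twice on the same revised output is also how a test-retest check of the metrics themselves could be set up.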

RESULTS

Original PEMs had a mean FKGL of 10.2±2.6, significantly exceeding both the recommended 6th-grade reading level and the average US 8th-grade reading level (p<0.05). AI-generated revisions significantly improved readability, reducing the mean FKGL to 6.6±1.3 (p<0.05). ChatGPT-4.0 also improved the other readability metrics, including the SMOG Index, language complexity, and passive voice use, while preserving accuracy and adequate detail. Excellent test-retest reliability was observed across all metrics (ICC range: 0.91–0.98).
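A sketch of the statistical comparison, on simulated scores that merely match the reported means and SDs (the study's per-document scores are not available): scipy's ttest_rel gives the paired t-test, and pingouin's intraclass_corr is one way to compute test-retest ICCs.

    # Illustrative only: scores are drawn at random to match the reported
    # summary statistics, so the numbers will not reproduce the study.
    import numpy as np
    import pandas as pd
    import pingouin as pg
    from scipy.stats import ttest_rel

    rng = np.random.default_rng(42)
    n = 146
    fkgl_original = rng.normal(10.2, 2.6, n)  # simulated: reported mean ± SD
    fkgl_revised = rng.normal(6.6, 1.3, n)

    t, p = ttest_rel(fkgl_original, fkgl_revised)  # paired t-test
    print(f"paired t = {t:.2f}, p = {p:.2g}")

    # Test-retest: rescore the same PEMs a second time and compute the ICC,
    # treating each scoring run as a "rater" of each document.
    rescored = fkgl_original + rng.normal(0, 0.2, n)  # simulated second run
    long = pd.DataFrame({
        "pem": np.tile(np.arange(n), 2),
        "run": np.repeat([1, 2], n),
        "fkgl": np.concatenate([fkgl_original, rescored]),
    })
    icc = pg.intraclass_corr(data=long, targets="pem", raters="run", ratings="fkgl")
    print(icc[["Type", "ICC"]])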

CONCLUSIONS

Spine-related PEMs from AAOS, NASS, and AANS remain overly complex despite minor improvements over time. ChatGPT-4.0 demonstrates strong potential to enhance PEM accessibility while maintaining content integrity. Future efforts should integrate AI tools with visual aids and user-friendly platforms to create inclusive and comprehensible PEMs, addressing diverse patient needs and improving healthcare delivery.

FDA Device/Drug Status

This abstract does not discuss or include any applicable devices or drugs.