Bridging Health Literacy Gaps in Spine Care: Using ChatGPT-4o to Improve Patient-Education Materials

Joseph E Nassar, Michael J Farias, Lama A Ammar, Rhea Rasquinha, Andrew Y Xu, Manjot Singh, Daniel Alsoof, Bassel G Diebo, Alan H Daniels
{"title":"弥合脊柱护理中的健康素养差距:使用chatgpt - 40改进患者教育材料。","authors":"Joseph E Nassar,Michael J Farias,Lama A Ammar,Rhea Rasquinha,Andrew Y Xu,Manjot Singh,Daniel Alsoof,Bassel G Diebo,Alan H Daniels","doi":"10.2106/jbjs.24.01484","DOIUrl":null,"url":null,"abstract":"BACKGROUND\r\nPatient-education materials (PEMs) are essential to improve health literacy, engagement, and treatment adherence, yet many exceed the recommended readability levels. Therefore, individuals with limited health literacy are at a disadvantage. This study evaluated the readability of spine-related PEMs from the American Academy of Orthopaedic Surgeons (AAOS), the North American Spine Society (NASS), and the American Association of Neurological Surgeons (AANS), and examined the potential of artificial intelligence (AI) in optimizing PEMs for improved patient comprehension.\r\n\r\nMETHODS\r\nA total of 146 spine-related PEMs from the AAOS, NASS, and AANS websites were analyzed. Readability was assessed using the Flesch-Kincaid Grade Level (FKGL) and Simple Measure of Gobbledygook (SMOG) Index scores, as well as other metrics, including language complexity and use of the passive voice. ChatGPT-4o was used to revise the PEMs to a sixth-grade reading level, and post-revision readability was assessed. Test-retest reliability was evaluated, and paired t tests were used to compare the readability scores of the original and AI-modified PEMs.\r\n\r\nRESULTS\r\nThe original PEMs had a mean FKGL of 10.2 ± 2.6, which significantly exceeded both the recommended sixth-grade reading level and the average U.S. eighth-grade reading level (p < 0.05). ChatGPT-4o generated articles with a significantly reduced mean FKGL of 6.6 ± 1.3 (p < 0.05). ChatGPT-4o also improved other readability metrics, including the SMOG Index score, language complexity, and use of the passive voice, while maintaining accuracy and adequate detail. Excellent test-retest reliability was observed across all of the metrics (intraclass correlation coefficient [ICC] range, 0.91 to 0.98).\r\n\r\nCONCLUSIONS\r\nSpine-related PEMs from the AAOS, the NASS, and the AANS remain excessively complex, despite minor improvements to readability over the years. ChatGPT-4o demonstrated the potential to enhance PEM readability while maintaining content quality. Future efforts should integrate AI tools with visual aids and user-friendly platforms to create inclusive and comprehensible PEMs to address diverse patient needs and improve health-care delivery.","PeriodicalId":22625,"journal":{"name":"The Journal of Bone & Joint Surgery","volume":"37 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2025-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Bridging Health Literacy Gaps in Spine Care: Using ChatGPT-4o to Improve Patient-Education Materials.\",\"authors\":\"Joseph E Nassar,Michael J Farias,Lama A Ammar,Rhea Rasquinha,Andrew Y Xu,Manjot Singh,Daniel Alsoof,Bassel G Diebo,Alan H Daniels\",\"doi\":\"10.2106/jbjs.24.01484\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"BACKGROUND\\r\\nPatient-education materials (PEMs) are essential to improve health literacy, engagement, and treatment adherence, yet many exceed the recommended readability levels. Therefore, individuals with limited health literacy are at a disadvantage. 
This study evaluated the readability of spine-related PEMs from the American Academy of Orthopaedic Surgeons (AAOS), the North American Spine Society (NASS), and the American Association of Neurological Surgeons (AANS), and examined the potential of artificial intelligence (AI) in optimizing PEMs for improved patient comprehension.\\r\\n\\r\\nMETHODS\\r\\nA total of 146 spine-related PEMs from the AAOS, NASS, and AANS websites were analyzed. Readability was assessed using the Flesch-Kincaid Grade Level (FKGL) and Simple Measure of Gobbledygook (SMOG) Index scores, as well as other metrics, including language complexity and use of the passive voice. ChatGPT-4o was used to revise the PEMs to a sixth-grade reading level, and post-revision readability was assessed. Test-retest reliability was evaluated, and paired t tests were used to compare the readability scores of the original and AI-modified PEMs.\\r\\n\\r\\nRESULTS\\r\\nThe original PEMs had a mean FKGL of 10.2 ± 2.6, which significantly exceeded both the recommended sixth-grade reading level and the average U.S. eighth-grade reading level (p < 0.05). ChatGPT-4o generated articles with a significantly reduced mean FKGL of 6.6 ± 1.3 (p < 0.05). ChatGPT-4o also improved other readability metrics, including the SMOG Index score, language complexity, and use of the passive voice, while maintaining accuracy and adequate detail. Excellent test-retest reliability was observed across all of the metrics (intraclass correlation coefficient [ICC] range, 0.91 to 0.98).\\r\\n\\r\\nCONCLUSIONS\\r\\nSpine-related PEMs from the AAOS, the NASS, and the AANS remain excessively complex, despite minor improvements to readability over the years. ChatGPT-4o demonstrated the potential to enhance PEM readability while maintaining content quality. Future efforts should integrate AI tools with visual aids and user-friendly platforms to create inclusive and comprehensible PEMs to address diverse patient needs and improve health-care delivery.\",\"PeriodicalId\":22625,\"journal\":{\"name\":\"The Journal of Bone & Joint Surgery\",\"volume\":\"37 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-06-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"The Journal of Bone & Joint Surgery\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.2106/jbjs.24.01484\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"The Journal of Bone & Joint Surgery","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2106/jbjs.24.01484","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

BACKGROUND
Patient-education materials (PEMs) are essential to improve health literacy, engagement, and treatment adherence, yet many exceed the recommended readability levels. Therefore, individuals with limited health literacy are at a disadvantage. This study evaluated the readability of spine-related PEMs from the American Academy of Orthopaedic Surgeons (AAOS), the North American Spine Society (NASS), and the American Association of Neurological Surgeons (AANS), and examined the potential of artificial intelligence (AI) in optimizing PEMs for improved patient comprehension.

METHODS
A total of 146 spine-related PEMs from the AAOS, NASS, and AANS websites were analyzed. Readability was assessed using the Flesch-Kincaid Grade Level (FKGL) and Simple Measure of Gobbledygook (SMOG) Index scores, as well as other metrics, including language complexity and use of the passive voice. ChatGPT-4o was used to revise the PEMs to a sixth-grade reading level, and post-revision readability was assessed. Test-retest reliability was evaluated, and paired t tests were used to compare the readability scores of the original and AI-modified PEMs.

RESULTS
The original PEMs had a mean FKGL of 10.2 ± 2.6, which significantly exceeded both the recommended sixth-grade reading level and the average U.S. eighth-grade reading level (p < 0.05). ChatGPT-4o generated articles with a significantly reduced mean FKGL of 6.6 ± 1.3 (p < 0.05). ChatGPT-4o also improved other readability metrics, including the SMOG Index score, language complexity, and use of the passive voice, while maintaining accuracy and adequate detail. Excellent test-retest reliability was observed across all of the metrics (intraclass correlation coefficient [ICC] range, 0.91 to 0.98).

CONCLUSIONS
Spine-related PEMs from the AAOS, the NASS, and the AANS remain excessively complex, despite minor improvements to readability over the years. ChatGPT-4o demonstrated the potential to enhance PEM readability while maintaining content quality. Future efforts should integrate AI tools with visual aids and user-friendly platforms to create inclusive and comprehensible PEMs to address diverse patient needs and improve health-care delivery.
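The abstract does not specify the tooling used to score readability, but the two named indices have standard published formulas: FKGL = 0.39 (words/sentences) + 11.8 (syllables/words) − 15.59, and SMOG = 1.0430 √(polysyllables × 30/sentences) + 3.1291. The sketch below illustrates how such a scoring-and-comparison pipeline could be assembled; the vowel-group syllable counter (`naive_syllables`) is a crude hypothetical helper, the input scores for the paired t test are toy values rather than study data, and none of this should be read as the authors' actual implementation.

```python
# Minimal sketch of the METHODS pipeline: score each PEM with FKGL and SMOG,
# then compare original vs. AI-revised scores with a paired t test.
import re
from scipy.stats import ttest_rel

def naive_syllables(word: str) -> int:
    """Rough syllable count via vowel groups; validated readability tools
    use dictionaries or far better heuristics (assumption for illustration)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> dict:
    """Compute FKGL and SMOG from sentence, word, and syllable counts."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(naive_syllables(w) for w in words)
    polysyllables = sum(1 for w in words if naive_syllables(w) >= 3)
    n_sent, n_words = len(sentences), len(words)
    # Flesch-Kincaid Grade Level (standard published coefficients)
    fkgl = 0.39 * (n_words / n_sent) + 11.8 * (syllables / n_words) - 15.59
    # SMOG Index, also expressed as a U.S. grade level
    smog = 1.0430 * (polysyllables * (30 / n_sent)) ** 0.5 + 3.1291
    return {"FKGL": fkgl, "SMOG": smog}

# Paired comparison of original vs. revised FKGL scores (toy values)
original_fkgl = [10.4, 9.8, 12.1, 8.9, 11.0]
revised_fkgl = [6.5, 6.9, 7.2, 6.1, 6.8]
t_stat, p_value = ttest_rel(original_fkgl, revised_fkgl)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
```

A paired test is the right choice here because each revised document is matched to its own original, so the comparison is within documents rather than between two independent groups.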