Joseph Nassar BS , Michael Farias BS , Lama Ammar MSc , Rhea Rasquinha BS , Andrew Xu BA , Manjot Singh BS , Mohammad Daher BS , Krish Shah BS , Marco Kaper BA , Michelle Jun BS , Daniel Alsoof MBBS , Bassel G. Diebo MD , Alan H Daniels MD
The Spine Journal, Volume 25, Issue 11, Page S7. Published October 8, 2025. DOI: 10.1016/j.spinee.2025.08.192. Available at https://www.sciencedirect.com/science/article/pii/S1529943025005728
10. Bridging health literacy gaps in spine care: using ChatGPT-4 to improve patient education materials
BACKGROUND CONTEXT
Patient education materials (PEMs) are essential for improving health literacy, patient engagement, and treatment adherence. However, many exceed recommended readability levels, disadvantaging individuals with limited health literacy.
PURPOSE
To evaluate the readability of spine-related PEMs from the American Academy of Orthopaedic Surgeons (AAOS), North American Spine Society (NASS), and American Association of Neurological Surgeons (AANS) and examine the potential of artificial intelligence (AI) to optimize PEMs for better comprehension.
STUDY DESIGN/SETTING
Readability analysis of spine-related PEMs with AI-based optimization.
PATIENT SAMPLE
A total of 146 spine-related PEMs from the AAOS, NASS, and AANS websites.
OUTCOME MEASURES
Readability scores including Flesch-Kincaid Grade Level (FKGL) and Simple Measure of Gobbledygook (SMOG) Index, linguistic complexity, passive voice use, and content accuracy.
METHODS
A total of 146 spine-related PEMs from AAOS, NASS, and AANS websites were analyzed. Readability was assessed using the FKGL and SMOG Index scores, along with other linguistic metrics such as language complexity and passive voice use. ChatGPT-4.0 was utilized to revise PEMs to a 6th-grade reading level, and post-revision readability was reassessed. Test-retest reliability was evaluated, and paired t-tests compared the readability scores of original and AI-modified PEMs.
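The two readability formulas named above can be sketched as follows. This is a minimal illustration of the standard FKGL and SMOG equations, not the authors' actual tooling (which the abstract does not specify); the syllable counter is a rough vowel-group heuristic, whereas validated readability software uses dictionary-based counts.

```python
import math
import re

def count_syllables(word: str) -> int:
    """Rough heuristic: count vowel groups, discounting a trailing silent 'e'."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def fkgl(text: str) -> float:
    """Flesch-Kincaid Grade Level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / len(sentences) + 11.8 * syllables / len(words) - 15.59

def smog(text: str) -> float:
    """SMOG Index: 1.0430 * sqrt(polysyllables * 30/sentences) + 3.1291,
    where polysyllables are words of 3+ syllables."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    poly = sum(1 for w in words if count_syllables(w) >= 3)
    return 1.0430 * math.sqrt(poly * 30 / len(sentences)) + 3.1291
```

Both formulas return an approximate US school grade, which is why a target of "6th-grade reading level" maps directly to a numeric score around 6.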
RESULTS
Original PEMs had a mean FKGL of 10.2±2.6, significantly exceeding both the recommended 6th-grade reading level and the US average 8th-grade reading level (p<0.05). AI-generated revisions significantly improved readability, reducing the mean FKGL to 6.6±1.3 (p<0.05). ChatGPT-4.0 also enhanced other readability metrics, including the SMOG Index, language complexity, and passive voice use, while preserving accuracy and adequate detail. Excellent test-retest reliability was observed across all metrics (ICC range: 0.91–0.98).
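The paired t-test used to compare original and AI-modified scores can be sketched as below. The per-document scores here are hypothetical stand-ins, not the study's data; the point is that pairing each original document with its own revision and testing the differences is what makes the 10.2 → 6.6 drop statistically assessable.

```python
import math
from statistics import mean, stdev

def paired_t(before: list, after: list) -> float:
    """Paired t statistic: mean of the pairwise differences
    divided by its standard error (sd of differences / sqrt(n))."""
    diffs = [b - a for b, a in zip(before, after)]
    return mean(diffs) / (stdev(diffs) / math.sqrt(len(diffs)))

# Hypothetical per-document FKGL scores (illustrative only, NOT the study's data):
original = [10.4, 9.1, 12.0, 8.7, 11.3]   # before revision
revised  = [6.5, 6.9, 7.2, 5.8, 6.7]      # after ChatGPT-4.0 revision
t = paired_t(original, revised)  # a large positive t indicates revised scores are lower
```

With a real sample of 146 paired documents, the resulting t statistic would be compared against the t distribution with n−1 degrees of freedom to obtain the reported p<0.05.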
CONCLUSIONS
Spine-related PEMs from AAOS, NASS, and AANS remain overly complex despite minor improvements over time. ChatGPT-4.0 demonstrates strong potential to enhance PEM accessibility while maintaining content integrity. Future efforts should integrate AI tools with visual aids and user-friendly platforms to create inclusive and comprehensible PEMs, addressing diverse patient needs and improving healthcare delivery.
FDA Device/Drug Status
This abstract does not discuss or include any applicable devices or drugs.
Journal Introduction:
The Spine Journal, the official journal of the North American Spine Society, is an international, multidisciplinary journal that publishes original, peer-reviewed articles on research and treatment related to the spine and spine care, including basic science and clinical investigations. It is a condition of publication that manuscripts submitted to The Spine Journal have not been published and will not be simultaneously submitted or published elsewhere. The Spine Journal also publishes major reviews of specific topics by acknowledged authorities, technical notes, teaching editorials, and other special features. Letters to the Editor-in-Chief are encouraged.