{"title":"The effects of ChatGPT on patient education of knee osteoarthritis: a preliminary study of 60 cases.","authors":"Yuanmeng Yang, Junqing Lin, Jinshan Zhang","doi":"10.1097/JS9.0000000000002494","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>ChatGPT powered by OpenAI is a large language model that offers a potential method for patient education. Whether patients with knee osteoarthritis (KOA) can benefit from patient education via ChatGPT has not been sufficiently investigated.</p><p><strong>Methods: </strong>We enrolled 60 participants enrolled from 1 January 2024 to 1 September 2024, who had clinically diagnosed KOA for the first time. Participants were excluded from analyses if they had post-traumatic osteoarthritis and a history of knee surgery. Participants received physician education ( n = 18), free education with ChatGPT ( n = 21), or supervised education with ChatGPT ( n = 21) with a pre-defined outline (five questions for reference). The primary outcome was the physician-rated patient knowledge level on KOA measured by a visual analogue scale (VAS, 0-100 mm). We also evaluated all answers from ChatGPT via VAS rating.</p><p><strong>Results: </strong>Patients receiving free education with ChatGPT asked substantially more questions compared with those patients who were given a structured question outline (17.0 ± 9.3 versus 10.3 ± 7.6, P < 0.001). With the outline given to patients, ChatGPT responses in the supervised education group gave higher-quality answers compared with the answers from the group with free education (92.1 ± 4.3 versus 81.4 ± 10.4, P = 0.001). 
Finally, the supervised education with ChatGPT group achieved similar education effect (knowledge level, 95.3 ± 4.7) compared with the physician education group (95.6 ± 5.3), while the free education with ChatGPT group had a substantially lower knowledge level (82.1 ± 12.3, P < 0.001).</p><p><strong>Conclusion: </strong>Supervised education with ChatGPT using structured questions achieved comparable patient education outcomes to physician education in individuals with KOA. In contrast, free education with ChatGPT resulted in relatively lower knowledge levels and reduced answer quality, highlighting the need for caution in unsupervised artificial intelligence (AI) use. This study provides preliminary real-world evidence supporting the responsible use of AI tools like ChatGPT in patient education, particularly when guided by a pre-defined question outline.</p>","PeriodicalId":14401,"journal":{"name":"International journal of surgery","volume":" ","pages":"9753-9756"},"PeriodicalIF":10.1000,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12695326/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International journal of surgery","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1097/JS9.0000000000002494","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/9/23 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"SURGERY","Score":null,"Total":0}
Citations: 0
Abstract
Background: ChatGPT powered by OpenAI is a large language model that offers a potential method for patient education. Whether patients with knee osteoarthritis (KOA) can benefit from patient education via ChatGPT has not been sufficiently investigated.
Methods: We enrolled 60 participants from 1 January 2024 to 1 September 2024, all with a first clinical diagnosis of KOA. Participants were excluded from analyses if they had post-traumatic osteoarthritis or a history of knee surgery. Participants received physician education (n = 18), free education with ChatGPT (n = 21), or supervised education with ChatGPT (n = 21) using a pre-defined outline (five questions for reference). The primary outcome was the physician-rated patient knowledge level of KOA, measured on a visual analogue scale (VAS, 0-100 mm). We also rated all ChatGPT answers on the same VAS.
Results: Patients receiving free education with ChatGPT asked substantially more questions than those given a structured question outline (17.0 ± 9.3 versus 10.3 ± 7.6, P < 0.001). When patients followed the outline, ChatGPT gave higher-quality answers in the supervised education group than in the free education group (92.1 ± 4.3 versus 81.4 ± 10.4, P = 0.001). Finally, the supervised education with ChatGPT group achieved an education effect similar to that of the physician education group (knowledge level, 95.3 ± 4.7 versus 95.6 ± 5.3), while the free education with ChatGPT group had a substantially lower knowledge level (82.1 ± 12.3, P < 0.001).
Conclusion: Supervised education with ChatGPT using structured questions achieved comparable patient education outcomes to physician education in individuals with KOA. In contrast, free education with ChatGPT resulted in relatively lower knowledge levels and reduced answer quality, highlighting the need for caution in unsupervised artificial intelligence (AI) use. This study provides preliminary real-world evidence supporting the responsible use of AI tools like ChatGPT in patient education, particularly when guided by a pre-defined question outline.
Journal overview:
The International Journal of Surgery (IJS) has a broad scope, encompassing all surgical specialties. Its primary objective is to facilitate the exchange of crucial ideas and lines of thought between and across these specialties. By doing so, the journal aims to counter the growing trend of increasing sub-specialization, which can result in "tunnel vision" and the isolation of significant surgical advancements within specific specialties.