The effects of ChatGPT on patient education of knee osteoarthritis: a preliminary study of 60 cases
Yuanmeng Yang, Junqing Lin, Jinshan Zhang
International Journal of Surgery, published 2025-05-22. DOI: 10.1097/JS9.0000000000002494
Abstract
Background: ChatGPT, powered by OpenAI, is a large language model that offers a potential method for patient education. Whether patients with knee osteoarthritis (KOA) can benefit from patient education via ChatGPT has not been sufficiently investigated.
Methods: We enrolled 60 participants, recruited from 1 January 2024 to 1 September 2024, who had clinically diagnosed KOA for the first time. Participants were excluded from the analyses if they had post-traumatic osteoarthritis or a history of knee surgery. Participants received physician education (n = 18), free education with ChatGPT (n = 21), or supervised education with ChatGPT using a pre-defined outline of 5 reference questions (n = 21). The primary outcome was the physician-rated patient knowledge level on KOA, measured on a visual analogue scale (VAS, 0-100 mm). We also evaluated all answers from ChatGPT by VAS rating.
Results: Patients receiving free education with ChatGPT asked substantially more questions than patients given a pre-defined outline (17.0 ± 9.3 versus 10.3 ± 7.6, P < 0.001). When patients followed the outline, ChatGPT gave higher-quality answers than in the free education group (92.1 ± 4.3 versus 81.4 ± 10.4, P = 0.001). Finally, the supervised education group achieved an educational effect similar to that of the physician education group (knowledge level, 95.3 ± 4.7 versus 95.6 ± 5.3), while the free education group had a substantially lower knowledge level (82.1 ± 12.3, P < 0.001).
Conclusion: Patient education by ChatGPT guided by pre-structured questions achieved patient education on KOA comparable to that delivered by physicians. Free-form patient education with ChatGPT should currently be approached with caution, given the relatively lower knowledge level attained and the potentially lower quality of answers.
Journal overview:
The International Journal of Surgery (IJS) has a broad scope, encompassing all surgical specialties. Its primary objective is to facilitate the exchange of crucial ideas and lines of thought between and across these specialties. By doing so, the journal aims to counter the growing trend toward increasing sub-specialization, which can result in "tunnel vision" and the isolation of significant surgical advancements within specific specialties.