The effects of ChatGPT on patient education of knee osteoarthritis: a preliminary study of 60 cases.

IF 12.5 · Medicine, Zone 2 · Q1 (Surgery)
Yuanmeng Yang, Junqing Lin, Jinshan Zhang
{"title":"The effects of ChatGPT on patient education of knee osteoarthritis: a preliminary study of 60 cases.","authors":"Yuanmeng Yang, Junqing Lin, Jinshan Zhang","doi":"10.1097/JS9.0000000000002494","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>ChatGPT powered by OpenAI is a large language model that offers a potential method for patient education. Whether patients with knee osteoarthritis (KOA) can benefit from patient education via ChatGTP has not been sufficiently investigated.</p><p><strong>Methods: </strong>We enrolled 60 participants enrolled from 1 January 2024 to 1 September 2024 who had clinically diagnosed KOA for the first time. Participants were excluded from analyses if they post-traumatic osteoarthritis and history of knee surgery. Participants received physician education (n = 18), free education with ChatGPT (n = 21), or supervised education (n = 21) with ChatGPT with a pre-defined outline (5 questions for reference). The primary outcome was the physician-rated patient knowledge level on KOA measured by a visual analogue scale (VAS, 0-100 mm). We also evaluated all answers from ChatGPT via VAS rating.</p><p><strong>Results: </strong>Patients receiving free education with ChatGPT asked substantially more questions compared with those patients who were given a pre-defined outline (17.0 ± 9.3 versus 10.3 ± 7.6, P < 0.001). With the outline given to patients, ChatGPT gave higher-quality answers compared with the answers from the group with free education (92.1 ± 4.3 versus 81.4 ± 10.4, P = 0.001). 
Finally, the supervised education group achieved similar education effect (knowledge level, 95.3 ± 4.7) compared with physician education group (95.6 ± 5.3) while the free education group had a substantially lower knowledge level (82.1 ± 12.3, P < 0.001).</p><p><strong>Conclusion: </strong>Patient education by ChatGPT with pre-structured questions could achieve good patient education on KOA compared with patient education by physicians. Free patient education in the current stage should be cautious, considering the relative lower knowledge level and potential lower quality of answers.</p>","PeriodicalId":14401,"journal":{"name":"International journal of surgery","volume":" ","pages":""},"PeriodicalIF":12.5000,"publicationDate":"2025-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International journal of surgery","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1097/JS9.0000000000002494","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"SURGERY","Score":null,"Total":0}
引用次数: 0

Abstract

Background: ChatGPT, powered by OpenAI, is a large language model that offers a potential method for patient education. Whether patients with knee osteoarthritis (KOA) can benefit from patient education via ChatGPT has not been sufficiently investigated.

Methods: We enrolled 60 participants from 1 January 2024 to 1 September 2024 who had received a first clinical diagnosis of KOA. Participants were excluded from the analyses if they had post-traumatic osteoarthritis or a history of knee surgery. Participants received physician education (n = 18), free education with ChatGPT (n = 21), or supervised education with ChatGPT (n = 21) following a pre-defined outline (five reference questions). The primary outcome was the physician-rated patient knowledge level of KOA, measured on a visual analogue scale (VAS, 0-100 mm). We also rated all of ChatGPT's answers on the VAS.

Results: Patients receiving free education with ChatGPT asked substantially more questions than those given a pre-defined outline (17.0 ± 9.3 versus 10.3 ± 7.6, P < 0.001). When patients followed the outline, ChatGPT gave higher-quality answers than it did in the free-education group (92.1 ± 4.3 versus 81.4 ± 10.4, P = 0.001). Finally, the supervised education group achieved an educational effect (knowledge level, 95.3 ± 4.7) similar to that of the physician education group (95.6 ± 5.3), whereas the free education group had a substantially lower knowledge level (82.1 ± 12.3, P < 0.001).

Conclusion: Patient education by ChatGPT with pre-structured questions achieved an educational effect on KOA comparable to that of education by physicians. Free (unstructured) patient education with ChatGPT should be approached with caution at this stage, given the lower resulting knowledge level and the potentially lower quality of answers.

Source journal

CiteScore: 17.70
Self-citation rate: 3.30%
Review time: 6-12 weeks

Journal description: The International Journal of Surgery (IJS) has a broad scope, encompassing all surgical specialties. Its primary objective is to facilitate the exchange of crucial ideas and lines of thought between and across these specialties. By doing so, the journal aims to counter the growing trend of increasing sub-specialization, which can result in "tunnel vision" and the isolation of significant surgical advancements within specific specialties.